[ { "msg_contents": "Hello everybody,\n\n\n\nI have trouble with my table that has four columns which their data types\nare text, JSON, boolean and timestamp.\n\nAlso, I have 1K rows, but my JSON column size approximately 110KB and maybe\nover it.\n\nWhen I select all the data from my table, it takes 600 seconds.\n\nBut I explain my query;\n\n\n\n\n\n*Seq Scan on zamazin (cost=0.00..21.77 rows=1077 width=49) (actual\ntime=0.004..0.112 rows=1077 loops=1)*\n\n*Planning time: 0.013 ms*\n\n*Execution time: 0.194 ms*\n\n\n\n\n\nWhen I investigated why these execution times are so different, I find a\nnew storage logic like TOAST.\n\nI overlook some details on TOAST logic and increased some config like\nshared_buffers, work_mem, maintenance_work_mem, max_file_per_process.\n\nBut there was no performance improvement on my query.\n\n\n\nI do not understand why it happens. My table size is 168 MB, but my TOAST\ntable size that is related to that table, is 123 MB.\n\n\n\n*My environment is;*\n\nPostgreSQL 9.4.1\n\nWindows Server 2012 R2\n\n16 GB RAM\n\n100 GB HardDisk (Not SSD)\n\nMy database size 20 GB.\n\n\n\n*My server configuration ;*\n\nShared_buffers: 8GB\n\n\n\n( If I understand correctly, PostgreSQL says, For 9.4 The useful range for\nshared_buffers on Windows systems is generally from 64MB to 512MB. Link:\nhttps://www.postgresql.org/docs/9.4/runtime-config-resource.html )\n\n\n\nwork_mem : 512 MB\n\nmaintenance_work_mem: 1GB\n\nmax_file_per_process: 10000\n\neffective_cache_size: 8GB\n\n\n\nHow I can achieve good performance?\n\nRegards,\n\nMustafa BÜYÜKSOY\n\nHello everybody,\n \nI have trouble with my table that has four columns which\ntheir data types are text, JSON, boolean and timestamp.\nAlso, I have 1K rows, but my JSON column size approximately\n110KB and maybe over it. \nWhen I select all the data from my table, it takes 600 seconds.\nBut I explain my query;\n \n \n\n\n\nSeq Scan on zamazin  (cost=0.00..21.77 rows=1077\n width=49) (actual time=0.004..0.112 rows=1077 loops=1)\nPlanning time: 0.013 ms\nExecution time: 0.194 ms\n\n\n\n \n \nWhen I investigated why these execution times are so\ndifferent, I find a new storage logic like TOAST.\nI overlook some details on TOAST logic and  increased\nsome config like shared_buffers, work_mem, maintenance_work_mem,\nmax_file_per_process.\nBut there was no performance improvement on my query.\n \nI do not understand why it happens. My table size is 168 MB,\nbut my TOAST table size that is related to that table,  is 123 MB.\n \nMy environment is;\n\n\n\nPostgreSQL 9.4.1\n\n\n\n\nWindows\n Server 2012 R2 \n\n\n\n\n16 GB RAM\n\n\n\n\n100 GB\n HardDisk (Not SSD) \n\n\n\n\nMy\n database size 20 GB.\n\n\n\n \nMy\nserver configuration ;\n\n\n\nShared_buffers:\n 8GB \n \n( If I\n understand correctly, PostgreSQL says, For 9.4 The useful range for shared_buffers on Windows systems is generally from 64MB to 512MB. Link: https://www.postgresql.org/docs/9.4/runtime-config-resource.html\n)\n \n\n\n\n\nwork_mem : 512 MB\n\n\n\n\nmaintenance_work_mem: 1GB\n\n\n\n\nmax_file_per_process: 10000\n\n\n\n\neffective_cache_size: 8GB\n\n\n\n \nHow I can\nachieve good performance?\nRegards,\nMustafa\nBÜYÜKSOY", "msg_date": "Fri, 7 Feb 2020 16:07:28 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": true, "msg_subject": "TOAST table performance problem" }, { "msg_contents": "På fredag 07. februar 2020 kl. 
14:07:28, skrev Asya Nevra Buyuksoy <\[email protected] <mailto:[email protected]>>: \n\nHello everybody,\n\n[...]\n\nHow I can achieve good performance?\n\n\nNobody here understands anything unless you show the exact query and schema... \n\nAnd of course you'll be encurraged to upgrade to latest version (12.1) as \n9.4.1 is now 5 years old.. \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Fri, 7 Feb 2020 14:12:39 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: TOAST table performance problem" }, { "msg_contents": "Sorry for the misunderstanding.\nI have a table like;\nCREATE TABLE zamazin\n(\n paramuser_id text,\n paramperson_id integer,\n paramdata json,\n paramisdeleted boolean,\n paramactiontime timestamp without time zone\n)\nparamdata row size is 110KB and over.\n\nWhen I execute this query like;\n*select * from zamazin*\nit takes *600 seconds*.\nBut when analyze the query ;\n\n\n\n\n*\"Seq Scan on public.zamazin (cost=0.00..21.77 rows=1077 width=49) (actual\ntime=0.008..0.151 rows=1077 loops=1)\"\" Output: paramuser_id,\nparamperson_id, paramdata, paramisdeleted, paramactiontime\"\" Buffers:\nshared hit=11\"\"Planning time: 0.032 ms\"\"Execution time: 0.236 ms\"*\n Why the query takes a long time, I do not understand. I assume that this\nrelates to the TOAST structure.\n\npng.png\n(11K)\n<https://mail.google.com/mail/u/0?ui=2&ik=887fea0f99&attid=0.1&permmsgid=msg-a:r3734143643823656667&view=att&disp=safe&realattid=f_k6c77tsa1>\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 16:12 tarihinde\nşunu yazdı:\n\n> På fredag 07. februar 2020 kl. 14:07:28, skrev Asya Nevra Buyuksoy <\n> [email protected]>:\n>\n> Hello everybody,\n>\n> [...]\n>\n> How I can achieve good performance?\n>\n>\n> Nobody here understands anything unless you show the exact query and\n> schema...\n>\n> And of course you'll be encurraged to upgrade to latest version (12.1) as\n> 9.4.1 is now 5 years old..\n>\n> --\n> Andreas Joseph Krogh\n>", "msg_date": "Fri, 7 Feb 2020 16:23:35 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": ">\n> Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 16:12\n> tarihinde şunu yazdı:\n>\n>> På fredag 07. februar 2020 kl. 14:07:28, skrev Asya Nevra Buyuksoy <\n>> [email protected]>:\n>>\n>>\n>>\n>> *[...]*\n>>\n>> *And of course you'll be encurraged to upgrade to latest version (12.1)\n>> as 9.4.1 is now 5 years old..*\n>> You are right but for now I have to use this version :)\n>>\n> --\n>> Andreas Joseph Krogh\n>>\n>\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 16:12 tarihinde şunu yazdı:På fredag 07. februar 2020 kl. 14:07:28, skrev Asya Nevra Buyuksoy <[email protected]>:\n\n\n\n\n\n \n[...]\n \nAnd of course you'll be encurraged to upgrade to latest version (12.1) as 9.4.1 is now 5 years old..\n  You are right but for now I have to use this version :) \n\n--\nAndreas Joseph Krogh", "msg_date": "Fri, 7 Feb 2020 16:34:34 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "På fredag 07. februar 2020 kl. 14:23:35, skrev Asya Nevra Buyuksoy <\[email protected] <mailto:[email protected]>>: \n\nSorry for the misunderstanding. 
\nI have a table like; \nCREATE TABLE zamazin\n (\n paramuser_id text,\n paramperson_id integer,\n paramdata json,\n paramisdeleted boolean,\n paramactiontime timestamp without time zone\n ) \nparamdata row size is 110KB and over. \n\nWhen I execute this query like;\nselect * from zamazin \nit takes 600 seconds. \nBut when analyze the query ; \n\"Seq Scan on public.zamazin (cost=0.00..21.77 rows=1077 width=49) (actual \ntime=0.008..0.151 rows=1077 loops=1)\"\n \" Output: paramuser_id, paramperson_id, paramdata, paramisdeleted, \nparamactiontime\"\n \" Buffers: shared hit=11\"\n \"Planning time: 0.032 ms\"\n \"Execution time: 0.236 ms\" \n Why the query takes a long time, I do not understand. I assume that this \nrelates to the TOAST structure. \n\nMy guess is the time is spent in the client retrieving the data, not in the DB \nitself. Are you on a slow network? \n\n\n-- \nAndreas Joseph Krogh \nCTO / Partner - Visena AS \nMobile: +47 909 56 963 \[email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> \n <https://www.visena.com>", "msg_date": "Fri, 7 Feb 2020 14:41:14 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "Yes, I would concur that this planning time and execution time do not \ntake into account the network time sending the data back to the client, \nespecially since your are sending back the entire contents of the table.\n\nRegards,\nMichael Vitale\n\nAndreas Joseph Krogh wrote on 2/7/2020 8:41 AM:\n> På fredag 07. februar 2020 kl. 14:23:35, skrev Asya Nevra Buyuksoy \n> <[email protected] <mailto:[email protected]>>:\n>\n> Sorry for the misunderstanding.\n> I have a table like;\n> CREATE TABLE zamazin\n> (\n>   paramuser_id text,\n>   paramperson_id integer,\n>   paramdata json,\n>   paramisdeleted boolean,\n>   paramactiontime timestamp without time zone\n> )\n> paramdata row size is 110KB and over.\n> When I execute this query like;\n> *select * from zamazin*\n> it takes *600 seconds*.\n> But when analyze the query ;\n> *\"Seq Scan on public.zamazin  (cost=0.00..21.77 rows=1077\n> width=49) (actual time=0.008..0.151 rows=1077 loops=1)\"\n> \"  Output: paramuser_id, paramperson_id, paramdata,\n> paramisdeleted, paramactiontime\"\n> \"  Buffers: shared hit=11\"\n> \"Planning time: 0.032 ms\"\n> \"Execution time: 0.236 ms\"*\n>  Why the query takes a long time, I do not understand. I\n> assume that this relates to the TOAST structure.\n>\n> My guess is the time is spent in the /client/ retrieving the data, not \n> in the DB itself. Are you on a slow network?\n> -- \n> *Andreas Joseph Krogh*\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected] <mailto:[email protected]>\n> www.visena.com <https://www.visena.com>\n> <https://www.visena.com>", "msg_date": "Fri, 7 Feb 2020 08:52:26 -0500", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": ">\n>\n> Andreas Joseph Krogh wrote on 2/7/2020 8:41 AM:\n>\n> På fredag 07. februar 2020 kl. 14:23:35, skrev Asya Nevra Buyuksoy <\n> [email protected]>:\n>\n>\n> My guess is the time is spent in the *client* retrieving the data, not in\n> the DB itself. Are you on a slow network?\n>\n> It works in my local area and speed is 1 Gbps. When I use\nanother local computer that has SSD disk the query execution time reduced\nto 12 seconds. 
But this query has to execute my local computer.", "msg_date": "Fri, 7 Feb 2020 17:16:13 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "På fredag 07. februar 2020 kl. 15:16:13, skrev Asya Nevra Buyuksoy <\[email protected] <mailto:[email protected]>>: \n\n\nAndreas Joseph Krogh wrote on 2/7/2020 8:41 AM: \nPå fredag 07. februar 2020 kl. 14:23:35, skrev Asya Nevra Buyuksoy <\[email protected] <mailto:[email protected]>>: \n\n\nMy guess is the time is spent in the client retrieving the data, not in the DB \nitself. Are you on a slow network? \n It works in my local area and speed is 1 Gbps. When I use another local \ncomputer that has SSD disk the query execution time reduced to 12 seconds. But \nthis query has to execute my local computer. \n\nWhat client are you using? \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Fri, 7 Feb 2020 15:19:19 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TOAST table performance problem" } ]
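A detail worth spelling out from the thread above: EXPLAIN (ANALYZE) only measures server-side execution and never ships the result rows to the client (it typically does not even need to detoast the JSON values), so the 0.2 ms figure excludes exactly the work that dominates here — fetching roughly 110 kB of TOASTed JSON per row, sending it over the network and rendering it in the client. A rough way to see how much data the query really has to deliver, using the zamazin table from the thread:

SELECT pg_size_pretty(sum(octet_length(paramdata::text))) AS payload
FROM zamazin;

With 1077 rows of ~110 kB each this is on the order of 100 MB, consistent with the 123 MB TOAST table mentioned earlier, which is why the wall-clock time is dominated by transfer and client-side handling rather than by the scan itself.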
[ { "msg_contents": "På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <\[email protected] <mailto:[email protected]>>: \nI use pgadmin3. \n\nTry \"psql\", it has the lowest overhead (I think). pgAdmin might use time \npresenting the results etc. which is easy to overlook. \n\n\n-- \nAndreas Joseph Krogh", "msg_date": "Fri, 7 Feb 2020 15:42:43 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "I try it, but there is no enhancement.\nI read this link is about TOAST and also its sub_links;\nhttps://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683\nWhen I execute this query, except JSON data like;\nSELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime\n FROM zamazin;\nIt takes 94 ms. :)\n\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42 tarihinde\nşunu yazdı:\n\n> På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <\n> [email protected]>:\n>\n> I use pgadmin3.\n>\n>\n> Try \"psql\", it has the lowest overhead (I think). pgAdmin might use time\n> presenting the results etc. which is easy to overlook.\n>\n> --\n> Andreas Joseph Krogh\n> ​\n>\n\nI try it, but there is no enhancement.  I read this link is about TOAST and also its sub_links;https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683  When I execute this query, except JSON data like;SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime  FROM zamazin;It takes 94 ms. :)Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42 tarihinde şunu yazdı:På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <[email protected]>:\n\nI use pgadmin3.\n\n \nTry \"psql\", it has the lowest overhead (I think). pgAdmin might use time presenting the results etc. which is easy to overlook.\n \n\n--\nAndreas Joseph Krogh\n​", "msg_date": "Fri, 7 Feb 2020 17:59:05 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "Try \\o <filename> in psql, to redirect the output to file, and prevent it from processing the json (ie. format it)\n\nDen 7. februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy <[email protected]>:\n>I try it, but there is no enhancement.\n>I read this link is about TOAST and also its sub_links;\n>https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683\n>When I execute this query, except JSON data like;\n>SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime\n> FROM zamazin;\n>It takes 94 ms. :)\n>\n>\n>Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42 tarihinde\n>şunu yazdı:\n>\n>> På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <\n>> [email protected]>:\n>>\n>> I use pgadmin3.\n>>\n>>\n>> Try \"psql\", it has the lowest overhead (I think). pgAdmin might use time\n>> presenting the results etc. which is easy to overlook.\n>>\n>> --\n>> Andreas Joseph Krogh\n>> ​\n>>\n\n-- \nSendt fra min Android-enhet med K-9 e-post. Unnskyld min kortfattethet.\nTry \\o <filename> in psql, to redirect the output to file, and prevent it from processing the json (ie. format it)Den 7. februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy <[email protected]>:\nI try it, but there is no enhancement.  
I read this link is about TOAST and also its sub_links;https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683  When I execute this query, except JSON data like;SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime  FROM zamazin;It takes 94 ms. :)Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42 tarihinde şunu yazdı:På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <[email protected]>:\n\nI use pgadmin3.\n\n \nTry \"psql\", it has the lowest overhead (I think). pgAdmin might use time presenting the results etc. which is easy to overlook.\n \n\n--\nAndreas Joseph Krogh\n​\n\n\n-- Sendt fra min Android-enhet med K-9 e-post. Unnskyld min kortfattethet.", "msg_date": "Fri, 07 Feb 2020 16:15:48 +0100", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TOAST table performance problem" }, { "msg_contents": "---------- Forwarded message ---------\nGönderen: Asya Nevra Buyuksoy <[email protected]>\nDate: 10 Şub 2020 Pzt, 10:51\nSubject: Re: TOAST table performance problem\nTo: Andreas Joseph Krogh <[email protected]>\n\n\nI copied my data to the CSV file, yes it is very fast. However, this does\nnot solve my problem.\nAfter deserializing the on the front side, I want to visualize my data on\nthe web page effectively.\nWhen I select my data one by one with a limit clause, the query executes\n200 ms. For example, If I create a function that takes data with a loop,\nthe execution time will be 200 ms*1000=200 sec.\n\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 18:15 tarihinde\nşunu yazdı:\n\n> Try \\o <filename> in psql, to redirect the output to file, and prevent it\n> from processing the json (ie. format it)\n>\n> Den 7. februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy <\n> [email protected]>:\n>>\n>> I try it, but there is no enhancement.\n>> I read this link is about TOAST and also its sub_links;\n>> https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683\n>> When I execute this query, except JSON data like;\n>> SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime\n>> FROM zamazin;\n>> It takes 94 ms. :)\n>>\n>>\n>> Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42\n>> tarihinde şunu yazdı:\n>>\n>>> På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <\n>>> [email protected]>:\n>>>\n>>> I use pgadmin3.\n>>>\n>>>\n>>> Try \"psql\", it has the lowest overhead (I think). pgAdmin might use time\n>>> presenting the results etc. which is easy to overlook.\n>>>\n>>> --\n>>> Andreas Joseph Krogh\n>>>\n>>\n> --\n> Sendt fra min Android-enhet med K-9 e-post. Unnskyld min kortfattethet.\n>\n\n---------- Forwarded message ---------Gönderen: Asya Nevra Buyuksoy <[email protected]>Date: 10 Şub 2020 Pzt, 10:51Subject: Re: TOAST table performance problemTo: Andreas Joseph Krogh <[email protected]>I copied my data to the CSV file, yes it is very fast. However, this does not solve my problem.After deserializing the on the front side, I want to visualize my data on the web page effectively.   When I select my data one by one with a limit clause, the query executes 200 ms. For example, If I create a function that takes data with a loop, the execution time will be 200 ms*1000=200 sec. Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 18:15 tarihinde şunu yazdı:Try \\o <filename> in psql, to redirect the output to file, and prevent it from processing the json (ie. format it)Den 7. 
februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy <[email protected]>:\nI try it, but there is no enhancement.  I read this link is about TOAST and also its sub_links;https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683  When I execute this query, except JSON data like;SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime  FROM zamazin;It takes 94 ms. :)Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42 tarihinde şunu yazdı:På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <[email protected]>:\n\nI use pgadmin3.\n\n \nTry \"psql\", it has the lowest overhead (I think). pgAdmin might use time presenting the results etc. which is easy to overlook.\n \n\n--\nAndreas Joseph Krogh\n\n\n\n-- Sendt fra min Android-enhet med K-9 e-post. Unnskyld min kortfattethet.", "msg_date": "Mon, 10 Feb 2020 10:52:21 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": false, "msg_subject": "Fwd: TOAST table performance problem" }, { "msg_contents": "> ---------- Forwarded message ---------\n> Gönderen: *Asya Nevra Buyuksoy* <[email protected] \n> <mailto:[email protected]>>\n> Date: 10 Şub 2020 Pzt, 10:51\n> Subject: Re: TOAST table performance problem\n> To: Andreas Joseph Krogh <[email protected] <mailto:[email protected]>>\n>\n>\n> I copied my data to the CSV file, yes it is very fast. However, this \n> does not solve my problem.\n> After deserializing the on the front side, I want to visualize my data \n> on the web page effectively.\n> When I select my data one by one with a limit clause, the query \n> executes 200 ms. For example, If I create a function that takes data \n> with a loop, the execution time will be 200 ms*1000=200 sec.\n>\n> Andreas Joseph Krogh <[email protected] <mailto:[email protected]>>, \n> 7 Şub 2020 Cum, 18:15 tarihinde şunu yazdı:\n>\n> Try \\o <filename> in psql, to redirect the output to file, and\n> prevent it from processing the json (ie. format it)\n>\n> Den 7. februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy\n> <[email protected] <mailto:[email protected]>>:\n>\n> I try it, but there is no enhancement.\n> I read this link is about TOAST and also its sub_links;\n> https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683\n>\n> When I execute this query, except JSON data like;\n> SELECT paramuser_id, paramperson_id, paramisdeleted,\n> paramactiontime\n>   FROM zamazin;\n> It takes 94 ms. :)\n>\n>\n> Andreas Joseph Krogh <[email protected]\n> <mailto:[email protected]>>, 7 Şub 2020 Cum, 17:42 tarihinde\n> şunu yazdı:\n>\n> På fredag 07. februar 2020 kl. 15:35:04, skrev Asya Nevra\n> Buyuksoy <[email protected] <mailto:[email protected]>>:\n>\n> I use pgadmin3.\n>\n> Try \"psql\", it has the lowest overhead (I think). pgAdmin\n> might use time presenting the results etc. which is easy\n> to overlook.\n> -- \n> Andreas Joseph Krogh\n>\n>\n> -- \n> Sendt fra min Android-enhet med K-9 e-post. Unnskyld min\n> kortfattethet.\n>\n\nWhat Andreas is trying to say is that it's not PostgreSQL that is slow \nto read the JSON, but your client app that is slow to parse it.\n\n\n\n\n\n\n\n---------- Forwarded message ---------\n\nGönderen: Asya Nevra Buyuksoy\n<[email protected]>\n Date: 10 Şub 2020 Pzt, 10:51\n Subject: Re: TOAST table performance problem\n To: Andreas Joseph Krogh <[email protected]>\n\n\n\n\nI copied my data to the CSV\n file, yes it is very fast. 
However, this does not solve\n my problem.\nAfter\n deserializing the on the front side, I want to visualize\n my data on the web page effectively.   \n When I select my data one by one with a limit clause,\n the query executes 200 ms. For example, If I create a\n function that takes data with a loop, the execution time\n will be 200 ms*1000=200 sec.\n \n\n\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub\n 2020 Cum, 18:15 tarihinde şunu yazdı:\n\n\nTry \\o <filename> in psql, to redirect the\n output to file, and prevent it from processing the json\n (ie. format it)\n\nDen 7. februar 2020 15:59:05\n CET, skrev Asya Nevra Buyuksoy <[email protected]>:\n \nI try it, but there is no\n enhancement.  \nI read this link is about TOAST and also its\n sub_links;\nhttps://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683  \n\nWhen I execute this query, except JSON data\n like;\nSELECT paramuser_id, paramperson_id,\n paramisdeleted, paramactiontime\n   FROM zamazin;\n\nIt takes 94 ms. :)\n\n\n\n\n\nAndreas Joseph\n Krogh <[email protected]>,\n 7 Şub 2020 Cum, 17:42 tarihinde şunu yazdı:\n\n\nPå fredag 07. februar 2020 kl. 15:35:04,\n skrev Asya Nevra Buyuksoy <[email protected]>:\n\nI use pgadmin3.\n\n \nTry \"psql\", it has the lowest overhead (I\n think). pgAdmin might use time presenting the\n results etc. which is easy to overlook.\n \n\n--\n Andreas Joseph Krogh\n\n\n\n\n\n\n\n\n -- \n Sendt fra min Android-enhet med K-9 e-post. Unnskyld min\n kortfattethet.\n\n\n\n\n\n\n What Andreas is trying to say is that it's not PostgreSQL that is\n slow to read the JSON, but your client app that is slow to parse it.", "msg_date": "Mon, 10 Feb 2020 09:31:30 -0300", "msg_from": "=?UTF-8?Q?Lu=c3=ads_Roberto_Weck?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: TOAST table performance problem" }, { "msg_contents": "Got it, thanks! I understand and know it that PostgreSQL is not slow, but I\nwant to a piece of advice how can I load this data fastly :)\n\nLuís Roberto Weck <[email protected]>, 10 Şub 2020 Pzt, 15:31\ntarihinde şunu yazdı:\n\n> ---------- Forwarded message ---------\n> Gönderen: Asya Nevra Buyuksoy <[email protected]>\n> Date: 10 Şub 2020 Pzt, 10:51\n> Subject: Re: TOAST table performance problem\n> To: Andreas Joseph Krogh <[email protected]>\n>\n>\n> I copied my data to the CSV file, yes it is very fast. However, this does\n> not solve my problem.\n> After deserializing the on the front side, I want to visualize my data on\n> the web page effectively.\n> When I select my data one by one with a limit clause, the query executes\n> 200 ms. For example, If I create a function that takes data with a loop,\n> the execution time will be 200 ms*1000=200 sec.\n>\n>\n> Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 18:15\n> tarihinde şunu yazdı:\n>\n>> Try \\o <filename> in psql, to redirect the output to file, and prevent it\n>> from processing the json (ie. format it)\n>>\n>> Den 7. februar 2020 15:59:05 CET, skrev Asya Nevra Buyuksoy <\n>> [email protected]>:\n>>>\n>>> I try it, but there is no enhancement.\n>>> I read this link is about TOAST and also its sub_links;\n>>> https://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683\n>>> When I execute this query, except JSON data like;\n>>> SELECT paramuser_id, paramperson_id, paramisdeleted, paramactiontime\n>>> FROM zamazin;\n>>> It takes 94 ms. :)\n>>>\n>>>\n>>> Andreas Joseph Krogh <[email protected]>, 7 Şub 2020 Cum, 17:42\n>>> tarihinde şunu yazdı:\n>>>\n>>>> På fredag 07. 
februar 2020 kl. 15:35:04, skrev Asya Nevra Buyuksoy <\n>>>> [email protected]>:\n>>>>\n>>>> I use pgadmin3.\n>>>>\n>>>>\n>>>> Try \"psql\", it has the lowest overhead (I think). pgAdmin might use\n>>>> time presenting the results etc. which is easy to overlook.\n>>>>\n>>>> --\n>>>> Andreas Joseph Krogh\n>>>>\n>>>\n>> --\n>> Sendt fra min Android-enhet med K-9 e-post. Unnskyld min kortfattethet.\n>>\n>\n> What Andreas is trying to say is that it's not PostgreSQL that is slow to\n> read the JSON, but your client app that is slow to parse it.\n>\n\nGot it, thanks! I understand and know it that PostgreSQL is not slow, but I want to a piece of advice how can I load this data fastly :)Luís Roberto Weck <[email protected]>, 10 Şub 2020 Pzt, 15:31 tarihinde şunu yazdı:\n\n\n---------- Forwarded message ---------\n\nGönderen: Asya Nevra Buyuksoy\n<[email protected]>\n Date: 10 Şub 2020 Pzt, 10:51\n Subject: Re: TOAST table performance problem\n To: Andreas Joseph Krogh <[email protected]>\n\n\n\n\nI copied my data to the CSV\n file, yes it is very fast. However, this does not solve\n my problem.\nAfter\n deserializing the on the front side, I want to visualize\n my data on the web page effectively.   \n When I select my data one by one with a limit clause,\n the query executes 200 ms. For example, If I create a\n function that takes data with a loop, the execution time\n will be 200 ms*1000=200 sec.\n \n\n\n\nAndreas Joseph Krogh <[email protected]>, 7 Şub\n 2020 Cum, 18:15 tarihinde şunu yazdı:\n\n\nTry \\o <filename> in psql, to redirect the\n output to file, and prevent it from processing the json\n (ie. format it)\n\nDen 7. februar 2020 15:59:05\n CET, skrev Asya Nevra Buyuksoy <[email protected]>:\n \nI try it, but there is no\n enhancement.  \nI read this link is about TOAST and also its\n sub_links;\nhttps://blog.gojekengineering.com/a-toast-from-postgresql-83b83d0d0683  \n\nWhen I execute this query, except JSON data\n like;\nSELECT paramuser_id, paramperson_id,\n paramisdeleted, paramactiontime\n   FROM zamazin;\n\nIt takes 94 ms. :)\n\n\n\n\n\nAndreas Joseph\n Krogh <[email protected]>,\n 7 Şub 2020 Cum, 17:42 tarihinde şunu yazdı:\n\n\nPå fredag 07. februar 2020 kl. 15:35:04,\n skrev Asya Nevra Buyuksoy <[email protected]>:\n\nI use pgadmin3.\n\n \nTry \"psql\", it has the lowest overhead (I\n think). pgAdmin might use time presenting the\n results etc. which is easy to overlook.\n \n\n--\n Andreas Joseph Krogh\n\n\n\n\n\n\n\n\n -- \n Sendt fra min Android-enhet med K-9 e-post. Unnskyld min\n kortfattethet.\n\n\n\n\n\n\n What Andreas is trying to say is that it's not PostgreSQL that is\n slow to read the JSON, but your client app that is slow to parse it.", "msg_date": "Mon, 10 Feb 2020 15:38:17 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: TOAST table performance problem" }, { "msg_contents": "On Mon, Feb 10, 2020 at 7:38 AM Asya Nevra Buyuksoy <[email protected]>\nwrote:\n\n> Got it, thanks! I understand and know it that PostgreSQL is not slow, but\n> I want to a piece of advice how can I load this data fastly :)\n>\n\nYou haven't told us anything about your client, so what advice can we\noffer? Unless the bottleneck is in the libpq library, this is probably not\nthe right place to ask about it anyway.\n\nCheers,\n\nJeff\n\nOn Mon, Feb 10, 2020 at 7:38 AM Asya Nevra Buyuksoy <[email protected]> wrote:Got it, thanks! 
I understand and know it that PostgreSQL is not slow, but I want to a piece of advice how can I load this data fastly :)You haven't told us anything about your client, so what advice can we offer?  Unless the bottleneck is in the libpq library, this is probably not the right place to ask about it anyway.Cheers,Jeff", "msg_date": "Mon, 10 Feb 2020 07:53:57 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: TOAST table performance problem" }, { "msg_contents": "Ok, you are right. Thanks for everything.\n\nJeff Janes <[email protected]>, 10 Şub 2020 Pzt, 15:54 tarihinde şunu\nyazdı:\n\n> On Mon, Feb 10, 2020 at 7:38 AM Asya Nevra Buyuksoy <[email protected]>\n> wrote:\n>\n>> Got it, thanks! I understand and know it that PostgreSQL is not slow, but\n>> I want to a piece of advice how can I load this data fastly :)\n>>\n>\n> You haven't told us anything about your client, so what advice can we\n> offer? Unless the bottleneck is in the libpq library, this is probably not\n> the right place to ask about it anyway.\n>\n> Cheers,\n>\n> Jeff\n>\n\n  Ok, you are right. Thanks for everything.  Jeff Janes <[email protected]>, 10 Şub 2020 Pzt, 15:54 tarihinde şunu yazdı:On Mon, Feb 10, 2020 at 7:38 AM Asya Nevra Buyuksoy <[email protected]> wrote:Got it, thanks! I understand and know it that PostgreSQL is not slow, but I want to a piece of advice how can I load this data fastly :)You haven't told us anything about your client, so what advice can we offer?  Unless the bottleneck is in the libpq library, this is probably not the right place to ask about it anyway.Cheers,Jeff", "msg_date": "Mon, 10 Feb 2020 15:59:42 +0300", "msg_from": "Asya Nevra Buyuksoy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: TOAST table performance problem" } ]
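Since the remaining question is how to get this data onto a web page quickly, the usual approach is to cut down what has to be detoasted, transferred and parsed per request. If the page only needs a handful of fields out of each 110 kB document, extracting them server-side keeps the payload small; the key names below are made up for illustration, only the column names come from the thread:

SELECT paramuser_id,
       paramdata->>'title'  AS title,    -- hypothetical key
       paramdata->'summary' AS summary   -- hypothetical key
FROM zamazin
ORDER BY paramactiontime DESC
LIMIT 50;

Each ->/->> step still detoasts the full value on the server, but only the extracted pieces cross the network and have to be deserialized by the client, which is where the 600 seconds were being spent.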
[ { "msg_contents": "Hi,\n\nI have a large table of immutable events that need to be aggregated\nregularly to derive statistics. To improve the performance, that table is\nrolled up every 15minutes, so that online checks can aggregate rolled up\ndata and combine it with latest events created after the last roll up.\n\nTo implement this a query is executed that selects only events after the\ntime of the last rollup.\nThat time is determined dynamically based on a log table.\n\nWhen using a sub select or CTE to get the latest roll up time, the query\nplanner fails to recognize that a most of the large table would be filtered\nout by the condition and tries a sequential scan instead of an index scan.\nWhen using the literal value for the WHERE condition, the plan correctly\nuses an index scan, which is much faster.\n\nI analyzed the involved tables and increased the collected histogram, but\nthe query plan did not improve. Is there a way to help the query planner\nrecognize this in the dynamic case?\n\nBest Regards\nChris\n\n==== Original query with a CTE to get the timestamp to filter on\n\nhttps://explain.depesz.com/s/Hsix\n\nEXPLAIN (ANALYZE, BUFFERS) WITH current_rollup AS (\n SELECT COALESCE(MAX(window_end), '-infinity') AS cutoff\n FROM exchange.ledger_zerosum_rollup\n)\nSELECT *\nFROM exchange.ledger\nWHERE created > (SELECT cutoff FROM current_rollup);\n\n==== Query with literal value\n\nhttps://explain.depesz.com/s/ULAq\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT *\nFROM exchange.ledger\nWHERE created > '2020-02-10T08:54:39.857789Z';\n\nHi,I have a large table of immutable events that need to be aggregated regularly to derive statistics. To improve the performance, that table is rolled up every 15minutes, so that online checks can aggregate rolled up data and combine it with latest events created after the last roll up.To implement this a query is executed that selects only events after the time of the last rollup.That time is determined dynamically based on a log table.When using a sub select or CTE to get the latest roll up time, the query planner fails to recognize that a most of the large table would be filtered out by the condition and tries a sequential scan instead of an index scan.When using the literal value for the WHERE condition, the plan correctly uses an index scan, which is much faster.I analyzed the involved tables and increased the collected histogram, but the query plan did not improve. 
Is there a way to help the query planner recognize this in the dynamic case?Best RegardsChris==== Original query with a CTE to get the timestamp to filter on https://explain.depesz.com/s/HsixEXPLAIN (ANALYZE, BUFFERS) WITH current_rollup AS (    SELECT COALESCE(MAX(window_end), '-infinity') AS cutoff    FROM exchange.ledger_zerosum_rollup)SELECT *FROM exchange.ledgerWHERE created > (SELECT cutoff FROM current_rollup);==== Query with literal valuehttps://explain.depesz.com/s/ULAqEXPLAIN (ANALYZE, BUFFERS)SELECT *FROM exchange.ledgerWHERE created > '2020-02-10T08:54:39.857789Z';", "msg_date": "Mon, 10 Feb 2020 11:34:01 +0100", "msg_from": "Chris Borckholder <[email protected]>", "msg_from_op": true, "msg_subject": "Bad selectivity estimate when using a sub query to determine WHERE\n condition" }, { "msg_contents": "Chris Borckholder <[email protected]> writes:\n> When using a sub select or CTE to get the latest roll up time, the query\n> planner fails to recognize that a most of the large table would be filtered\n> out by the condition and tries a sequential scan instead of an index scan.\n> When using the literal value for the WHERE condition, the plan correctly\n> uses an index scan, which is much faster.\n\nYeah, a scalar sub-select is pretty much a black box to the planner.\n\n> EXPLAIN (ANALYZE, BUFFERS) WITH current_rollup AS (\n> SELECT COALESCE(MAX(window_end), '-infinity') AS cutoff\n> FROM exchange.ledger_zerosum_rollup\n> )\n> SELECT *\n> FROM exchange.ledger\n> WHERE created > (SELECT cutoff FROM current_rollup);\n\nWell, it's not that hard to get rid of that scalar sub-select: since\nyou're already relying on current_rollup to produce exactly one row,\nyou could write a plain join instead, something like\n\nWITH current_rollup AS ...\nSELECT l.*\nFROM exchange.ledger l, current_rollup c\nWHERE l.created > c.cutoff;\n\nUnfortunately I doubt that will improve matters much, since the\nplanner also knows relatively little about MAX() and nothing about\nCOALESCE, so it's not going to be able to estimate what comes out\nof the WITH. I think you're going to have to cheat a bit.\n\nThe form of cheating that comes to mind is to wrap the sub-select\nin a function that's marked STABLE:\n\ncreate function current_rollup_cutoff() returns timestamp -- or whatever\nstable language sql as $$\nSELECT COALESCE(MAX(window_end), '-infinity') AS cutoff\nFROM exchange.ledger_zerosum_rollup\n$$;\n\nSELECT *\nFROM exchange.ledger\nWHERE created > current_rollup_cutoff();\n\nI have not actually tried this, but I think that since the function is\nmarked stable, the planner would test-run it to get an estimated value,\nand then produce a plan similar to what you'd get with a literal constant.\n\nOf course, then it's going to run the function once more when the query is\nexecuted for-real, so this approach doubles the cost of getting the MAX().\nThat shouldn't be too awful if you have an index on window_end, though.\n\nIf you like living dangerously, you could cheat a LOT and mark the\nfunction immutable so that its value gets substituted at plan time.\nBut that will only work for interactive submission of the outer\nquery --- if the plan gets cached and re-used, you'll have a stale\ncutoff value. 
Personally I wouldn't risk that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 10 Feb 2020 10:39:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad selectivity estimate when using a sub query to determine\n WHERE condition" }, { "msg_contents": "On Mon, Feb 10, 2020 at 11:34:01AM +0100, Chris Borckholder wrote:\n> I have a large table of immutable events that need to be aggregated\n> regularly to derive statistics. To improve the performance, that table is\n> rolled up every 15minutes, so that online checks can aggregate rolled up\n> data and combine it with latest events created after the last roll up.\n> \n> To implement this a query is executed that selects only events after the\n> time of the last rollup.\n> That time is determined dynamically based on a log table.\n\nPerhaps that could be done as an indexed column in the large table, rather\nthan querying a 2nd log table.\nPossibly with a partial index on that column: WHERE unprocessed='t'.\n\n> When using a sub select or CTE to get the latest roll up time, the query\n> planner fails to recognize that a most of the large table would be filtered\n> out by the condition and tries a sequential scan instead of an index scan.\n> When using the literal value for the WHERE condition, the plan correctly\n> uses an index scan, which is much faster.\n> \n> I analyzed the involved tables and increased the collected histogram, but\n> the query plan did not improve. Is there a way to help the query planner\n> recognize this in the dynamic case?\n\nAlso, if you used partitioning with pgostgres since v11, then I think most\npartitions would be excluded:\n\nhttps://www.postgresql.org/docs/12/release-12.html\n|Allow partition elimination during query execution (David Rowley, Beena Emerson)\n|Previously, partition elimination only happened at planning time, meaning many joins and prepared queries could not use partition elimination.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=499be013de65242235ebdde06adb08db887f0ea5\n\nhttps://www.postgresql.org/about/featurematrix/detail/332/\n\nJustin\n\n\n", "msg_date": "Mon, 10 Feb 2020 13:13:04 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad selectivity estimate when using a sub query to determine\n WHERE condition" }, { "msg_contents": "Using a column to mark rolled up rows might have been a better choice, but\nthere are unfortunately some regulatory requirements\nthat require that table to be immutable. I'm not sure about the\nimplications w.r.t. auto vacuum, which is already a consideration for us\ndue to the sheer size of the table.\n\nI'm planning to partition the table as soon as we finish upgrading to v11.\n\nThanks for your insight!\n\nBest Regards\nChris\n\nOn Mon, Feb 10, 2020 at 8:13 PM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Feb 10, 2020 at 11:34:01AM +0100, Chris Borckholder wrote:\n> > I have a large table of immutable events that need to be aggregated\n> > regularly to derive statistics. 
To improve the performance, that table is\n> > rolled up every 15minutes, so that online checks can aggregate rolled up\n> > data and combine it with latest events created after the last roll up.\n> >\n> > To implement this a query is executed that selects only events after the\n> > time of the last rollup.\n> > That time is determined dynamically based on a log table.\n>\n> Perhaps that could be done as an indexed column in the large table, rather\n> than querying a 2nd log table.\n> Possibly with a partial index on that column: WHERE unprocessed='t'.\n>\n> > When using a sub select or CTE to get the latest roll up time, the query\n> > planner fails to recognize that a most of the large table would be\n> filtered\n> > out by the condition and tries a sequential scan instead of an index\n> scan.\n> > When using the literal value for the WHERE condition, the plan correctly\n> > uses an index scan, which is much faster.\n> >\n> > I analyzed the involved tables and increased the collected histogram, but\n> > the query plan did not improve. Is there a way to help the query planner\n> > recognize this in the dynamic case?\n>\n> Also, if you used partitioning with pgostgres since v11, then I think most\n> partitions would be excluded:\n>\n> https://www.postgresql.org/docs/12/release-12.html\n> |Allow partition elimination during query execution (David Rowley, Beena\n> Emerson)\n> |Previously, partition elimination only happened at planning time, meaning\n> many joins and prepared queries could not use partition elimination.\n>\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=499be013de65242235ebdde06adb08db887f0ea5\n>\n> https://www.postgresql.org/about/featurematrix/detail/332/\n>\n> Justin\n>\n\nUsing a column to mark rolled up rows might have been a better choice, but there are unfortunately some regulatory requirementsthat require that table to be immutable. I'm not sure about the implications w.r.t. auto vacuum, which is already a consideration for us due to the sheer size of the table.I'm planning to partition the table as soon as we finish upgrading to v11.Thanks for your insight!Best RegardsChrisOn Mon, Feb 10, 2020 at 8:13 PM Justin Pryzby <[email protected]> wrote:On Mon, Feb 10, 2020 at 11:34:01AM +0100, Chris Borckholder wrote:\n> I have a large table of immutable events that need to be aggregated\n> regularly to derive statistics. To improve the performance, that table is\n> rolled up every 15minutes, so that online checks can aggregate rolled up\n> data and combine it with latest events created after the last roll up.\n> \n> To implement this a query is executed that selects only events after the\n> time of the last rollup.\n> That time is determined dynamically based on a log table.\n\nPerhaps that could be done as an indexed column in the large table, rather\nthan querying a 2nd log table.\nPossibly with a partial index on that column: WHERE unprocessed='t'.\n\n> When using a sub select or CTE to get the latest roll up time, the query\n> planner fails to recognize that a most of the large table would be filtered\n> out by the condition and tries a sequential scan instead of an index scan.\n> When using the literal value for the WHERE condition, the plan correctly\n> uses an index scan, which is much faster.\n> \n> I analyzed the involved tables and increased the collected histogram, but\n> the query plan did not improve. 
Is there a way to help the query planner\n> recognize this in the dynamic case?\n\nAlso, if you used partitioning with pgostgres since v11, then I think most\npartitions would be excluded:\n\nhttps://www.postgresql.org/docs/12/release-12.html\n|Allow partition elimination during query execution (David Rowley, Beena Emerson)\n|Previously, partition elimination only happened at planning time, meaning many joins and prepared queries could not use partition elimination.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=499be013de65242235ebdde06adb08db887f0ea5\n\nhttps://www.postgresql.org/about/featurematrix/detail/332/\n\nJustin", "msg_date": "Wed, 12 Feb 2020 09:09:25 +0100", "msg_from": "Chris Borckholder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad selectivity estimate when using a sub query to determine\n WHERE condition" }, { "msg_contents": "On Mon, Feb 10, 2020 at 4:39 PM Tom Lane <[email protected]> wrote:\n>\n> Well, it's not that hard to get rid of that scalar sub-select: since\n> you're already relying on current_rollup to produce exactly one row,\n> you could write a plain join instead, something like\n>\n\nUsing a join instead of the sub-select did already help.\n\nEXPLAIN (ANALYZE, BUFFERS ) WITH current_rollup AS (\n SELECT COALESCE(MAX(window_end), '-infinity') AS cutoff\n FROM exchange.ledger_zerosum_rollup\n)\nSELECT *\nFROM exchange.ledger, current_rollup\nWHERE created > current_rollup.cutoff;\n\nhttps://explain.depesz.com/s/Zurb\n\nI'm a bit confused, because the row estimate on the index scan for the\nledger table seems to be way off still,\nbut nonetheless the planner now chooses the index scan.\nMaybe it has more insight into the result of the CTE this way?\nOr picks the index scan because it fits well with the nested loop?\n\n> The form of cheating that comes to mind is to wrap the sub-select\n> in a function that's marked STABLE:\n> create function current_rollup_cutoff() returns timestamp -- or whatever\n> stable language sql as $$\n> SELECT COALESCE(MAX(window_end), '-infinity') AS cutoff\n> FROM exchange.ledger_zerosum_rollup\n> $$;\n> SELECT *\n> FROM exchange.ledger\n> WHERE created > current_rollup_cutoff();\n> I have not actually tried this, but I think that since the function is\n> marked stable, the planner would test-run it to get an estimated value,\n> and then produce a plan similar to what you'd get with a literal constant.\n\nThe version with a function is even better, the query planner now uses\ngood estimates and produces a trivial execution plan.\nI'll go with that one as it seems to be the most future proof approach.\n\nhttps://explain.depesz.com/s/34m8\n\nThanks for your insight!\n\nBest Regards\nChris\n\n\n", "msg_date": "Wed, 12 Feb 2020 09:12:34 +0100", "msg_from": "Chris Borckholder <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad selectivity estimate when using a sub query to determine\n WHERE condition" } ]
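One practical footnote to the STABLE-function workaround above: because the planner test-runs the function and the executor runs it again, the MAX(window_end) lookup is paid twice, which is only cheap if it can be answered from an index, as Tom notes. If ledger_zerosum_rollup does not already have one, a sketch along these lines (assuming no such index exists yet) keeps both evaluations to a quick backwards index scan:

CREATE INDEX ledger_zerosum_rollup_window_end_idx
    ON exchange.ledger_zerosum_rollup (window_end);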
[ { "msg_contents": "Hi\n\nI recently came across a performance problem with a big transaction block,\nwhich doesn't make sense to me and hopefully someone more knowledgeable can\nexplain the reasons and point out a direction for a solution.\n\n-- TL; DR;\n\nUPDATE on a row takes relatively constant amount of time outside a\ntransaction block, but running UPDATE on a single row over and over inside\na transaction gets slower and slower as the number of UPDATE operations\nincreases.\n\nWhy is updating the same row large number of times progressively slower\ninside a transaction? And is there a way to avoid this performance\ndegradation?\n\nI set up a POC repository to demonstrate the problem:\nhttps://github.com/DeadAlready/pg-test\n\n-- Backstory\n\nNeeded to run a large block of operations (a mix of inserts and updates) on\na table. It took a considerable amount of time inside a transaction and was\nabout 10x faster without the transaction. Since I need all the operations\nto run as a single block that can be rolled back this was unsatisfactory.\nThus began my quest to locate the problem. Since the actual data structure\nis complex and involves a bunch of triggers, foreign keys etc it took some\ntime to narrow down, but in the end I found that the structure itself is\nirrelevant. The issue occurs even if you have a single two column table\nwith a handful of rows. The only requirement seems to be that the NR of\nUPDATEs per single row is large. While the update performance inside a\ntransaction starts out faster than outside, the performance starts to\ndegrade from the get go. It really isn't noticeable until about 5k UPDATEs\non a single row. At around 100k UPDATEs it is about 2.5x slower than the\nsame operation outside the transaction block and about 4x slower than at\nthe beginning of the transaction.\n\nThanks,\nKarl\n\nHiI recently came across a performance problem with a big transaction block, which doesn't make sense to me and hopefully someone more knowledgeable can explain the reasons and point out a direction for a solution.-- TL; DR;UPDATE on a row takes relatively constant amount of time outside a transaction block, but running UPDATE on a single row over and over inside a transaction gets slower and slower as the number of UPDATE operations increases.Why is updating the same row large number of times progressively slower inside a transaction? And is there a way to avoid this performance degradation?I set up a POC repository to demonstrate the problem: https://github.com/DeadAlready/pg-test-- BackstoryNeeded to run a large block of operations (a mix of inserts and updates) on a table. It took a considerable amount of time inside a transaction and was about 10x faster without the transaction. Since I need all the operations to run as a single block that can be rolled back this was unsatisfactory. Thus began my quest to locate the problem. Since the actual data structure is complex and involves a bunch of triggers, foreign keys etc it took some time to narrow down, but in the end I found that the structure itself is irrelevant. The issue occurs even if you have a single two column table with a handful of rows. The only requirement seems to be that the NR of UPDATEs per single row is large. While the update performance inside a transaction starts out faster than outside, the performance starts to degrade from the get go. It really isn't noticeable until about 5k UPDATEs on a single row. 
At around 100k UPDATEs it is about 2.5x slower than the same operation outside the transaction block and about 4x slower than at the beginning of the transaction.Thanks,Karl", "msg_date": "Thu, 13 Feb 2020 12:21:17 +0200", "msg_from": "=?UTF-8?B?S2FybCBEw7zDvG5h?= <[email protected]>", "msg_from_op": true, "msg_subject": "How to avoid UPDATE performance degradation in a transaction" }, { "msg_contents": "On Thu, Feb 13, 2020 at 1:42 PM Karl Düüna <[email protected]> wrote:\n\n> It really isn't noticeable until about 5k UPDATEs on a single row.\n>\n\nDon't know why, and never dealt with a scenario where this would even come\nup, but that this doesn't perform well inside a transaction isn't\nsurprising to me. Kinda surprised it works well at all actually. I'd\nprobably try and rework the processing algorithm to create an unlogged\ntemporary table with data from the row's initial state, manipulate until my\nheart's content, then take the final result and update the single live row\nwith the final state.\n\nDavid J.\n\nOn Thu, Feb 13, 2020 at 1:42 PM Karl Düüna <[email protected]> wrote:It really isn't noticeable until about 5k UPDATEs on a single row.Don't know why, and never dealt with a scenario where this would even come up, but that this doesn't perform well inside a transaction isn't surprising to me.  Kinda surprised it works well at all actually.  I'd probably try and rework the processing algorithm to create an unlogged temporary table with data from the row's initial state, manipulate until my heart's content, then take the final result and update the single live row with the final state.David J.", "msg_date": "Thu, 13 Feb 2020 13:50:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid UPDATE performance degradation in a transaction" }, { "msg_contents": "=?UTF-8?B?S2FybCBEw7zDvG5h?= <[email protected]> writes:\n> -- TL; DR;\n> UPDATE on a row takes relatively constant amount of time outside a\n> transaction block, but running UPDATE on a single row over and over inside\n> a transaction gets slower and slower as the number of UPDATE operations\n> increases.\n\nYeah, that's unsurprising. Each new update creates a new version of\nits row. When you do them in separate transactions, then as soon as\ntransaction N+1 commits the system can recognize that the row version\ncreated by transaction N is dead (no longer visible to anybody) and\nrecycle it, allowing the number of row versions present on-disk to\nstay more or less constant. However, there's not equivalently good\nhousekeeping for row versions created by a transaction that's still\nrunning. So when you do N updates in one transaction, there are going\nto be N doomed-but-not-yet-recyclable row versions on disk.\n\nAside from the disk-space bloat, this is bad because the later updates\nhave to scan through all the row versions created by earlier updates,\nlooking for the version they're supposed to update. So you have an O(N^2)\ncost associated with that, which no doubt is what you're observing.\n\nThere isn't any really good fix for this, other than \"don't do that\".\nDavid's nearby suggestion of using a temp table won't help, because\nthis behavior is the same whether the table is temp or regular.\n\nIn principle perhaps we could improve the granularity of dead-row\ndetection, so that if a row version is both created and deleted by\nthe current transaction, and we have no live snapshots that could\nsee it, we could go ahead and mark the row dead. 
But it's not clear\nthat that'd be worth the extra cost to do. Certainly no existing PG\nrelease tries to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Feb 2020 16:16:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid UPDATE performance degradation in a transaction" }, { "msg_contents": "Thank you for the explanation.\n\nThat is pretty much what I suspected, but I held out hope that there is\nsome functionality I could use to clear the bloat as the transaction\nprogresses and bring the UPDATE time back down again.\n\"dont do that\" is sensible, but much more easily said than done, as the in\nthe actual use case I have, the single row updates are caused by various\ntriggers running on separate operations -\nwhich means I will have to muck about with conditional trigger disabling\nand/or change a large part of the database logic around these tables.\nBut I guess that is a whole other issue.\n\nAnyways thank you for the time and explanation,\nKarl\n\n\nOn Thu, 13 Feb 2020 at 23:16, Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?B?S2FybCBEw7zDvG5h?= <[email protected]> writes:\n> > -- TL; DR;\n> > UPDATE on a row takes relatively constant amount of time outside a\n> > transaction block, but running UPDATE on a single row over and over\n> inside\n> > a transaction gets slower and slower as the number of UPDATE operations\n> > increases.\n>\n> Yeah, that's unsurprising. Each new update creates a new version of\n> its row. When you do them in separate transactions, then as soon as\n> transaction N+1 commits the system can recognize that the row version\n> created by transaction N is dead (no longer visible to anybody) and\n> recycle it, allowing the number of row versions present on-disk to\n> stay more or less constant. However, there's not equivalently good\n> housekeeping for row versions created by a transaction that's still\n> running. So when you do N updates in one transaction, there are going\n> to be N doomed-but-not-yet-recyclable row versions on disk.\n>\n> Aside from the disk-space bloat, this is bad because the later updates\n> have to scan through all the row versions created by earlier updates,\n> looking for the version they're supposed to update. So you have an O(N^2)\n> cost associated with that, which no doubt is what you're observing.\n>\n> There isn't any really good fix for this, other than \"don't do that\".\n> David's nearby suggestion of using a temp table won't help, because\n> this behavior is the same whether the table is temp or regular.\n>\n> In principle perhaps we could improve the granularity of dead-row\n> detection, so that if a row version is both created and deleted by\n> the current transaction, and we have no live snapshots that could\n> see it, we could go ahead and mark the row dead. But it's not clear\n> that that'd be worth the extra cost to do. 
Certainly no existing PG\n> release tries to do it.\n>\n> regards, tom lane\n>\n\nThank you for the explanation.That is pretty much what I suspected, but I held out hope that there is some functionality I could use to clear the bloat as the transaction progresses and bring the UPDATE time back down again.\"dont do that\" is sensible, but much more easily said than done, as the in the actual use case I have, the single row updates are caused by various triggers running on separate operations - which means I will have to muck about with conditional trigger disabling and/or change a large part of the database logic around these tables.But I guess that is a whole other issue.Anyways thank you for the time and explanation,KarlOn Thu, 13 Feb 2020 at 23:16, Tom Lane <[email protected]> wrote:=?UTF-8?B?S2FybCBEw7zDvG5h?= <[email protected]> writes:\n> -- TL; DR;\n> UPDATE on a row takes relatively constant amount of time outside a\n> transaction block, but running UPDATE on a single row over and over inside\n> a transaction gets slower and slower as the number of UPDATE operations\n> increases.\n\nYeah, that's unsurprising.  Each new update creates a new version of\nits row.  When you do them in separate transactions, then as soon as\ntransaction N+1 commits the system can recognize that the row version\ncreated by transaction N is dead (no longer visible to anybody) and\nrecycle it, allowing the number of row versions present on-disk to\nstay more or less constant.  However, there's not equivalently good\nhousekeeping for row versions created by a transaction that's still\nrunning.  So when you do N updates in one transaction, there are going\nto be N doomed-but-not-yet-recyclable row versions on disk.\n\nAside from the disk-space bloat, this is bad because the later updates\nhave to scan through all the row versions created by earlier updates,\nlooking for the version they're supposed to update.  So you have an O(N^2)\ncost associated with that, which no doubt is what you're observing.\n\nThere isn't any really good fix for this, other than \"don't do that\".\nDavid's nearby suggestion of using a temp table won't help, because\nthis behavior is the same whether the table is temp or regular.\n\nIn principle perhaps we could improve the granularity of dead-row\ndetection, so that if a row version is both created and deleted by\nthe current transaction, and we have no live snapshots that could\nsee it, we could go ahead and mark the row dead.  But it's not clear\nthat that'd be worth the extra cost to do.  Certainly no existing PG\nrelease tries to do it.\n\n                        regards, tom lane", "msg_date": "Fri, 14 Feb 2020 09:14:57 +0200", "msg_from": "=?UTF-8?B?S2FybCBEw7zDvG5h?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to avoid UPDATE performance degradation in a transaction" }, { "msg_contents": "If your trigger is supposed to change certain fields, you could return OLD\ninstead of NEW if those fields have not been changed by the trigger. You\ncould also check an updated_on timestamp field to verify if the row has\nalready been modified and potentially skip the trigger altogether. Just a\ncouple thoughts to avoid the bloat.\n\nIf your trigger is supposed to change certain fields, you could return OLD instead of NEW if those fields have not been changed by the trigger. You could also check an updated_on timestamp field to verify if the row has already been modified and potentially skip the trigger altogether. 
Just a couple thoughts to avoid the bloat.", "msg_date": "Fri, 14 Feb 2020 10:54:29 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid UPDATE performance degradation in a transaction" }, { "msg_contents": "Hi,\n\nOn 2020-02-13 16:16:14 -0500, Tom Lane wrote:\n> In principle perhaps we could improve the granularity of dead-row\n> detection, so that if a row version is both created and deleted by\n> the current transaction, and we have no live snapshots that could\n> see it, we could go ahead and mark the row dead. But it's not clear\n> that that'd be worth the extra cost to do. Certainly no existing PG\n> release tries to do it.\n\nI've repeatedly wondered about improving our logic around this. There's\na lot of cases where we deal with a lot of bloat solely because our\nsimplistic liveliness analysis.\n\nIt's not just within a single transaction, but also makes the impact of\nlongrunning transactions significantly worse. It's common to have\n\"areas\" of some tables that change quickly, without normally causing a\nlot of problems - but once there is a single longrunning transaction the\namount of bloat created is huge. It's not that bad to have the \"hot\nareas\" increased in size by 2-3x, but right now it'll often be several\norders of magnitude.\n\nBut perhaps it doesn't make sense to conflate your suggestion above with\nwhat I brought up: There'd might not be a lot of common\ncode/infrastructure between deleting row versions that are invisible due\nto no backend having a snapshot to see them (presumably inferred via\nxmin/xmax), and newly created row versions within a transaction that are\ninvisible because there's no snapshot with that cid.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 19 Feb 2020 20:35:03 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid UPDATE performance degradation in a transaction" } ]
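A minimal sketch of the trigger-guard idea suggested in this thread, with all table, column and trigger names invented for illustration (they do not come from the original posts). One detail worth spelling out: a BEFORE row trigger that returns a non-NULL row still lets the UPDATE happen, so to avoid writing a new (doomed) row version at all the sketch returns NULL when nothing relevant changed; for updates that change nothing whatsoever, the built-in suppress_redundant_updates_trigger() can be attached instead.

-- hypothetical table and trigger, for illustration only
CREATE TABLE t_example (
    id         bigint PRIMARY KEY,
    payload    text,
    updated_on timestamptz
);

CREATE OR REPLACE FUNCTION t_example_guard() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- nothing relevant changed: returning NULL skips the UPDATE for this
    -- row, so no extra dead row version piles up inside the transaction
    IF NEW.payload IS NOT DISTINCT FROM OLD.payload THEN
        RETURN NULL;
    END IF;

    -- a real change: stamp it and let the UPDATE proceed
    NEW.updated_on := now();
    RETURN NEW;
END;
$$;

CREATE TRIGGER t_example_guard_trg
    BEFORE UPDATE ON t_example
    FOR EACH ROW EXECUTE PROCEDURE t_example_guard();

Whether this helps depends on the workload: if the repeated UPDATEs really do change the row each time, the remaining options are the ones discussed above, i.e. shorter transactions or fewer updates per transaction.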
[ { "msg_contents": "Hello,\n\nWhen creating partial indexes, can postgres utilize another index for\nfiguring which rows should be included in the partial index, without\nperforming a full table scan?\n\nMy scenario is that I have a table with 50M rows that are categorized into\n10K categories. I need to create a partial index for each category. I have\ncreated a index on the category column, hoping that postgres can use this\ninformation when creating the partial indexes. However, postgres always\nperforms full table scan.\n\nI've tested with PostgreSQL 12.2. Below is an example setup showing the\nproblem.\n\nTEST1 shows that building a full index covering all rows takes 18 seconds.\n\nTEST2 shows that creating a partial index for one of the category1\n(category1=1) takes 3 seconds. This means that for creating 10K partial\nindexes for each category, it will take over 8 hours. Compared to just 18\nseconds in TEST1, it is much longer due to repeated full table scans.\n\nTEST3 shows that even with another index (index_category2 created in SETUP)\ncovering category2, creating a partial index for one of the category2\n(category2=1) still takes 3 seconds. I think postgres is still doing a full\ntable scan here.\n\nMy question is: can postgres utilize index_category2 is TEST3?\n\nThank you.\n\n---------\n-- SETUP\n---------\n\nCREATE TABLE test_data (\n id bigint PRIMARY KEY,\n category1 bigint,\n category2 bigint\n);\n\n\nINSERT INTO test_data(id, category1, category2)\nSELECT id, category, category FROM (\n SELECT\n generate_series(1, 50000000) AS id,\n (random()*10000)::bigint AS category\n) q;\n-- Query returned successfully in 1 min 47 secs.\n\nCREATE INDEX index_category2 ON test_data(category2);\n-- Query returned successfully in 32 secs 347 msec.\n\n\n--------------\n-- TEST1: CREATE FULL INDEX\n--------------\n\nCREATE INDEX index_full ON test_data(id);\n-- Query returned successfully in 18 secs 713 msec.\n\n\n--------------\n-- TEST2: CREATE PARTIAL INDEX, using category1\n--------------\n\nCREATE INDEX index_partial_1 ON test_data(id) WHERE category1=1;\n-- Query returned successfully in 3 secs 523 msec.\n\n\n--------------\n-- TEST3: CREATE PARTIAL INDEX, using category2\n--------------\n\nCREATE INDEX index_partial_2 ON test_data(id) WHERE category2=1;\n-- Query returned successfully in 3 secs 651 msec.\n\n\n--- END ---\n\nHello,When creating partial indexes, can postgres utilize another index for figuring which rows should be included in the partial index, without performing a full table scan?My scenario is that I have a table with 50M rows that are categorized into 10K categories. I need to create a partial index for each category. I have created a index on the category column, hoping that postgres can use this information when creating the partial indexes. However, postgres always performs full table scan.I've tested with PostgreSQL 12.2. Below is an example setup showing the problem.TEST1 shows that building a full index covering all rows takes 18 seconds.TEST2 shows that creating a partial index for one of the category1 (category1=1) takes 3 seconds. This means that for creating 10K partial indexes for each category, it will take over 8 hours. Compared to just 18 seconds in TEST1, it is much longer due to repeated full table scans.TEST3 shows that even with another index (index_category2 created in SETUP) covering category2, creating a partial index for one of the category2 (category2=1) still takes 3 seconds. 
I think postgres is still doing a full table scan here.My question is: can postgres utilize index_category2 is TEST3?Thank you.----------- SETUP---------CREATE TABLE test_data (    id bigint PRIMARY KEY,    category1 bigint,    category2 bigint);INSERT INTO test_data(id, category1, category2)SELECT id, category, category FROM (    SELECT        generate_series(1, 50000000) AS id,        (random()*10000)::bigint AS category) q;--  Query returned successfully in 1 min 47 secs.CREATE INDEX index_category2 ON test_data(category2);-- Query returned successfully in 32 secs 347 msec.---------------- TEST1: CREATE FULL INDEX--------------CREATE INDEX index_full ON test_data(id);-- Query returned successfully in 18 secs 713 msec.---------------- TEST2: CREATE PARTIAL INDEX, using category1--------------CREATE INDEX index_partial_1 ON test_data(id) WHERE category1=1;-- Query returned successfully in 3 secs 523 msec.---------------- TEST3: CREATE PARTIAL INDEX, using category2--------------CREATE INDEX index_partial_2 ON test_data(id) WHERE category2=1;-- Query returned successfully in 3 secs 651 msec.--- END ---", "msg_date": "Sat, 15 Feb 2020 19:04:48 +0800", "msg_from": "MingJu Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Partial index creation always scans the entire table" }, { "msg_contents": "Hello\n\n> When creating partial indexes, can postgres utilize another index for figuring which rows should be included in the partial index, without performing a full table scan?\n\nNo.\ncreate index always perform a seqscan on table. And two full table scan for create index concurrently.\n\nregards, Sergei\n\n\n", "msg_date": "Sat, 15 Feb 2020 15:47:51 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "On Sat, Feb 15, 2020 at 07:04:48PM +0800, MingJu Wu wrote:\n> Hello,\n> \n> When creating partial indexes, can postgres utilize another index for\n> figuring which rows should be included in the partial index, without\n> performing a full table scan?\n> \n> My scenario is that I have a table with 50M rows that are categorized into\n> 10K categories. I need to create a partial index for each category. I have\n> created a index on the category column, hoping that postgres can use this\n> information when creating the partial indexes. However, postgres always\n> performs full table scan.\n> \n> I've tested with PostgreSQL 12.2. Below is an example setup showing the\n\nI don't think it's possible, and an index scan wouldn't necessarily be faster,\nanyway, since the reads might be unordered rather than sequantial, and might\nhit large fractions of the table even though only returning a fraction of its\ntuples.\n\nBut have you thought about partitioning on category rather than partial\nindexes? Possibly hash partition of (category). 
If your queries usually\ninclude category_id=X, that might be a win for performance anyway, since tables\ncan now be read sequentially rather than scannned by index (again, probably out\nof order).\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 15 Feb 2020 06:53:30 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "On Sat, 2020-02-15 at 19:04 +0800, MingJu Wu wrote:\n> When creating partial indexes, can postgres utilize another index for figuring which rows\n> should be included in the partial index, without performing a full table scan?\n\nNo; it has to be a full sequential scan.\n\n> My scenario is that I have a table with 50M rows that are categorized into 10K categories.\n> I need to create a partial index for each category. I have created a index on the category\n> column, hoping that postgres can use this information when creating the partial indexes.\n> However, postgres always performs full table scan.\n\nThere is your problem.\n\nYou don't need a partial index per category, you need a single index that *contains* the category.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Sat, 15 Feb 2020 22:15:53 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Sat, 2020-02-15 at 19:04 +0800, MingJu Wu wrote:\n>> My scenario is that I have a table with 50M rows that are categorized into 10K categories.\n>> I need to create a partial index for each category.\n\n> You don't need a partial index per category, you need a single index that *contains* the category.\n\nYeah, that's an anti-pattern. Essentially, you are trying to replace the\nfirst branching level of an index that includes the category column with\na ton of system catalog entries and planner proof logic to select one of\nN indexes that don't include the category. It is *highly* unlikely that\nthat's going to be a win. It's going to be a huge loss if the planner\nfails to make the proof you need, and even when it does, it's not really\ngoing to be faster overall --- you've traded off run-time for planning\ntime, at a rather unfavorable exchange rate. Updates on the table are\ngoing to be enormously penalized, too, because the index machinery doesn't\nhave any way to understand that only one of the indexes needs work.\n\nI've seen people try to do this before. I wonder if the manual page\nabout partial indexes should explicitly say \"don't do that\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Feb 2020 10:30:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "On Sat, Feb 15, 2020 at 10:15:53PM +0100, Laurenz Albe wrote:\n> > My scenario is that I have a table with 50M rows that are categorized into 10K categories.\n> > I need to create a partial index for each category. 
I have created a index on the category\n> > column, hoping that postgres can use this information when creating the partial indexes.\n> > However, postgres always performs full table scan.\n> \n> There is your problem.\n> \n> You don't need a partial index per category, you need a single index that *contains* the category.\n\nOn Sun, Feb 16, 2020 at 10:30:05AM -0500, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > On Sat, 2020-02-15 at 19:04 +0800, MingJu Wu wrote:\n> >> My scenario is that I have a table with 50M rows that are categorized into 10K categories.\n> >> I need to create a partial index for each category.\n> \n> > You don't need a partial index per category, you need a single index that *contains* the category.\n> \n> Yeah, that's an anti-pattern. Essentially, you are trying to replace the\n\nThe OP mentioned having an index on \"category\", which they were hoping the\ncreation of partial indexes would use:\n\nOn Sat, Feb 15, 2020 at 07:04:48PM +0800, MingJu Wu wrote:\n> My scenario is that I have a table with 50M rows that are categorized into\n> 10K categories. I need to create a partial index for each category. I have\n> created a index on the category column, hoping that postgres can use this\n> information when creating the partial indexes. However, postgres always\n> performs full table scan.\n\nSo the question is why they (think they) *also* need large number of partial\nindexes.\n\nI was reminded of reading this, but I think it's a pretty different case.\nhttps://heap.io/blog/engineering/running-10-million-postgresql-indexes-in-production\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 16 Feb 2020 09:59:19 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> I was reminded of reading this, but I think it's a pretty different case.\n> https://heap.io/blog/engineering/running-10-million-postgresql-indexes-in-production\n\nYeah, the critical paragraph in that is \n\n This isn’t as scary as it sounds for a two main reasons. First, we\n shard all of our data by customer. Each table in our database holds\n only one customer’s data, so each table has a only a few thousand\n indexes at most. Second, these events are relatively rare. The most\n common defined events make up only a few percent of a customer’s raw\n events, and most are much more rare. This means that we perform\n relatively little I/O maintaining this schema, because most incoming\n events match no event definitions and therefore don’t need to be\n written to any of the indexes. Similarly, the indexes don’t take up\n much space on disk.\n\nA set of partial indexes that cover a small part of the total data\ncan be sensible. If you're trying to cover most/all of the data,\nyou're doing it wrong --- basically, you're reinventing partitioning\nusing the wrong tools.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Feb 2020 11:35:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": ">From: Tom Lane <[email protected]> Sent: Sunday, February 16, 2020 7:30\nAM\n>I've seen people try to do this before. I wonder if the manual page about\npartial indexes should explicitly say \"don't do that\". \n>\tregards, tom lane\n\nYes please (seriously). 
The utter beauty of Postgres is the flexibility and\npower that its evolutionary path has allowed/created. The tragic danger is\nthat the beauty is fairly easy to misapply/misuse. Caveats in the\ndocumentation would be very beneficial to both seasoned practitioners and\nnewcomers - it is quite challenging to keep up with everything Postgres and\nthe documentation is where most of us turn for guidance. \n\nAnd thank you Tom (and others), for your willingness to share these (and\nmany, many other) insights - it is so powerful when facts connect with\ndatabase reality.\n\nMike Sofen \n\n\n\n\n", "msg_date": "Sun, 16 Feb 2020 16:43:10 -0800", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Partial index creation always scans the entire table" }, { "msg_contents": "On Sun, Feb 16, 2020 at 04:43:10PM -0800, Mike Sofen wrote:\n> >From: Tom Lane <[email protected]> Sent: Sunday, February 16, 2020 7:30\n> AM\n> >I've seen people try to do this before. I wonder if the manual page about\n> partial indexes should explicitly say \"don't do that\". \n> >\tregards, tom lane\n> \n> Yes please (seriously). The utter beauty of Postgres is the flexibility and\n> power that its evolutionary path has allowed/created. The tragic danger is\n> that the beauty is fairly easy to misapply/misuse.\n\nQuote. Enough rope to shoot yourself in the foot.\n\nWould you care to suggest text to be included here ?\nhttps://www.postgresql.org/docs/devel/indexes-partial.html\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 16 Feb 2020 18:52:26 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" }, { "msg_contents": "\"Mike Sofen\" <[email protected]> writes:\n>> From: Tom Lane <[email protected]> Sent: Sunday, February 16, 2020 7:30 AM\n>>> I've seen people try to do this before. I wonder if the manual page about\n>>> partial indexes should explicitly say \"don't do that\". \n\n> Yes please (seriously). The utter beauty of Postgres is the flexibility and\n> power that its evolutionary path has allowed/created. The tragic danger is\n> that the beauty is fairly easy to misapply/misuse. Caveats in the\n> documentation would be very beneficial to both seasoned practitioners and\n> newcomers - it is quite challenging to keep up with everything Postgres and\n> the documentation is where most of us turn for guidance. \n\nOK, so how about something like this added to section 11.8\n(no pretty markup as yet):\n\nExample 11.4. Do Not use Partial Indexes as a Substitute for Partitioning\n\nYou might be tempted to create a large set of non-overlapping partial\nindexes, for example\n\n\tCREATE INDEX mytable_cat_1 ON mytable (data) WHERE category = 1;\n\tCREATE INDEX mytable_cat_2 ON mytable (data) WHERE category = 2;\n\tCREATE INDEX mytable_cat_3 ON mytable (data) WHERE category = 3;\n\t...\n\nThis is a bad idea! Almost certainly, you'll be better off with a single\nnon-partial index, declared like\n\n\tCREATE INDEX mytable_cat_data ON mytable (category, data);\n\n(Put the category column first, for the reasons described in section 11.3\nMulticolumn Indexes.) While a search in this larger index might have to\ndescend through a couple more tree levels than a search in a smaller\nindex, that's almost certainly going to be cheaper than the planner effort\nneeded to select the appropriate one of the partial indexes. 
The core of\nthe problem is that the system does not understand the relationship among\nthe partial indexes, and will laboriously test each one to see if it's\napplicable to the current query.\n\nIf your table is large enough that a single index really is a bad idea,\nyou should look into using partitioning instead (section whatever-it-is).\nWith that mechanism, the system does understand that the tables and\nindexes are non-overlapping, so much better performance is possible.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 16 Feb 2020 20:09:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index creation always scans the entire table" } ]
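For the test table at the top of this thread, the single-index fix recommended above amounts to one multicolumn index with the category column first; the sketch below also spells out the hash-partitioning alternative that was mentioned. The partition count and partition names are arbitrary illustration choices, hash partitioning needs PostgreSQL 11 or later, and note that the partition key has to be part of the primary key, so the table definition changes slightly.

-- the simple fix: one index covering every category
CREATE INDEX test_data_category1_id ON test_data (category1, id);

-- the partitioning alternative (sketch)
CREATE TABLE test_data_part (
    id        bigint,
    category1 bigint,
    category2 bigint,
    PRIMARY KEY (id, category1)
) PARTITION BY HASH (category1);

DO $$
BEGIN
    FOR i IN 0..15 LOOP   -- 16 partitions, chosen arbitrarily
        EXECUTE format(
            'CREATE TABLE test_data_part_%s PARTITION OF test_data_part
                 FOR VALUES WITH (MODULUS 16, REMAINDER %s)', i, i);
    END LOOP;
END
$$;

-- one statement creates a matching index on every partition
CREATE INDEX ON test_data_part (category1, id);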
[ { "msg_contents": "Hi\n\n\nOn a server with 32 cores and 250 GB memory, with CentOS 7 and kernel 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using dblink. (https://github.com/larsop/postgres_execute_parallel) . I have tried to disconnect and reconnect in the dblink code and that did not help.\n\nIf I reduce the number of threads I get less CPU usage and much less SubtransControlLock.\n\nEach thread are inserting many lines into a Postgis Topology layer. I have a lot of try catch in this code to avoid missing lines (https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_proc) .\n\n\nWhat happens is that after some minutes the CPU can fall to maybe 20% usage and most of the threads are blocked by SubtransControlLock, and when the number SubtransControlLock goes down the CPU load increases again. The jobs usually goes through without any errors, but it takes to long time because of the SubtransControlLock blocks.\n\n\nThere is no iowait on the server and there is plenty of free memory on the server. There seems to be no locks on the common tables.\n\n“SELECT relation::regclass, * FROM pg_locks WHERE NOT GRANTED;” is always empty.\n\n\nI am using a lot temp tables and unlogged tables.\n\n\nTo reduce the number locks I have a simple check before I kick off new jobs like the one below, but that did not help very much either. Yes it does a lot waiting, but SubtransControlLock kick in when all threads are up running again.\n\n\nLOOP\n\nEXECUTE Format('SELECT count(*) from pg_stat_activity where wait_event = %L and query like %L',\n\n'SubtransControlLock',\n\n'CALL resolve_overlap_gap_save_single_cells%') into subtransControlLock;\n\nEXIT WHEN subtransControlLock = 0;\n\nsubtransControlLock_count := subtransControlLock_count + 1;\n\nPERFORM pg_sleep(subtransControlLock*subtransControlLock_count*0.1);\n\nEND LOOP;\n\n\nI have tested with postgres 11, postgres 12, postgis 2.5 , postgis 3.0 and it seems to behave save.\n\n\nI have also tried to recompile postgres with the setting below and that did not solve the problem either.\n\n/* Number of SLRU buffers to use for subtrans */\n\n#define NUM_SUBTRANS_BUFFERS 2048\n\n\n\nI have tested different values for memory and other settings nothing seems to matter. Here are the settings right now.\n\n\nmaintenance_work_mem = 8GB\n\nmax_connections = 600\n\nwork_mem = 500MB\n\ntemp_buffers = 100MB\n\nshared_buffers = 64GB\n\neffective_cache_size = 124GB\n\nwal_buffers = 640MB\n\nseq_page_cost = 2.0\n\nrandom_page_cost = 2.0\n\ncheckpoint_flush_after = 2MB\n\ncheckpoint_completion_target = 0.9\n\ndefault_statistics_target = 1000\n\nshared_preload_libraries = 'pg_stat_statements'\n\npg_stat_statements.max = 10000\n\npg_stat_statements.track = all\n\neffective_io_concurrency = 500 # 1-1000; 0 disables prefetching\n\n# test to avoid SubtransControlLock\n\n#bgwriter_lru_maxpages = 100000\n\n#bgwriter_lru_maxpages=0\n\n#bgwriter_delay = 20ms\n\nsynchronous_commit = off\n\n\nAny idea about how to solve this ?\n\n\nLars\n\n\n\n\n\n\n\n\n\n\nHi\n\n\n\n\nOn a server with 32 cores and 250 GB memory, with CentOS 7 and kernel 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using dblink.\n(https://github.com/larsop/postgres_execute_parallel) . I have tried to disconnect and reconnect in the dblink code and that did not help.\n\n\nIf I reduce the number of threads I get less CPU usage and much less SubtransControlLock.\n\n\nEach thread are inserting many lines into a Postgis Topology layer. 
I have a lot of try catch in this code to avoid missing lines (https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_proc)\n .\n\n\n\n\n\nWhat happens is that after some minutes the CPU can fall to maybe 20% usage and most of the threads are blocked by SubtransControlLock, and when the number SubtransControlLock goes down the CPU load increases again. The jobs usually goes through without any\n errors, but it takes to long time because of the SubtransControlLock blocks.\n\n\n\n\nThere is no iowait on the server and there is plenty of free memory on the server. There seems to be no locks on the common tables. \n\n“SELECT relation::regclass, * FROM pg_locks WHERE NOT GRANTED;” is always empty. \n\n\n\n\nI am using a lot temp tables and unlogged tables.\n\n\n\n\nTo reduce the number locks I have a simple check before I kick off new jobs like the one below, but that did not help very much either. Yes it does a lot waiting, but SubtransControlLock\n kick in when all threads are up running again.\n\n\n\n\nLOOP\n\nEXECUTE Format('SELECT count(*) from pg_stat_activity where wait_event = %L and query like %L',\n\n'SubtransControlLock',\n\n'CALL resolve_overlap_gap_save_single_cells%') into subtransControlLock;\n\nEXIT WHEN subtransControlLock = 0;\n\nsubtransControlLock_count := subtransControlLock_count + 1;\n\nPERFORM pg_sleep(subtransControlLock*subtransControlLock_count*0.1);\n\nEND LOOP;\n\n\n\n\n\nI have tested with postgres 11, postgres 12, postgis 2.5 , postgis 3.0 and it seems to behave save.\n\n\n\n\nI have also tried to recompile postgres with the setting below and that did not solve the problem either.\n\n/* Number of SLRU buffers to use for subtrans */\n\n#define NUM_SUBTRANS_BUFFERS 2048\n\n\n\n\n\n\n\nI have tested different values for memory and other settings nothing seems to matter. Here are the settings right now.\n\n\n\n\nmaintenance_work_mem = 8GB\n\nmax_connections = 600\n\nwork_mem = 500MB\n\ntemp_buffers = 100MB\n\nshared_buffers = 64GB\n\neffective_cache_size = 124GB\n\nwal_buffers = 640MB\n\nseq_page_cost = 2.0\n\nrandom_page_cost = 2.0\n\ncheckpoint_flush_after = 2MB\n\ncheckpoint_completion_target = 0.9\n\ndefault_statistics_target = 1000\n\nshared_preload_libraries = 'pg_stat_statements'\n\npg_stat_statements.max = 10000\n\npg_stat_statements.track = all\n\neffective_io_concurrency = 500 # 1-1000; 0 disables prefetching\n\n# test to avoid SubtransControlLock\n\n#bgwriter_lru_maxpages = 100000\n\n#bgwriter_lru_maxpages=0\n\n#bgwriter_delay = 20ms\n\nsynchronous_commit = off\n\n\n\n\n\nAny idea about how to solve this ?\n\n\n\n\nLars", "msg_date": "Sun, 16 Feb 2020 17:15:25 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "SubtransControlLock and performance problems" }, { "msg_contents": "Lars Aksel Opsahl wrote:\n> What happens is that after some minutes the CPU can fall to maybe 20% usage and most of\n> the threads are blocked by SubtransControlLock, and when the number SubtransControlLock\n> goes down the CPU load increases again. 
The jobs usually goes through without any errors,\n> but it takes to long time because of the SubtransControlLock blocks.\n\nThat's typically a sign that you are using more than 64 subtransactions per transaction.\n\nThat could either be SAVEPOINT SQL statements or PL/pgSQL code with blocks\ncontaining the EXCEPTION clause.\n\nThe data structure in shared memory that holds information for each session\ncan cache 64 subtransactions, beyond that it has to access \"pg_subtrans\" to get\nthe required information, which leads to contention.\n\nOften the problem is caused by a misguided attempt to wrape every single\nstatement in a subtransaction to emulate the behavior of other database\nsystems, for example with the \"autosave = always\" option of the JDBC driver.\n\nThe solution is to use fewer subtransactions per transaction.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Mon, 17 Feb 2020 10:53:21 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": ">From: Laurenz Albe <[email protected]>\n\n>Sent: Monday, February 17, 2020 10:53 AM\n\n>To: Lars Aksel Opsahl <[email protected]>; [email protected] <[email protected]>\n\n>Subject: Re: SubtransControlLock and performance problems\n\n>\n\n>Lars Aksel Opsahl wrote:\n\n>> What happens is that after some minutes the CPU can fall to maybe 20% usage and most of\n\n>> the threads are blocked by SubtransControlLock, and when the number SubtransControlLock\n\n>> goes down the CPU load increases again. The jobs usually goes through without any errors,\n\n>> but it takes to long time because of the SubtransControlLock blocks.\n\n>\n\n>That's typically a sign that you are using more than 64 subtransactions per transaction.\n\n>\n\n>That could either be SAVEPOINT SQL statements or PL/pgSQL code with blocks\n\n>containing the EXCEPTION clause.\n\n>\n\n>The data structure in shared memory that holds information for each session\n\n>can cache 64 subtransactions, beyond that it has to access \"pg_subtrans\" to get\n\n> the required information, which leads to contention.\n\n>\n\n> Often the problem is caused by a misguided attempt to wrape every single\n\n> statement in a subtransaction to emulate the behavior of other database\n\n> systems, for example with the \"autosave = always\" option of the JDBC driver.\n\n>\n\n> The solution is to use fewer subtransactions per transaction.\n\n>\n\n\nHi\n\n\nI have tested in branch ( https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_func) where I use only have functions and no procedures and I still have the same problem with subtransaction locks.\n\n\nCan I based on this assume that the problem is only related to exceptions ?\n\n\nDoes this mean that if have 32 threads running in parallel and I get 2 exceptions in each thread I have reached a state where I will get contention ?\n\n\nIs it any way increase from 64 to a much higher level, when compiling the code ?\n\n\nBasically what I do here is that I catch exceptions when get them and tries to solve the problem in a alternative way.\n\n\nThanks a lot.\n\n\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\n>From: \nLaurenz Albe <[email protected]>\n>Sent: Monday, February 17, 2020 10:53 AM\n>To: \nLars Aksel \nOpsahl <[email protected]>;\[email protected] <[email protected]>\n>Subject: Re: SubtransControlLock and performance problems\n> \n>Lars\nAksel \nOpsahl wrote:\n>> What happens is that 
after some minutes the CPU can fall to maybe 20% usage and most of\n>> the threads are blocked by SubtransControlLock, and when the number SubtransControlLock\n>> goes down the CPU load increases again. The jobs usually goes through without any errors,\n>> but it takes to long time because of the SubtransControlLock blocks.\n>\n>That's typically a sign that you are using more than 64\nsubtransactions per transaction.\n>\n>That could either be SAVEPOINT SQL statements or PL/pgSQL code with blocks\n>containing the EXCEPTION clause.\n>\n>The data structure in shared memory that holds information for each session\n>can cache 64 \nsubtransactions, beyond that it has to access \"pg_subtrans\" to get\n> the required information, which leads to contention.\n>\n> Often the problem is caused by a misguided attempt to\nwrape every single\n> statement in a \nsubtransaction to emulate the behavior of other database\n> systems, for example with the \"autosave = always\" option of the JDBC driver.\n>\n> The solution is to use fewer\nsubtransactions per transaction.\n>\n\n\nHi\n\n\nI have tested in branch (\n\nhttps://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_func) where I use only have functions and no procedures and I still have the same problem with subtransaction locks. \n\n\nCan I based on this assume that the problem is only related to exceptions  ?\n\n\nDoes this mean that if have 32 threads running in parallel and I get 2 exceptions in each thread I have reached a state where I will get contention ?\n\n\nIs it any way increase from 64 to a much higher level, when compiling the code ?\n\n\nBasically what I do here is that I catch exceptions when get them and\n tries to solve the problem in a alternative way.\n\n\nThanks a lot.\n \n\nLars", "msg_date": "Mon, 17 Feb 2020 15:03:56 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "On Mon, 2020-02-17 at 15:03 +0000, Lars Aksel Opsahl wrote:\n> I have tested in branch ( https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_func)\n> where I use only have functions and no procedures and I still have the same problem with subtransaction locks. \n> \n> Can I based on this assume that the problem is only related to exceptions ?\n\nNo, it is related to BEGIN ... EXCEPTION ... END blocks, no matter if\nan exception is thrown or not.\n\nAs soon as execution enters such a block, a subtransaction is started.\n\n> Does this mean that if have 32 threads running in parallel and I get 2 exceptions in each thread I have reached a state where I will get contention ?\n\nNo, it means that if you enter a block with an EXCEPTION clause more\nthan 64 times in a single transaction, performance will drop.\n\n> Is it any way increase from 64 to a much higher level, when compiling the code ?\n\nYes, you can increase PGPROC_MAX_CACHED_SUBXIDS in src/include/storage/proc.h\n\n> Basically what I do here is that I catch exceptions when get them and tries to solve the problem in a alternative way.\n\nEither use shorter transactions, or start fewer subtransactions.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Mon, 17 Feb 2020 17:35:52 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "Hi\n\npo 17. 2. 
2020 v 17:36 odesílatel Laurenz Albe <[email protected]>\nnapsal:\n\n> On Mon, 2020-02-17 at 15:03 +0000, Lars Aksel Opsahl wrote:\n> > I have tested in branch (\n> https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_func\n> )\n> > where I use only have functions and no procedures and I still have the\n> same problem with subtransaction locks.\n> >\n> > Can I based on this assume that the problem is only related to\n> exceptions ?\n>\n> No, it is related to BEGIN ... EXCEPTION ... END blocks, no matter if\n> an exception is thrown or not.\n>\n> As soon as execution enters such a block, a subtransaction is started.\n>\n> > Does this mean that if have 32 threads running in parallel and I get 2\n> exceptions in each thread I have reached a state where I will get\n> contention ?\n>\n> No, it means that if you enter a block with an EXCEPTION clause more\n> than 64 times in a single transaction, performance will drop.\n>\n> > Is it any way increase from 64 to a much higher level, when compiling\n> the code ?\n>\n> Yes, you can increase PGPROC_MAX_CACHED_SUBXIDS in\n> src/include/storage/proc.h\n>\n> > Basically what I do here is that I catch exceptions when get them and\n> tries to solve the problem in a alternative way.\n>\n> Either use shorter transactions, or start fewer subtransactions.\n>\n> Yours,\n> Laurenz Albe\n>\n\nit is interesting topic, but I don't see it in my example\n\nCREATE OR REPLACE FUNCTION public.fx(integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\nbegin\n for i in 1..$1 loop\n begin\n --raise notice 'xx';\nexception when others then\n raise notice 'yyy';\nend;\nend loop;\nend;\n$function$\n\nthe execution time is without performance drops.\n\nIs there some prerequisite to see performance problems?\n\nPavel\n\n-- \n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n>\n>\n\nHipo 17. 2. 2020 v 17:36 odesílatel Laurenz Albe <[email protected]> napsal:On Mon, 2020-02-17 at 15:03 +0000, Lars Aksel Opsahl wrote:\n> I have tested in branch ( https://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_func)\n> where I use only have functions and no procedures and I still have the same problem with subtransaction locks. \n> \n> Can I based on this assume that the problem is only related to exceptions  ?\n\nNo, it is related to BEGIN ... EXCEPTION ... 
END blocks, no matter if\nan exception is thrown or not.\n\nAs soon as execution enters such a block, a subtransaction is started.\n\n> Does this mean that if have 32 threads running in parallel and I get 2 exceptions in each thread I have reached a state where I will get contention ?\n\nNo, it means that if you enter a block with an EXCEPTION clause more\nthan 64 times in a single transaction, performance will drop.\n\n> Is it any way increase from 64 to a much higher level, when compiling the code ?\n\nYes, you can increase PGPROC_MAX_CACHED_SUBXIDS in src/include/storage/proc.h\n\n> Basically what I do here is that I catch exceptions when get them and tries to solve the problem in a alternative way.\n\nEither use shorter transactions, or start fewer subtransactions.\n\nYours,\nLaurenz Albeit is interesting topic, but I don't see it in my example CREATE OR REPLACE FUNCTION public.fx(integer) RETURNS void LANGUAGE plpgsqlAS $function$begin  for i in 1..$1 loop  begin    --raise notice 'xx';exception when others then  raise notice 'yyy';end;end loop;end;$function$the execution time is without performance drops.Is there some prerequisite to see performance problems?Pavel\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Mon, 17 Feb 2020 19:01:11 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> po 17. 2. 2020 v 17:36 odesílatel Laurenz Albe <[email protected]>\n> napsal:\n>> Either use shorter transactions, or start fewer subtransactions.\n\n> it is interesting topic, but I don't see it in my example\n\n> CREATE OR REPLACE FUNCTION public.fx(integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> begin\n> for i in 1..$1 loop\n> begin\n> --raise notice 'xx';\n> exception when others then\n> raise notice 'yyy';\n> end;\n> end loop;\n> end;\n> $function$\n\nThis example doesn't create or modify any table rows within the\nsubtransactions, so (I think) we won't assign XIDs to them.\nIt's consumption of subtransaction XIDs that causes the issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Feb 2020 13:23:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "po 17. 2. 2020 v 19:23 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > po 17. 2. 
2020 v 17:36 odesílatel Laurenz Albe <[email protected]\n> >\n> > napsal:\n> >> Either use shorter transactions, or start fewer subtransactions.\n>\n> > it is interesting topic, but I don't see it in my example\n>\n> > CREATE OR REPLACE FUNCTION public.fx(integer)\n> > RETURNS void\n> > LANGUAGE plpgsql\n> > AS $function$\n> > begin\n> > for i in 1..$1 loop\n> > begin\n> > --raise notice 'xx';\n> > exception when others then\n> > raise notice 'yyy';\n> > end;\n> > end loop;\n> > end;\n> > $function$\n>\n> This example doesn't create or modify any table rows within the\n> subtransactions, so (I think) we won't assign XIDs to them.\n> It's consumption of subtransaction XIDs that causes the issue.\n>\n\nI tested\n\nCREATE OR REPLACE FUNCTION public.fx(integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\nbegin\n for i in 1..$1 loop\n begin\n insert into foo values(i);\nexception when others then\n raise notice 'yyy';\nend;\nend loop;\nend;\n$function$\n\nand I don't see any significant difference between numbers less than 64 and\nhigher\n\n\n\n> regards, tom lane\n>\n\npo 17. 2. 2020 v 19:23 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> po 17. 2. 2020 v 17:36 odesílatel Laurenz Albe <[email protected]>\n> napsal:\n>> Either use shorter transactions, or start fewer subtransactions.\n\n> it is interesting topic, but I don't see it in my example\n\n> CREATE OR REPLACE FUNCTION public.fx(integer)\n>  RETURNS void\n>  LANGUAGE plpgsql\n> AS $function$\n> begin\n>   for i in 1..$1 loop\n>   begin\n>     --raise notice 'xx';\n> exception when others then\n>   raise notice 'yyy';\n> end;\n> end loop;\n> end;\n> $function$\n\nThis example doesn't create or modify any table rows within the\nsubtransactions, so (I think) we won't assign XIDs to them.\nIt's consumption of subtransaction XIDs that causes the issue.I tested CREATE OR REPLACE FUNCTION public.fx(integer) RETURNS void LANGUAGE plpgsqlAS $function$begin  for i in 1..$1 loop  begin    insert into foo values(i);exception when others then  raise notice 'yyy';end;end loop;end;$function$and I don't see any significant difference between numbers less than 64 and higher \n\n                        regards, tom lane", "msg_date": "Mon, 17 Feb 2020 19:41:51 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "On 2020-Feb-16, Lars Aksel Opsahl wrote:\n\n> On a server with 32 cores and 250 GB memory, with CentOS 7 and kernel\n> 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using\n> dblink. (https://github.com/larsop/postgres_execute_parallel) . I have\n> tried to disconnect and reconnect in the dblink code and that did not\n> help.\n\nI think one issue is that pg_clog has 128 buffers (per commit\n5364b357fb1) while subtrans only has 32. It might be productive to\nraise the number of subtrans buffers (see #define NUM_SUBTRANS_BUFFERS\nin src/include/access/subtrans.h; requires a recompile.) Considering\nthat each subtrans entry is 16 times larger than clog (2 bits vs. 4\nbytes), you'd require 2048 subtrans buffers to cover the same XID range\nwithout I/O if my math is right. That's only 16 MB ... though slru.c\ncode might not be prepared to deal with that many buffers. 
Worth some\nexperimentation, I guess.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 17 Feb 2020 16:10:05 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:\n> I tested \n> \n> CREATE OR REPLACE FUNCTION public.fx(integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> begin\n> for i in 1..$1 loop\n> begin\n> insert into foo values(i);\n> exception when others then\n> raise notice 'yyy';\n> end;\n> end loop;\n> end;\n> $function$\n> \n> and I don't see any significant difference between numbers less than 64 and higher\n\nDid you have several concurrent sessions accessing the rows that others created?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 18 Feb 2020 18:27:24 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "út 18. 2. 2020 v 18:27 odesílatel Laurenz Albe <[email protected]>\nnapsal:\n\n> On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:\n> > I tested\n> >\n> > CREATE OR REPLACE FUNCTION public.fx(integer)\n> > RETURNS void\n> > LANGUAGE plpgsql\n> > AS $function$\n> > begin\n> > for i in 1..$1 loop\n> > begin\n> > insert into foo values(i);\n> > exception when others then\n> > raise notice 'yyy';\n> > end;\n> > end loop;\n> > end;\n> > $function$\n> >\n> > and I don't see any significant difference between numbers less than 64\n> and higher\n>\n> Did you have several concurrent sessions accessing the rows that others\n> created?\n>\n\nno, I didn't\n\nPavel\n\n\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nút 18. 2. 
2020 v 18:27 odesílatel Laurenz Albe <[email protected]> napsal:On Mon, 2020-02-17 at 19:41 +0100, Pavel Stehule wrote:\n> I tested \n> \n> CREATE OR REPLACE FUNCTION public.fx(integer)\n>  RETURNS void\n>  LANGUAGE plpgsql\n> AS $function$\n> begin\n>   for i in 1..$1 loop\n>   begin\n>     insert into foo values(i);\n> exception when others then\n>   raise notice 'yyy';\n> end;\n> end loop;\n> end;\n> $function$\n> \n> and I don't see any significant difference between numbers less than 64 and higher\n\nDid you have several concurrent sessions accessing the rows that others created?no, I didn'tPavel\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Wed, 19 Feb 2020 00:46:05 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "Hi\n\n________________________________\n\n>From: Laurenz Albe <[email protected]>\n\n>Sent: Tuesday, February 18, 2020 6:27 PM\n\n>ATo: Pavel Stehule <[email protected]>; Tom Lane <[email protected]>\n\n>Cc: Lars Aksel Opsahl <[email protected]>; [email protected] <[email protected]>\n\n>Subject: Re: SubtransControlLock and performance problems\n\n>\n\n>Did you have several concurrent sessions accessing the rows that others created?\n\n\nHi\n\n\nThanks every body, I have done more testing here..\n\n\n- I was not able fix this problem by increasing this values\n\nsrc/include/access/subtrans.h, define NUM_SUBTRANS_BUFFERS 8196\n\nsrc/include/storage/proc.h , PGPROC_MAX_CACHED_SUBXIDS 128\n\n\nIf tried to increase PGPROC_MAX_CACHED_SUBXIDS more than 128 Postgres core dumped. I tried to increase shared memory and other settings but I was not able to get it statble.\n\n\nWith the values above I did see same performance problems and we ended with a lot of subtransControlLock.\n\n\nSo I started to change the code based on your feedbacks.\n\n\n- What seems to work very good in combination with a catch exception and retry pattern is to insert the data in to separate table for each job. (I the current testcase we reduced the number of subtransControlLock from many hundreds to almost none.)\n\n\nThen I later can pick up these results from different the tables with another job that inserts data in to common data structure and in this job I don’t have any catch retry pattern. Then I was able to handle 534 of 592 jobs/cells with out any subtransControlLock at all.\n\n\nBut 58 jobs did not finish so for these I had to use a catch retry pattern and then then I got the subtransControlLock problems, but thats for a limited sets of the data.\n\n\nBetween each job I also close open the connections I dblink.\n\n\nIn this test I used dataset with data set 619230 surface with total of 25909671 and it did finish in 24:42.363, with NUM_SUBTRANS_BUFFERS 8196 and PGPROC_MAX_CACHED_SUBXIDS 128. 
When I changed this back to the original values the same test took 23:54.973.\n\n\nFor me it’s seems like in Postgres it’s better to have functions that returns an error state together with the result and not throws an exceptions, because exceptions leads performance degeneration when working with big datasets.\n\n\nThanks\n\n\nLars\n\n\n\n\n\n\n\n\n\nHi\n\n\n\n\n\n\n\n>From: \nLaurenz Albe <[email protected]>\n>Sent: Tuesday, February 18, 2020 6:27 PM\n>ATo: \nPavel Stehule <[email protected]>; Tom Lane <[email protected]>\n>Cc:\nLars \nAksel Opsahl <[email protected]>;\[email protected] <[email protected]>\n>Subject: Re: SubtransControlLock and performance problems\n>\n>Did you have several concurrent sessions accessing the rows that others created?\n\n\n\n\nHi\n\n\n\n\nThanks every body, I have done more testing here..\n\n\n\n\n- I was not able fix this problem by increasing this values\n\nsrc/include/access/subtrans.h, define NUM_SUBTRANS_BUFFERS 8196\n\nsrc/include/storage/proc.h , PGPROC_MAX_CACHED_SUBXIDS 128\n\n\n\n\nIf tried to increase PGPROC_MAX_CACHED_SUBXIDS more than 128 Postgres core dumped. I tried to increase shared memory and other settings but I was not able to get it statble.\n\n\n\n\nWith the values above I did see same performance problems and we ended with a lot of subtransControlLock.\n\n\n\n\nSo I started to change the code based on your feedbacks.\n\n\n\n\n- What seems to work very good in combination with a catch exception and retry pattern is to insert the data in to separate table for each job. (I the current testcase we reduced the number of subtransControlLock from many hundreds to almost none.)\n\n\n\n\nThen I later can pick up these results from different the tables with another job that inserts data in to common data structure and in this job I don’t have any catch retry pattern. Then I was able to handle 534 of 592 jobs/cells with out any subtransControlLock\n at all.\n\n\n\n\nBut 58 jobs did not finish so for these I had to use a catch retry pattern and then then I got the subtransControlLock problems, but thats for a limited sets of the data.\n\n\n\n\nBetween each job I also close open the connections I dblink.\n\n\n\n\nIn this test I used dataset with data set 619230 surface with total of 25909671 and it did finish in 24:42.363, with NUM_SUBTRANS_BUFFERS 8196 and PGPROC_MAX_CACHED_SUBXIDS 128. When I changed this back to the original values the same test took 23:54.973.\n\n\n\n\nFor me it’s seems like in Postgres it’s better to have functions that returns an error state together with the result and not throws an exceptions, because exceptions leads performance degeneration when working with big datasets.\n\n\n\n\nThanks\n\n\n\n\nLars", "msg_date": "Wed, 19 Feb 2020 10:49:14 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "On 2020-Feb-19, Lars Aksel Opsahl wrote:\n\n> With the values above I did see same performance problems and we ended\n> with a lot of subtransControlLock.\n> \n> So I started to change the code based on your feedbacks.\n> \n> - What seems to work very good in combination with a catch exception\n> and retry pattern is to insert the data in to separate table for each\n> job. (I the current testcase we reduced the number of\n> subtransControlLock from many hundreds to almost none.)\n\nI think at this point your only recourse is to start taking profiles to\nsee where the time is going. 
Without that, you're just flying blind and\nwhatever you do will not necessarily move your needle at all.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 19 Feb 2020 12:23:53 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SubtransControlLock and performance problems" }, { "msg_contents": "Hi\n\n________________________________\n\n>From: Alvaro Herrera <[email protected]>\n\n>Sent: Wednesday, February 19, 2020 4:23 PM\n\n>To: Lars Aksel Opsahl <[email protected]>\n\n>Cc: Laurenz Albe <[email protected]>; Pavel Stehule <[email protected]>; Tom Lane <[email protected]>; [email protected] <[email protected]>\n\n>Subject: Re: SubtransControlLock and performance problems\n\n>\n\n>On 2020-Feb-19, Lars Aksel Opsahl wrote:\n\n>\n\n>> With the values above I did see same performance problems and we ended\n\n>> with a lot of subtransControlLock.\n\n>>\n\n>> So I started to change the code based on your feedbacks.\n\n>>\n\n>> - What seems to work very good in combination with a catch exception\n\n>> and retry pattern is to insert the data in to separate table for each\n\n>> job. (I the current testcase we reduced the number of\n\n>> subtransControlLock from many hundreds to almost none.)\n\n>\n\n>I think at this point your only recourse is to start taking profiles to\n\n>see where the time is going. Without that, you're just flying blind and\n\n>whatever you do will not necessarily move your needle at all.\n\n\nHi\n\nYes I totally agree with you and yes I have tried to do some profiling and testing while developing.\n\n From the worst case to best case the time is reduced 15 times (from 300 minutes to 20 minutes) when testing a small dataset for with 619230 surface (25909671 total line points) with the test below “resolve_overlap_gap_run('org_jm.jm_ukomm_flate','figurid','geo',4258,false,'test_topo_jm',0.000001,31,3000); “\n\nThe reason for this seems to be related to the problems described by Laurenz Albe related to how Postgres handles try and catch and sub transactions, which I did not know about. If we don't have this is mind and we start to get subtranslocks it seems to kill the performance in some cases.\n\nIn this test I ran with 31 parallel threads which is very high on a server with only 32 cores and maybe not realistic. I just did this now see what happens when I try to push a server to it’s limits and maximise the performance increase. 
If I reduce this to 1 single thread, there should be now difference and if run on 16 threads the difference would much much smaller.\n\nI will now start to run on datasets which are 10 times bigger to check how thing scales, but then run with around maybe 28 parallel jobs.\n\nThe two branches I have tested on now which should show the main difference are here.\n\nhttps://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_TopoGeo_addLinestringwhich is the faster one.\n\nhttps://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology which is slower one, but here I have now added a check on number of subtranslocks before I kick of new jobs and that reduced time form 9 hours to 3 hours.\n\n\nThanks.\n\n\nLars\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi\n\n\n\n\n\n\n\n>From: \nAlvaro Herrera <[email protected]>\n>Sent: Wednesday, February 19, 2020 4:23 PM\n>To: \nLars Aksel \nOpsahl <[email protected]>\n>Cc:\nLaurenz \nAlbe <[email protected]>;\nPavel \nStehule <[email protected]>; Tom Lane <[email protected]>;\[email protected] <[email protected]>\n>Subject: Re: SubtransControlLock and performance problems\n> \n>On 2020-Feb-19,\nLars \nAksel Opsahl wrote:\n>\n>> With the values above I did see same performance problems and we ended\n>> with a lot of subtransControlLock.\n>> \n>> So I started to change the code based on your feedbacks.\n>> \n>> - What seems to work very good in combination with a catch exception\n>> and retry pattern is to insert the data in to separate table for each\n>> job. (I the current\ntestcase we reduced the number of\n>> subtransControlLock from many hundreds to almost none.)\n>\n>I think at this point your only recourse is to start taking profiles to\n>see where the time is going. \nWithout that, you're just flying blind and\n>whatever you do will not necessarily move your needle at all.\n\n\n\n\nHi\n\nYes I totally agree with you and yes I have tried to do some profiling and testing while developing.\n\n From the worst case to best case the time is reduced 15 times (from 300 minutes to 20 minutes) when testing a small dataset for with 619230 surface (25909671 total line points) with the test below “resolve_overlap_gap_run('org_jm.jm_ukomm_flate','figurid','geo',4258,false,'test_topo_jm',0.000001,31,3000);\n “\n\nThe reason for this seems to be related to the problems described by Laurenz Albe related to how Postgres handles try and catch and sub transactions, which I did not know about. If we don't have this is mind and we start to get subtranslocks it seems to kill\n the performance in some cases.\n\nIn this test I ran with 31 parallel threads which is very high on a server with only 32 cores and maybe not realistic. I just did this now see what happens when\n I try to push a server to it’s limits and maximise the performance increase. If I reduce this to 1 single thread, there should be now difference and if run on\n 16 threads the difference would much much smaller. \n\nI will now start to run on datasets which are 10 times bigger to check how thing scales, but then run with around maybe 28 parallel jobs. 
\n\nThe two branches I have tested on now which should show the main difference are here.\n\n\nhttps://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology_using_TopoGeo_addLinestringwhich is the faster one.\n\nhttps://github.com/larsop/resolve-overlap-and-gap/tree/add_postgis_topology which is slower one, but here I have now added a check on number of subtranslocks before I\n kick of new jobs and that reduced time form 9 hours  to 3 hours.\n\n\n\n\nThanks.\n\n\n\n\nLars", "msg_date": "Thu, 20 Feb 2020 09:20:50 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SubtransControlLock and performance problems" } ]
[ { "msg_contents": "after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. \n\nspec: RAM 16gb,4vCore\nAny bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! \n\n\n\n\nI could see below error logs and due to this reason database more often going into recovery mode, \n\n\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID32731) was terminated by signal 9: Killed\n2020-02-17 22:34:32 UTC::@:[20467]:DETAIL:Failed process was running: selectinfo_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageidfrom salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1)\n2020-02-17 22:34:32 UTC::@:[20467]:LOG:terminating any other active server processes\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: archiverprocess (PID 30799) exited with exit code 1\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:HINT:In a moment you 
should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:DETAIL:The postmaster has commanded this server process to roll back the current transactionand exit, because another server process exited abnormally and possiblycorrupted shared memory.\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction 
and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47594):devops_user@salesdb:[32313]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC::@:[30798]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC::@:[30798]:DETAIL: Thepostmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC::@:[30798]:HINT: In amoment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37388):devops_user@salesdb:[32319]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(48224):devops_user@salesdb:[1227]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(33476):devops_user@salesdb:[10445]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44376):devops_user@salesdb:[32217]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(57433):digitaladmin@salesdb:[1420]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 
UTC:(57433):digitaladmin@salesdb:[1420]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(57433):digitaladmin@salesdb:[1420]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43950):devops_user@salesdb:[31217]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43877):devops_user@salesdb:[30963]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43877):devops_user@salesdb:[30963]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43877):devops_user@salesdb:[30963]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37836):devops_user@salesdb:[5267]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43330):devops_user@salesdb:[32324]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(48226):devops_user@salesdb:[1226]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47592):devops_user@salesdb:[32314]:HINT:In a moment you should be able to reconnect to the database 
and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54594):devops_user@salesdb:[30867]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(32946):devops_user@salesdb:[13717]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43326):devops_user@salesdb:[32323]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(46808):devops_user@salesdb:[32315]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46808):devops_user@salesdb:[32315]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46808):devops_user@salesdb:[32315]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54893):devops_user@salesdb:[13524]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46812):devops_user@salesdb:[32318]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server 
process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(62744):devops_user@salesdb:[26990]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37392):devops_user@salesdb:[32320]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(57834):devops_user@salesdb:[24582]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43324):devops_user@salesdb:[32326]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(46810):devops_user@salesdb:[32317]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46810):devops_user@salesdb:[32317]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally and possiblycorrupted shared memory.\n2020-02-17 22:34:32 UTC:(46810):devops_user@salesdb:[32317]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:DETAIL:The postmaster has commanded this server process to roll back the current transactionand exit, because another server process exited abnormally and possiblycorrupted shared memory.\n2020-02-17 22:34:32 UTC:(44372):devops_user@salesdb:[32216]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43670):devops_user@salesdb:[30876]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:WARNING:terminating connection because of crash of another server 
process\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44486):devops_user@salesdb:[32329]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37390):devops_user@salesdb:[32322]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:10.65.152.155(58906):bi_user@salesdb:[17003]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32UTC:10.65.152.155(58906):bi_user@salesdb:[17003]:DETAIL: The postmaster hascommanded this server process to roll back the current transaction and exit,because another server process exited abnormally and possibly corrupted sharedmemory.\n2020-02-17 22:34:32UTC:10.65.152.155(58906):bi_user@salesdb:[17003]:HINT: In a moment you shouldbe able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43174):devops_user@salesdb:[30877]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44480):devops_user@salesdb:[32327]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37394):devops_user@salesdb:[32321]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52378):devops_user@salesdb:[32215]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52378):devops_user@salesdb:[32215]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 
UTC:(52378):devops_user@salesdb:[32215]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43328):devops_user@salesdb:[32325]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60894):devops_user@salesdb:[10444]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54892):devops_user@salesdb:[13523]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43652):devops_user@salesdb:[30865]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44370):devops_user@salesdb:[32214]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally and possiblycorrupted shared memory.\n2020-02-17 22:34:32 UTC:(43168):devops_user@salesdb:[30868]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:DETAIL:The postmaster has 
commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54890):devops_user@salesdb:[13522]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(62791):devops_user@salesdb:[27137]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43674):devops_user@salesdb:[30878]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37830):devops_user@salesdb:[5264]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(41912):devops_user@salesdb:[2897]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52296):devops_user@salesdb:[5263]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52946):devops_user@salesdb:[7072]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 
UTC:(43668):devops_user@salesdb:[30875]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43668):devops_user@salesdb:[30875]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(32947):devops_user@salesdb:[13716]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(51368):devops_user@salesdb:[1953]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37832):devops_user@salesdb:[5265]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60830):devops_user@salesdb:[30872]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54696):digitaladmin@postgres:[18544]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted 
shared memory.\n2020-02-17 22:34:32 UTC:(44374):devops_user@salesdb:[32218]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56706):devops_user@salesdb:[14435]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56706):devops_user@salesdb:[14435]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56706):devops_user@salesdb:[14435]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(41914):devops_user@salesdb:[2898]:HINT:In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52950):devops_user@salesdb:[7075]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56707):devops_user@salesdb:[14436]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(34946):devops_user@salesdb:[30879]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34946):devops_user@salesdb:[30879]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34946):devops_user@salesdb:[30879]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56734):devops_user@salesdb:[7295]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(38758):devops_user@salesdb:[7297]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 
UTC:(38758):devops_user@salesdb:[7297]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(38758):devops_user@salesdb:[7297]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(38760):devops_user@salesdb:[7298]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(38760):devops_user@salesdb:[7298]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(38760):devops_user@salesdb:[7298]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52944):devops_user@salesdb:[7073]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(52945):devops_user@salesdb:[7074]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42962):devops_user@salesdb:[30864]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60828):devops_user@salesdb:[30871]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37834):devops_user@salesdb:[5266]:HINT:In a moment you should be able to reconnect to the database and repeat 
yourcommand.\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(58438):digitaladmin@salesdb:[12366]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43666):devops_user@salesdb:[30874]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42960):devops_user@salesdb:[30863]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(60826):devops_user@salesdb:[30870]:HINT:In a moment you should be able to reconnect to the database and repeat your command.\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34940):devops_user@salesdb:[30861]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(56732):devops_user@salesdb:[7296]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited 
abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43162):devops_user@salesdb:[30804]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(42954):devops_user@salesdb:[30806]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37308):devops_user@salesdb:[30862]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43184):devops_user@salesdb:[30880]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37306):devops_user@salesdb:[30860]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37296):devops_user@salesdb:[30810]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(54590):devops_user@salesdb:[30832]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(37302):devops_user@salesdb:[30859]:WARNING:terminating connection because of crash of another server process\n2020-02-17 
22:34:32 UTC:(37302):devops_user@salesdb:[30859]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(37302):devops_user@salesdb:[30859]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43642):devops_user@salesdb:[30836]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(43660):devops_user@salesdb:[30873]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(43660):devops_user@salesdb:[30873]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(43660):devops_user@salesdb:[30873]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(42966):devops_user@salesdb:[30869]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(42966):devops_user@salesdb:[30869]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(42966):devops_user@salesdb:[30869]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:32 UTC:(60818):devops_user@salesdb:[30831]:WARNING:terminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:DETAIL:The postmaster has commanded this server process to roll back the currenttransaction and exit, because another server process exited abnormally andpossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:HINT:In a moment you should be able to reconnect to the database and repeat yourcommand.\n2020-02-17 22:34:33 UTC::@:[20467]:LOG: allserver processes terminated; reinitializing\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: databasesystem was interrupted; last known up at 2020-02-17 22:33:33 UTC\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: databasesystem was not properly shut down; automatic recovery in progress\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: redostarts at 15B0/D5FCA110\n2020-02-17 22:34:34 UTC:(54556):digitaladmin@salesdb:[19637]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(54557):digitaladmin@salesdb:[19639]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58713):devops_user@salesdb:[19638]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58714):devops_user@salesdb:[19644]:FATAL:the database system is in recovery mode\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: invalidrecord length at 15B0/E4C32288: wanted 24, got 0\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: redodone at 15B0/E4C32260\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: 
lastcompleted transaction was at log time 2020-02-17 22:34:31.864309+00\n2020-02-17 22:34:35 UTC::@:[19633]:LOG:checkpoint starting: end-of-recovery immediate\n\n \n\n\n \nThank you.\n\n\n\n\nafter upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. spec: RAM 16gb,4vCoreAny bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! I could see below error logs and due to this reason database more often going into recovery mode, 2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID\n32731) was terminated by signal 9: Killed\n2020-02-17 22:34:32 UTC::@:[20467]:DETAIL:\nFailed process was running: select\ninfo_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageid\nfrom salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1\n)\n2020-02-17 22:34:32 UTC::@:[20467]:LOG:\nterminating any other active server processes\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(34548):bi_user@salesdb:[19522]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43864):devops_user@salesdb:[30919]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44484):devops_user@salesdb:[32330]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43654):devops_user@salesdb:[30866]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC::@:[20467]:LOG: archiver\nprocess (PID 30799) exited with exit code 1\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:DETAIL:\nThe postmaster has commanded this server process 
to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(44482):devops_user@salesdb:[32328]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:DETAIL:\nThe postmaster has commanded this server process to roll back the current transaction\nand exit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2020-02-17 22:34:32 UTC:(47882):devops_user@salesdb:[8005]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43876):devops_user@salesdb:[30962]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(50538):devops_user@salesdb:[21539]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(51502):devops_user@salesdb:[32651]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(47162):devops_user@salesdb:[4288]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(46806):devops_user@salesdb:[32316]:HINT:\nIn a moment you should be able to reconnect to the database and repeat 
your\ncommand.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:32 UTC:(43862):devops_user@salesdb:[30918]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n[... the same WARNING/DETAIL/HINT triplet repeats here for several dozen other server processes (devops_user, digitaladmin and bi_user sessions on salesdb and postgres, plus background workers) between 22:34:32 and 22:34:33 UTC; omitted for brevity ...]\n2020-02-17 22:34:32 UTC:(60818):devops_user@salesdb:[30831]:WARNING:\nterminating connection because of crash of another server process\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:DETAIL:\nThe postmaster has commanded this server process to roll back the current\ntransaction and 
exit, because another server process exited abnormally and\npossibly corrupted shared memory.\n2020-02-17 22:34:33 UTC:(60818):devops_user@salesdb:[30831]:HINT:\nIn a moment you should be able to reconnect to the database and repeat your\ncommand.\n2020-02-17 22:34:33 UTC::@:[20467]:LOG: all\nserver processes terminated; reinitializing\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: database\nsystem was interrupted; last known up at 2020-02-17 22:33:33 UTC\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: database\nsystem was not properly shut down; automatic recovery in progress\n2020-02-17 22:34:33 UTC::@:[19633]:LOG: redo\nstarts at 15B0/D5FCA110\n2020-02-17 22:34:34 UTC:(54556):digitaladmin@salesdb:[19637]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:34 UTC:(54557):digitaladmin@salesdb:[19639]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58713):devops_user@salesdb:[19638]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:34 UTC:(58714):devops_user@salesdb:[19644]:FATAL:\nthe database system is in recovery mode\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: invalid\nrecord length at 15B0/E4C32288: wanted 24, got 0\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: redo\ndone at 15B0/E4C32260\n2020-02-17 22:34:35 UTC::@:[19633]:LOG: last\ncompleted transaction was at log time 2020-02-17 22:34:31.864309+00\n2020-02-17 22:34:35 UTC::@:[19633]:LOG:\ncheckpoint starting: end-of-recovery immediate\n \n\n \nThank you.", "msg_date": "Tue, 18 Feb 2020 17:46:28 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n>after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade.�\n>\n>spec: RAM 16gb,4vCore\n>Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!!�\n>\n\nThis bug report (in fact, we don't know if it's a bug, but OK) is\nwoefully incomplete :-(\n\nThe server log is mostly useless, unfortunately - it just says a bunch\nof processes were killed (by OOM killer, most likely) so the server has\nto restart. It tells us nothing about why the backends consumed so much\nmemory etc.\n\nWhat would help us is knowing how much memory was the backend (killed by\nOOM) consuming, which should be in dmesg.\n\nAnd then MemoryContextStats output - you need to connect to a backend\nconsuming a lot of memory using gdb (before it gets killed) and do\n\n (gdb) p MemoryContextStats(TopMemoryContext)\n (gdb) q\n\nand show us the output printed into server log. If it's a backend\nrunning a query, it'd help knowing the execution plan.\n\nIt would also help knowing the non-default configuration, i.e. stuff\ntweaked in postgresql.conf.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Feb 2020 18:58:57 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> This bug report (in fact, we don't know if it's a bug, but OK) is\n> woefully incomplete :-(\n\nAlso, cross-posting to ten(!) 
different mailing lists, most of which are\noff-topic for this, is incredibly rude.\n\nPlease read\n\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nand try to follow its suggestions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Feb 2020 13:07:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Below are the same configurations in the .conf file before and after the upgrade:\nshow max_connections = 1743\nshow shared_buffers = \"4057840kB\"\nshow effective_cache_size = \"8115688kB\"\nshow maintenance_work_mem = \"259MB\"\nshow checkpoint_completion_target = \"0.9\"\nshow wal_buffers = \"16MB\"\nshow default_statistics_target = \"100\"\nshow random_page_cost = \"1.1\"\nshow effective_io_concurrency = \"200\"\nshow work_mem = \"4MB\"\nshow min_wal_size = \"256MB\"\nshow max_wal_size = \"2GB\"\nshow max_worker_processes = \"8\"\nshow max_parallel_workers_per_gather = \"2\"\n\nHere are some system logs:\n2020-02-16 21:01:17 UTC [-] The database process was killed by the OS due to excessive memory consumption.\n2020-02-16 13:41:16 UTC [-] The database process was killed by the OS due to excessive memory consumption.\n\nI identified one simple select which is consuming more memory, and here is the query plan:\n\n\"Result  (cost=0.00..94891854.11 rows=3160784900 width=288)\"\n\"  ->  Append  (cost=0.00..47480080.61 rows=3160784900 width=288)\"\n\"        ->  Seq Scan on msghist  (cost=0.00..15682777.12 rows=3129490000 width=288)\"\n\"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\"        ->  Seq Scan on msghist msghist_1  (cost=0.00..189454.50 rows=31294900 width=288)\"\n\"              Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\nThanks,\n\n\n On Tuesday, February 18, 2020, 09:59:37 AM PST, Tomas Vondra <[email protected]> wrote: \n \n On Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n>after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade. \n>\n>spec: RAM 16gb,4vCore\n>Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!! \n>\n\nThis bug report (in fact, we don't know if it's a bug, but OK) is\nwoefully incomplete :-(\n\nThe server log is mostly useless, unfortunately - it just says a bunch\nof processes were killed (by OOM killer, most likely) so the server has\nto restart. It tells us nothing about why the backends consumed so much\nmemory etc.\n\nWhat would help us is knowing how much memory was the backend (killed by\nOOM) consuming, which should be in dmesg.\n\nAnd then MemoryContextStats output - you need to connect to a backend\nconsuming a lot of memory using gdb (before it gets killed) and do\n\n  (gdb) p MemoryContextStats(TopMemoryContext)\n  (gdb) q\n\nand show us the output printed into server log. If it's a backend\nrunning a query, it'd help knowing the execution plan.\n\nIt would also help knowing the non-default configuration, i.e. 
stuff\ntweaked in postgresql.conf.\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Tue, 18 Feb 2020 18:10:08 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Tomas Vondra <[email protected]> writes:\n> > This bug report (in fact, we don't know if it's a bug, but OK) is\n> > woefully incomplete :-(\n> \n> Also, cross-posting to ten(!) 
different mailing lists, most of which are\n> off-topic for this, is incredibly rude.\n\nNot to mention a couple -owner aliases that hit moderators directly..\n\nI continue to feel that we should disallow this kind of cross-posting in\nthe list management software.\n\nThanks,\n\nStephen", "msg_date": "Tue, 18 Feb 2020 13:33:33 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj <[email protected]> wrote:\n>\n> Below are the same configurations ins .conf file before and after updagrade\n>\n> show max_connections; = 1743\n> show shared_buffers = \"4057840kB\"\n> show effective_cache_size = \"8115688kB\"\n> show maintenance_work_mem = \"259MB\"\n> show checkpoint_completion_target = \"0.9\"\n> show wal_buffers = \"16MB\"\n> show default_statistics_target = \"100\"\n> show random_page_cost = \"1.1\"\n> show effective_io_concurrency =\" 200\"\n> show work_mem = \"4MB\"\n> show min_wal_size = \"256MB\"\n> show max_wal_size = \"2GB\"\n> show max_worker_processes = \"8\"\n> show max_parallel_workers_per_gather = \"2\"\n\nThis smells like oom killer for sure. how did you resolve some of\nthese values. In particular max_connections and effective_cache_size.\n How much memory is in this server?\n\nmerlin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:38:35 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Please don't cross post to different lists.\n\n Pgsql-general <[email protected]>,\n PgAdmin Support <[email protected]>,\n PostgreSQL Hackers <[email protected]>,\n \"[email protected]\" <[email protected]>,\n Postgres Performance List <[email protected]>,\n Pg Bugs <[email protected]>,\n Pgsql-admin <[email protected]>,\n Pgadmin-hackers <[email protected]>,\n PostgreSQL Hackers <[email protected]>,\n Pgsql-pkg-yum <[email protected]>\n\n\nOn Tue, Feb 18, 2020 at 05:46:28PM +0000, Nagaraj Raj wrote:\n> after upgrade Postgres to v9.6.11 from v9.6.9 DB running out of memory issues no world load has changed before and after upgrade.�\n> \n> spec: RAM 16gb,4vCore\n\nOn Tue, Feb 18, 2020 at 06:10:08PM +0000, Nagaraj Raj wrote:\n> Below are the same configurations ins .conf file before and after updagrade\n> show max_connections; = 1743\n> show shared_buffers = \"4057840kB\"\n> show work_mem = \"4MB\"\n> show maintenance_work_mem = \"259MB\"\n\n> Any bug reported like this or suggestions on how to fix this issue? I appreciate the response..!!�\n> \n> I could see below error logs and due to this reason database more often going into recovery mode,�\n\nWhat do you mean \"more often\" ? Did the crash/OOM happen before the upgrade, too ?\n\n> 2020-02-17 22:34:32 UTC::@:[20467]:LOG: server process (PID32731) was terminated by signal 9: Killed\n> 2020-02-17 22:34:32 UTC::@:[20467]:DETAIL:Failed process was running: selectinfo_starttime,info_starttimel,info_conversationid,info_status,classification_type,intentname,confidencescore,versions::text,messageidfrom salesdb.liveperson.intents where info_status='CLOSE' AND ( 1=1 ) AND ( 1=1)\n\nThat process is the one which was killed (in this case) but maybe not the\nprocess responsible for using lots of *private* RAM. Is\nsalesdb.liveperson.intents a view ? 
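(One quick way to check that, as a minimal sketch against the standard catalogs -- the schema/relation names are taken from the log line above and may need adjusting:\n\n  select schemaname, viewname, definition\n  from pg_views\n  where schemaname = 'liveperson' and viewname = 'intents';\n\nIf that returns a row, the relation is a view and the definition shows what it expands to.)\n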
What is the query plain for that query ?\n(Run it with \"explain\").\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions#EXPLAIN_.28ANALYZE.2C_BUFFERS.29.2C_not_just_EXPLAIN\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nOn Tue, Feb 18, 2020 at 06:10:08PM +0000, Nagaraj Raj wrote:\n> I identified one simple select which consuming more memory and here is the query plan,\n> \n> \"Result� (cost=0.00..94891854.11 rows=3160784900 width=288)\"\"� ->� Append� (cost=0.00..47480080.61 rows=3160784900 width=288)\"\"� � � � ->� Seq Scan on msghist� (cost=0.00..15682777.12 rows=3129490000 width=288)\"\"� � � � � � � Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\"� � � � ->� Seq Scan on msghist msghist_1� (cost=0.00..189454.50 rows=31294900 width=288)\"\"� � � � � � � Filter: (((data -> 'info'::text) ->> 'status'::text) = 'CLOSE'::text)\"\n\nThis is almost certainly unrelated. It looks like that query did a seq scan\nand accessed a large number of tuples (and pages from \"shared_buffers\"), which\nthe OS then shows as part of that processes memory, even though *shared*\nbuffers are not specific to that one process.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:40:37 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby <[email protected]> wrote:\n> This is almost certainly unrelated. It looks like that query did a seq scan\n> and accessed a large number of tuples (and pages from \"shared_buffers\"), which\n> the OS then shows as part of that processes memory, even though *shared*\n> buffers are not specific to that one process.\n\nYeah. This server looks highly overprovisioned, I'm in particularly\nsuspicious of the high max_connections setting. To fetch this out\nI'd be tracking connections in the database, both idle and not idle,\ncontinuously. The solution is most likely to install a connection\npooler such as pgbouncer.\n\nmerlin\n\n\n", "msg_date": "Tue, 18 Feb 2020 12:49:50 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "Hi Merlin,\nIts configured high value for max_conn, but active and idle session have never crossed the count 50.\nDB Size: 20 GBTable size: 30MBRAM: 16GBvC: 4\n\nyes, its view earlier I posted and here is there query planner for new actual view,\n\"Append  (cost=0.00..47979735.57 rows=3194327000 width=288)\"\"  ->  Seq Scan on msghist  (cost=0.00..15847101.30 rows=3162700000 width=288)\"\"  ->  Seq Scan on msghist msghist_1  (cost=0.00..189364.27 rows=31627000 width=288)\"\n\nThanks,Rj On Tuesday, February 18, 2020, 10:51:02 AM PST, Merlin Moncure <[email protected]> wrote: \n \n On Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby <[email protected]> wrote:\n> This is almost certainly unrelated.  It looks like that query did a seq scan\n> and accessed a large number of tuples (and pages from \"shared_buffers\"), which\n> the OS then shows as part of that processes memory, even though *shared*\n> buffers are not specific to that one process.\n\nYeah.  This server looks highly overprovisioned, I'm in particularly\nsuspicious of the high max_connections setting.  To fetch this out\nI'd be tracking connections in the database, both idle and not idle,\ncontinuously.  
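(As a minimal sketch of that kind of check -- plain pg_stat_activity with 9.6 column names, to be run periodically:\n\n  select state, count(*)\n  from pg_stat_activity\n  group by state\n  order by count(*) desc;\n\nA large and growing 'idle' count against max_connections = 1743 would point the same way.)\n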
The solution is most likely to install a connection\npooler such as pgbouncer.\n\nmerlin\n \n\nHi Merlin,Its configured high value for max_conn, but active and idle session have never crossed the count 50.DB Size: 20 GBTable size: 30MBRAM: 16GBvC: 4yes, its view earlier I posted and here is there query planner for new actual view,\"Append  (cost=0.00..47979735.57 rows=3194327000 width=288)\"\"  ->  Seq Scan on msghist  (cost=0.00..15847101.30 rows=3162700000 width=288)\"\"  ->  Seq Scan on msghist msghist_1  (cost=0.00..189364.27 rows=31627000 width=288)\"Thanks,Rj\n\n\n\n On Tuesday, February 18, 2020, 10:51:02 AM PST, Merlin Moncure <[email protected]> wrote:\n \n\n\nOn Tue, Feb 18, 2020 at 12:40 PM Justin Pryzby <[email protected]> wrote:> This is almost certainly unrelated.  It looks like that query did a seq scan> and accessed a large number of tuples (and pages from \"shared_buffers\"), which> the OS then shows as part of that processes memory, even though *shared*> buffers are not specific to that one process.Yeah.  This server looks highly overprovisioned, I'm in particularlysuspicious of the high max_connections setting.  To fetch this outI'd be tracking connections in the database, both idle and not idle,continuously.  The solution is most likely to install a connectionpooler such as pgbouncer.merlin", "msg_date": "Tue, 18 Feb 2020 19:10:20 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On 2020-02-18 18:10:08 +0000, Nagaraj Raj wrote:\n> Below are the same configurations ins .conf file before and after updagrade\n> \n> show max_connections; = 1743\n[...]\n> show work_mem = \"4MB\"\n\nThis is an interesting combination: So you expect a large number of\nconnections but each one should use very little RAM?\n\n[...]\n\n> here is some sys logs,\n> \n> 2020-02-16 21:01:17 UTC [-]The database process was killed by the OS\n> due to excessive memory consumption. \n> 2020-02-16 13:41:16 UTC [-]The database process was killed by the OS\n> due to excessive memory consumption. \n\nThe oom-killer produces a huge block of messages which you can find with\ndmesg or in your syslog. It looks something like this:\n\nFeb 19 19:06:53 akran kernel: [3026711.344817] platzangst invoked oom-killer: gfp_mask=0x15080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null), order=1, oom_score_adj=0\nFeb 19 19:06:53 akran kernel: [3026711.344819] platzangst cpuset=/ mems_allowed=0-1\nFeb 19 19:06:53 akran kernel: [3026711.344825] CPU: 7 PID: 2012 Comm: platzangst Tainted: G OE 4.15.0-74-generic #84-Ubuntu\nFeb 19 19:06:53 akran kernel: [3026711.344826] Hardware name: Dell Inc. 
PowerEdge R630/02C2CP, BIOS 2.1.7 06/16/2016\nFeb 19 19:06:53 akran kernel: [3026711.344827] Call Trace:\nFeb 19 19:06:53 akran kernel: [3026711.344835] dump_stack+0x6d/0x8e\nFeb 19 19:06:53 akran kernel: [3026711.344839] dump_header+0x71/0x285\n...\nFeb 19 19:06:53 akran kernel: [3026711.344893] RIP: 0033:0x7f292d076b1c\nFeb 19 19:06:53 akran kernel: [3026711.344894] RSP: 002b:00007fff187ef240 EFLAGS: 00000246 ORIG_RAX: 0000000000000038\nFeb 19 19:06:53 akran kernel: [3026711.344895] RAX: ffffffffffffffda RBX: 00007fff187ef240 RCX: 00007f292d076b1c\nFeb 19 19:06:53 akran kernel: [3026711.344896] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000001200011\nFeb 19 19:06:53 akran kernel: [3026711.344897] RBP: 00007fff187ef2b0 R08: 00007f292d596740 R09: 00000000009d43a0\nFeb 19 19:06:53 akran kernel: [3026711.344897] R10: 00007f292d596a10 R11: 0000000000000246 R12: 0000000000000000\nFeb 19 19:06:53 akran kernel: [3026711.344898] R13: 0000000000000020 R14: 0000000000000000 R15: 0000000000000000\nFeb 19 19:06:53 akran kernel: [3026711.344899] Mem-Info:\nFeb 19 19:06:53 akran kernel: [3026711.344905] active_anon:14862589 inactive_anon:1133875 isolated_anon:0\nFeb 19 19:06:53 akran kernel: [3026711.344905] active_file:467 inactive_file:371 isolated_file:0\nFeb 19 19:06:53 akran kernel: [3026711.344905] unevictable:0 dirty:3 writeback:0 unstable:0\n...\nFeb 19 19:06:53 akran kernel: [3026711.344985] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name\nFeb 19 19:06:53 akran kernel: [3026711.344997] [ 823] 0 823 44909 0 106496 121 0 lvmetad\nFeb 19 19:06:53 akran kernel: [3026711.344999] [ 1354] 0 1354 11901 3 135168 112 0 rpcbind\nFeb 19 19:06:53 akran kernel: [3026711.345000] [ 1485] 0 1485 69911 99 180224 159 0 accounts-daemon\n...\nFeb 19 19:06:53 akran kernel: [3026711.345345] Out of memory: Kill process 25591 (postgres) score 697 or sacrifice child\nFeb 19 19:06:53 akran kernel: [3026711.346563] Killed process 25591 (postgres) total-vm:71116948kB, anon-rss:52727552kB, file-rss:0kB, shmem-rss:3023196kB\n\nThe most interesting lines are usually the last two: In this case they\ntell us that the process killed was a postgres process and it occupied\nabout 71 GB of virtual memory at that time. That was clearly the right\nchoice since the machine has only 64 GB of RAM. Sometimes it is less\nclear and then you might want to scroll through the (usually long) list\nof processes to see if there are other processes which need suspicious\namounts of RAM or maybe if there are just more of them than you would\nexpect.\n\n\n> I identified one simple select which consuming more memory and here is the\n> query plan,\n> \n> \n> \n> \"Result (cost=0.00..94891854.11 rows=3160784900 width=288)\"\n> \" -> Append (cost=0.00..47480080.61 rows=3160784900 width=288)\"\n> \" -> Seq Scan on msghist (cost=0.00..15682777.12 rows=3129490000 width\n> =288)\"\n> \" Filter: (((data -> 'info'::text) ->> 'status'::text) =\n> 'CLOSE'::text)\"\n> \" -> Seq Scan on msghist msghist_1 (cost=0.00..189454.50 rows=31294900\n> width=288)\"\n> \" Filter: (((data -> 'info'::text) ->> 'status'::text) =\n> 'CLOSE'::text)\"\n\nSo: How much memory does that use? It produces a huge number of rows\n(more than 3 billion) but it doesn't do much with them, so I wouldn't\nexpect the postgres process itself to use much memory. Are you sure its\nthe postgres process and not the application which uses a lot of memory?\n\n hp\n\n-- \n _ | Peter J. 
Holzer | Story must make more sense than reality.\n|_|_) | |\n| | | [email protected] | -- Charles Stross, \"Creative writing\n__/ | http://www.hjp.at/ | challenge!\"", "msg_date": "Sun, 23 Feb 2020 11:19:28 +0100", "msg_from": "\"Peter J. Holzer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" }, { "msg_contents": "On Tue, Feb 18, 2020 at 1:10 PM Nagaraj Raj <[email protected]> wrote:\n>\n> Hi Merlin,\n>\n> Its configured high value for max_conn, but active and idle session have never crossed the count 50.\n>\n> DB Size: 20 GB\n> Table size: 30MB\n> RAM: 16GB\n> vC: 4\n>\n>\n> yes, its view earlier I posted and here is there query planner for new actual view,\n>\n> \"Append (cost=0.00..47979735.57 rows=3194327000 width=288)\"\n> \" -> Seq Scan on msghist (cost=0.00..15847101.30 rows=3162700000 width=288)\"\n> \" -> Seq Scan on msghist msghist_1 (cost=0.00..189364.27 rows=31627000 width=288)\"\n\n\nDatabase size of 20GB is not believable; you have table with 3Bil\nrows, this ought to be 60GB+ mill+ all by itself. How did you get\n20GB figure?\n\n\nmerlin\n\n\n", "msg_date": "Mon, 24 Feb 2020 10:34:03 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB running out of memory issues after upgrade" } ]
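The concrete advice in this thread reduces to two checks: whether anything near the configured ceiling of 1743 connections is ever actually used, and which process the kernel chose when it ran out of memory. A minimal SQL sketch of the first check, assuming superuser access; the 200-connection figure below is an illustrative assumption for a setup fronted by a pooler such as pgbouncer, not a value reported in the thread:

-- Count backends by state to see whether the 1743-connection ceiling is ever approached
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;

-- With a pooler capping client connections, the server-side ceiling can come down
ALTER SYSTEM SET max_connections = 200;  -- illustrative value; takes effect only after a restart

For the second check, Peter's walkthrough of the oom-killer output applies as-is: the final 'Killed process ... total-vm' line in dmesg/syslog names the victim process and how much virtual memory it had mapped, which is what distinguishes a genuinely bloated backend from one that merely touched a lot of shared_buffers pages.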
[ { "msg_contents": "Hi\n\nI have both hdd and ssd disk on the postgres server. The cluster is\nright now created on the hdd only. I am considering using a tablespace\nto put some highly used postgres object on the ssd disk. Of course the\nssd is small compared to the hdd, and I need to choose carefully what\nobjects are stored on that side.\n\nI am wondering what kind of object (indexes, data) would benefit from\nssd. The database primary/foreign keys are highly used and there is\nalmost no sequencial scan. However the server has a large amount of ram\nmemory and I suspect all of those indexes are already cached in ram.\n\nI have read that tablespaces introduce overhead of maintenance and\nintroduce complication for replication. But on the other hand I have\nthis ssd disk ready for something.\n\nAny recommandation ?\n\n-- \nnicolas paris\n\n\n", "msg_date": "Wed, 19 Feb 2020 05:42:41 +0100", "msg_from": "Nicolas PARIS <[email protected]>", "msg_from_op": true, "msg_subject": "tablespace to benefit from ssd ?" }, { "msg_contents": "Unless this is about reads exclusively I would start with putting wal on\nssd.\nWhat you might also do, is create separate filesystems (lvm). You can then\nkeep track of io with iostat per filesystem and see what would benefit\nmost. And see storage size usage also.\nAnd you could use lvm to move filesystems to and from ssd hot. So just\ndowntime once.\n\nPlease share your end findings in this thread too.\n\nOp wo 19 feb. 2020 om 04:42 schreef Nicolas PARIS <[email protected]>\n\n> Hi\n>\n> I have both hdd and ssd disk on the postgres server. The cluster is\n> right now created on the hdd only. I am considering using a tablespace\n> to put some highly used postgres object on the ssd disk. Of course the\n> ssd is small compared to the hdd, and I need to choose carefully what\n> objects are stored on that side.\n>\n> I am wondering what kind of object (indexes, data) would benefit from\n> ssd. The database primary/foreign keys are highly used and there is\n> almost no sequencial scan. However the server has a large amount of ram\n> memory and I suspect all of those indexes are already cached in ram.\n>\n> I have read that tablespaces introduce overhead of maintenance and\n> introduce complication for replication. But on the other hand I have\n> this ssd disk ready for something.\n>\n> Any recommandation ?\n>\n> --\n> nicolas paris\n>\n>\n> --\n\n\n[image: EDB Postgres] <http://www.enterprisedb.com/>\nSebastiaan Alexander Mannem\nProduct Manager\nAnthony Fokkerweg 1\n1059 CM Amsterdam, The Netherlands\n<http://maps.google.com/maps?f=q&source=embed&hl=en&geocode=&q=Anthony+Fokkerweg+1+1059+CM+Amsterdam%2C+The+Netherlands&ie=UTF8&hq=&hnear=Anthony+Fokkerweg+1+1059+CM+Amsterdam%2C+The+Netherlands&iwloc=near>\n\nT: +31 6 82521560 <+31682521560>\nwww.edbpostgres.com\n[image: Blog Feed] <http://blogs.enterprisedb.com/> [image: Facebook]\n<https://www.facebook.com/EnterpriseDB> [image: Twitter]\n<https://twitter.com/EDBPostgres> [image: LinkedIn]\n<https://www.linkedin.com/company/14958?trk=tyah> [image: Google+]\n<https://plus.google.com/108046988421677398468>\n\nUnless this is about reads exclusively I would start with putting wal on ssd. What you might also do, is create separate filesystems (lvm). You can then keep track of io with iostat per filesystem and see what would benefit most. And see storage size usage also. And you could use lvm to move filesystems to and from ssd hot. So just downtime once.Please share your end findings in this thread too.Op wo 19 feb. 
2020 om 04:42 schreef Nicolas PARIS <[email protected]>Hi\n\nI have both hdd and ssd disk on the postgres server. The cluster is\nright now created on the hdd only. I am considering using a tablespace\nto put some highly used postgres object on the ssd disk. Of course the\nssd is small compared to the hdd, and I need to choose carefully what\nobjects are stored on that side.\n\nI am wondering what kind of object (indexes, data) would benefit from\nssd. The database primary/foreign keys are highly used and there is\nalmost no sequencial scan. However the server has a large amount of ram\nmemory and I suspect all of those indexes are already cached in ram.\n\nI have read that tablespaces introduce overhead of maintenance and\nintroduce complication for replication. But on the other hand I have\nthis ssd disk ready for something.\n\nAny recommandation ?\n\n-- \nnicolas paris\n\n\n--    Sebastiaan Alexander MannemProduct ManagerAnthony Fokkerweg 11059 CM Amsterdam, The NetherlandsT: +31 6 82521560www.edbpostgres.com", "msg_date": "Wed, 19 Feb 2020 08:02:08 +0000", "msg_from": "Sebastiaan Mannem <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace to benefit from ssd ?" }, { "msg_contents": "On Wed, 2020-02-19 at 05:42 +0100, Nicolas PARIS wrote:\n> I have both hdd and ssd disk on the postgres server. The cluster is\n> right now created on the hdd only. I am considering using a tablespace\n> to put some highly used postgres object on the ssd disk. Of course the\n> ssd is small compared to the hdd, and I need to choose carefully what\n> objects are stored on that side.\n> \n> I am wondering what kind of object (indexes, data) would benefit from\n> ssd. The database primary/foreign keys are highly used and there is\n> almost no sequencial scan. However the server has a large amount of ram\n> memory and I suspect all of those indexes are already cached in ram.\n> \n> I have read that tablespaces introduce overhead of maintenance and\n> introduce complication for replication. But on the other hand I have\n> this ssd disk ready for something.\n> \n> Any recommandation ?\n\nPut \"pg_stat_statements\" into \"shared_preload_libraries\" and restart the server.\n\nSet \"track_io_timing\" to on.\n\nLet your workload run for at least a day.\n\nInstall the \"pg_stat_statements\" extension and run\n\n SELECT blk_read_time, query\n FROM pg_stat_statements\n ORDER BY blk_read_time DESC LIMIT 20;\n\nThat will give you the 20 queries that spent the most time reading from I/O.\n\nExamine those queries with EXPLAIN (ANALYZE, BUFFERS) and see which tables or\nindexes cause the I/O.\n\nThen you have a list of candidates for the fast tablespace.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 19 Feb 2020 11:48:38 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace to benefit from ssd ?" }, { "msg_contents": "On Wed, Feb 19, 2020 at 05:42:41AM +0100, Nicolas PARIS wrote:\n> Hi\n> \n> I have both hdd and ssd disk on the postgres server. The cluster is\n> right now created on the hdd only. I am considering using a tablespace\n> to put some highly used postgres object on the ssd disk. Of course the\n> ssd is small compared to the hdd, and I need to choose carefully what\n> objects are stored on that side.\n> \n> I am wondering what kind of object (indexes, data) would benefit from\n> ssd. The database primary/foreign keys are highly used and there is\n> almost no sequencial scan. 
However the server has a large amount of ram\n> memory and I suspect all of those indexes are already cached in ram.\n> \n> I have read that tablespaces introduce overhead of maintenance and\n> introduce complication for replication. But on the other hand I have\n> this ssd disk ready for something.\n\nTo start with, you can:\nALTER SYSTEM SET temp_tablespaces='ssd';\n\nThat will improve speed of sorts which spill to disk (if any).\n\n+1 to using LVM for purposes of instrumentation.\n\nYou can also:\nALTER TABLESPACE ssd SET (random_page_cost=1.0);\n\nIt'd be difficult to suggest anything further without knowing about your\nworkload or performance goals or issues.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Feb 2020 07:08:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace to benefit from ssd ?" }, { "msg_contents": "On Tue, Feb 18, 2020, 11:42 PM Nicolas PARIS <[email protected]>\nwrote:\n\n> However the server has a large amount of ram\n> memory and I suspect all of those indexes are already cached in ram.\n>\n\nThen there may be no benefit to be had.\n\n>\n\n> I have read that tablespaces introduce overhead of maintenance and\n> introduce complication for replication.\n\n\nYes, they are a nuisance for the humans who need to document, maintain,\nconfigure, etc. And they can induce administrators into making mistakes\nwhich can prolong outages or cause data loss.\n\nBut on the other hand I have\n> this ssd disk ready for something.\n>\n\nThat isn't a good reason. Unless your users are complaining, or you think\nthey will be soon as things scale up, or you think they would be\ncomplaining of they weren't too apathetic to, then I would make no change\nthat adds complexity just because the hardware exists.\n\nBut I would turn on track_io_timing, and load pg_stat_statements, and\nprobably set up auto_explain. That way when problems do arrive, you will\nbe prepared to tackle them with empirical data.\n\nCheers,\n\nJeff\n\nOn Tue, Feb 18, 2020, 11:42 PM Nicolas PARIS <[email protected]> wrote: However the server has a large amount of ram\nmemory and I suspect all of those indexes are already cached in ram.Then there may be no benefit to be had.\n\nI have read that tablespaces introduce overhead of maintenance and\nintroduce complication for replication. Yes, they are a nuisance for the humans who need to document, maintain, configure, etc. And they can induce administrators into making mistakes which can prolong outages or cause data loss.But on the other hand I have\nthis ssd disk ready for something.That isn't a good reason.  Unless your users are complaining, or you think they will be soon as things scale up, or you think they would be complaining of they weren't too apathetic to, then I would make no change that adds complexity just because the hardware exists.But I would turn on track_io_timing, and load pg_stat_statements, and probably set up auto_explain.  That way when problems do arrive, you will be prepared to tackle them with empirical data.Cheers,Jeff", "msg_date": "Thu, 20 Feb 2020 12:30:43 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tablespace to benefit from ssd ?" } ]
[ { "msg_contents": "Hi Team,\n\nCan we have multiple tablespaces with in a database in postgres?\n\nCan we have a table on different tablespace same as Oracle?\n\nThanks,\n\n\n\n\n\n\n\n\n\nHi Team,\n \nCan we have multiple tablespaces with in a database in postgres?\n\n \nCan we have a table on different tablespace same as Oracle?\n \nThanks,", "msg_date": "Fri, 21 Feb 2020 05:34:21 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "Can we have multiple tablespaces with in a database." }, { "msg_contents": "On Fri, Feb 21, 2020 at 11:04 AM Daulat Ram <[email protected]>\nwrote:\n\n> Hi Team,\n>\n>\n>\n> Can we have multiple tablespaces with in a database in postgres?\n>\n> Yes.\n\n\n> Can we have a table on different tablespace same as Oracle?\n>\nYes -- specify TABLESPACE option while creating that table.\n\nRegards,\nAmul\n\nOn Fri, Feb 21, 2020 at 11:04 AM Daulat Ram <[email protected]> wrote:\n\n\nHi Team,\n \nCan we have multiple tablespaces with in a database in postgres?\n\nYes. \nCan we have a table on different tablespace same as Oracle?Yes -- specify TABLESPACE option while creating that table.Regards,Amul", "msg_date": "Fri, 21 Feb 2020 11:16:27 +0530", "msg_from": "amul sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "Please pick a single list to post to. Performance seems like the\nunnecessary one here.\n\nOn Thu, Feb 20, 2020 at 10:34 PM Daulat Ram <[email protected]>\nwrote:\n\n> Can we have multiple tablespaces with in a database in postgres?\n>\n\nI fell as if I'm missing something in your question given the presence of\nthe \"CREATE TABLESPACE\" SQL command and the related non-command\ndocumentation covered here:\n\nhttps://www.postgresql.org/docs/12/manage-ag-tablespaces.html\n\n\n> Can we have a table on different tablespace same as Oracle?\n>\n\nThere is no provision to assign two tablespaces to a single physical\ntable. To the benefit of those who don't use the other product you may\nwish to say exactly what you want to do instead of comparing it to\nsomething that many people likely have never used.\n\nDavid J.\n\nPlease pick a single list to post to.  Performance seems like the unnecessary one here.On Thu, Feb 20, 2020 at 10:34 PM Daulat Ram <[email protected]> wrote:\n\n\nCan we have multiple tablespaces with in a database in postgres?I fell as if I'm missing something in your question given the presence of the \"CREATE TABLESPACE\" SQL command and the related non-command documentation covered here:https://www.postgresql.org/docs/12/manage-ag-tablespaces.html   \nCan we have a table on different tablespace same as Oracle?There is no provision to assign two tablespaces to a single physical table.  To the benefit of those who don't use the other product you may wish to say exactly what you want to do instead of comparing it to something that many people likely have never used.David J.", "msg_date": "Thu, 20 Feb 2020 22:46:55 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "On 2/20/20 11:46 PM, David G. Johnston wrote:\n> Please pick a single list to post to.  
Performance seems like the \n> unnecessary one here.\n>\n> On Thu, Feb 20, 2020 at 10:34 PM Daulat Ram <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Can we have multiple tablespaces with in a database in postgres?\n>\n>\n> I fell as if I'm missing something in your question given the presence of \n> the \"CREATE TABLESPACE\" SQL command and the related non-command \n> documentation covered here:\n>\n> https://www.postgresql.org/docs/12/manage-ag-tablespaces.html\n>\n> Can we have a table on different tablespace same as Oracle?\n>\n>\n> There is no provision to assign two tablespaces to a single physical \n> table.  To the benefit of those who don't use the other product you may \n> wish to say exactly what you want to do instead of comparing it to \n> something that many people likely have never used.\n\nIn some RDBMSs, you can partition tables across multiple tablespaces, but \nthey don't partition tables in anything close to the trigger-based method \nthat Postgres does (at least in 9.6).\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n On 2/20/20 11:46 PM, David G. Johnston wrote:\n\n\n\n\nPlease pick\n a single list to post to.  Performance seems like the\n unnecessary one here.\n\n\nOn Thu, Feb\n 20, 2020 at 10:34 PM Daulat Ram <[email protected]>\n wrote:\n\n\n\n\n\n\nCan we have multiple tablespaces\n with in a database in postgres?\n\n\n\n\n\n\n\nI fell as\n if I'm missing something in your question given the\n presence of the \"CREATE TABLESPACE\" SQL command and the\n related non-command documentation covered here:\n\n\nhttps://www.postgresql.org/docs/12/manage-ag-tablespaces.html  \n \n\n\n\n\nCan we have a table on different\n tablespace same as Oracle?\n\n\n\n\n\n\nThere is no\n provision to assign two tablespaces to a single physical\n table.  To the benefit of those who don't use the other\n product you may wish to say exactly what you want to do\n instead of comparing it to something that many people\n likely have never used.\n\n\n\n\n\n In some RDBMSs, you can partition tables across multiple\n tablespaces, but they don't partition tables in anything close to\n the trigger-based method that Postgres does (at least in 9.6).\n\n-- \n Angular momentum makes the world go 'round.", "msg_date": "Fri, 21 Feb 2020 00:00:55 -0600", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "Hi Amul ,\r\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\nAs I know we can create database on tablespace\r\n\r\n 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\n 2. 
Create database test tablespace ‘conn_tbs';\r\n\r\n\r\n\r\nCan we have multiple tablespaces with in a database in postgres?\r\n\r\nYes.\r\n\r\n\r\n\r\nFrom: amul sul <[email protected]>\r\nSent: Friday, February 21, 2020 11:16 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\nOn Fri, Feb 21, 2020 at 11:04 AM Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi Team,\r\n\r\nCan we have multiple tablespaces with in a database in postgres?\r\n\r\nYes.\r\n\r\nCan we have a table on different tablespace same as Oracle?\r\nYes -- specify TABLESPACE option while creating that table.\r\n\r\nRegards,\r\nAmul\r\n\n\n\n\n\n\n\n\n\nHi Amul ,\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\n\nAs I know we can create database on tablespace\n\n\nCREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\nCreate database test tablespace ‘conn_tbs';\n \n \n \nCan we have multiple tablespaces with in a database in postgres?\r\n\n \nYes.\n \n \n \nFrom: amul sul <[email protected]> \nSent: Friday, February 21, 2020 11:16 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]; [email protected]\nSubject: Re: Can we have multiple tablespaces with in a database.\n \n\n\n\n \n\n\n \n\n\nOn Fri, Feb 21, 2020 at 11:04 AM Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi Team,\n \nCan we have multiple tablespaces with in a database in postgres?\r\n\n \n\n\n\n\nYes.\n\n\n \n\n\n\n\nCan we have a table on different tablespace same as Oracle?\n\n\n\n\nYes -- specify TABLESPACE option while creating that table.\n\n\n \n\n\nRegards,\n\n\nAmul", "msg_date": "Fri, 21 Feb 2020 06:01:30 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." }, { "msg_contents": "On Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]>\nwrote:\n\n> Hi Amul ,\n>\n> Please share the examples how we can create no. of tablespaces for a\n> single database and how we can use them.\n>\n> As I know we can create database on tablespace\n>\n> 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION\n> '/mnt/pgdatatest/test/pgdata/conn_tbs';\n> 2. Create database test tablespace ‘conn_tbs';\n>\n> Maybe I have misunderstood your question; there is no option to specify\nmore\nthan one tablespace for the database, but you can place the objects of that\ndatabase to different tablespaces (if options available for that object).\nE.g. you can place a table in than conn_tbs tablespace.\n\nIf option is not specified then by default that object will be created\nin conn_tbs.\n\nRegards,\nAmul\n\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]> wrote:\n\n\nHi Amul ,\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\n\nAs I know we can create database on tablespace\n\n\nCREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\nCreate database test tablespace ‘conn_tbs';Maybe I have misunderstood your question; there is no option to specify morethan one tablespace for the database, but you can place the objects of thatdatabase to different tablespaces (if options available for that object).E.g. 
you can place a table in than conn_tbs tablespace.If option is not specified then by default that object will be createdin conn_tbs.Regards,Amul", "msg_date": "Fri, 21 Feb 2020 11:48:00 +0530", "msg_from": "amul sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "That will be great if you share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\r\n\r\nAlso , what are the differences between Oracle and Postgres Tablespacs?\r\n\r\nThanks,\r\n\r\n\r\nFrom: amul sul <[email protected]>\r\nSent: Friday, February 21, 2020 11:48 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi Amul ,\r\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\nAs I know we can create database on tablespace\r\n\r\n 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\n 2. Create database test tablespace ‘conn_tbs';\r\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. you can place a table in than conn_tbs tablespace.\r\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\r\n\r\nRegards,\r\nAmul\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nThat will be great if you  share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\n \nAlso , what are the differences between Oracle and Postgres Tablespacs?\n \nThanks,\n \n \nFrom: amul sul <[email protected]> \nSent: Friday, February 21, 2020 11:48 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]; [email protected]\nSubject: Re: Can we have multiple tablespaces with in a database.\n \n\n\n\n \n\n\n \n\n\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi\r\nAmul ,\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\n\nAs I know we can create database on tablespace\n\n\r\nCREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\nCreate database test tablespace ‘conn_tbs';\n\n\n\n\n\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. you can place a table in than conn_tbs tablespace.\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\n\n\n \n\n\nRegards,\n\n\nAmul", "msg_date": "Fri, 21 Feb 2020 06:23:11 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." }, { "msg_contents": "On Fri, Feb 21, 2020 at 11:53 AM Daulat Ram <[email protected]>\nwrote:\n\n> That will be great if you share any doc where it’s mentioned that we\n> can’t use multiple tablespace for a single database. 
I have to assist my\n> Dev team regarding tablespaces.\n>\n>\n>\n> Also , what are the differences between Oracle and Postgres Tablespacs?\n>\n>\n>\nTo be honest I don't know anything about Oracle.\n\nRegards,\nAmul\n\nOn Fri, Feb 21, 2020 at 11:53 AM Daulat Ram <[email protected]> wrote:\n\n\nThat will be great if you  share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\n \nAlso , what are the differences between Oracle and Postgres Tablespacs?\n To be honest I don't know anything about Oracle. Regards,Amul", "msg_date": "Fri, 21 Feb 2020 11:56:33 +0530", "msg_from": "amul sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "\n\n> On Feb 20, 2020, at 22:23, Daulat Ram <[email protected]> wrote:\n> \n> That will be great if you share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\n\nA single PostgreSQL database can have any number of tablespaces. Each table has to be in one specific tablespace, although a table can be in one tablespace and its indexes in a different one.\n\nIf a PostgreSQL table is partitioned, each partition can be in a different tablespace.\n\nOracle \"style\" tends to involve a lot of tablespaces in a database; this is much less commonly done in PostgreSQL. In general, you only need to create tablespace in a small number of circumstances:\n\n(a) You need more space than the current database volume allows, and moving the database to a larger volume is inconvenient;\n(b) You have multiple volumes with significantly different access characteristics (like an HDD array and some SSDs), and you want to distribute database objects to take advantage of that (for example, put commonly-used large indexes on the SSDs).\n\nPostgreSQL tablespaces do increase the administrative overhead of the database, and shouldn't be created unless there is a compelling need for them./\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n\n", "msg_date": "Thu, 20 Feb 2020 22:27:06 -0800", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "You mean we can have only single default tablespace for a database but the database objects can be created on different-2 tablespaces?\r\n\r\nCan you please share the Doc URL for your suggestions given in trail mail.\r\n\r\nPlease correct me.\r\n\r\n-----Original Message-----\r\nFrom: Christophe Pettus <[email protected]> \r\nSent: Friday, February 21, 2020 11:57 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: amul sul <[email protected]>; [email protected]\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\n> On Feb 20, 2020, at 22:23, Daulat Ram <[email protected]> wrote:\r\n> \r\n> That will be great if you share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\r\n\r\nA single PostgreSQL database can have any number of tablespaces. 
Each table has to be in one specific tablespace, although a table can be in one tablespace and its indexes in a different one.\r\n\r\nIf a PostgreSQL table is partitioned, each partition can be in a different tablespace.\r\n\r\nOracle \"style\" tends to involve a lot of tablespaces in a database; this is much less commonly done in PostgreSQL. In general, you only need to create tablespace in a small number of circumstances:\r\n\r\n(a) You need more space than the current database volume allows, and moving the database to a larger volume is inconvenient;\r\n(b) You have multiple volumes with significantly different access characteristics (like an HDD array and some SSDs), and you want to distribute database objects to take advantage of that (for example, put commonly-used large indexes on the SSDs).\r\n\r\nPostgreSQL tablespaces do increase the administrative overhead of the database, and shouldn't be created unless there is a compelling need for them./\r\n\r\n--\r\n-- Christophe Pettus\r\n [email protected]\r\n\r\n", "msg_date": "Fri, 21 Feb 2020 06:34:30 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." }, { "msg_contents": "You mean we can have only single default tablespace for a database but the database objects can be created on different-2 tablespaces?\r\n\r\nFrom: amul sul <[email protected]>\r\nSent: Friday, February 21, 2020 11:48 AM\r\nTo: Daulat Ram <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi Amul ,\r\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\nAs I know we can create database on tablespace\r\n\r\n 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\n 2. Create database test tablespace ‘conn_tbs';\r\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. you can place a table in than conn_tbs tablespace.\r\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\r\n\r\nRegards,\r\nAmul\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \nYou mean we can have only single default tablespace for a database but the database objects can be created on different-2 tablespaces?\n \nFrom: amul sul <[email protected]> \nSent: Friday, February 21, 2020 11:48 AM\nTo: Daulat Ram <[email protected]>\nCc: [email protected]; [email protected]\nSubject: Re: Can we have multiple tablespaces with in a database.\n \n\n\n\n \n\n\n \n\n\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]> wrote:\n\n\n\n\nHi\r\nAmul ,\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\n\nAs I know we can create database on tablespace\n\n\r\nCREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\nCreate database test tablespace ‘conn_tbs';\n\n\n\n\n\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. 
you can place a table in than conn_tbs tablespace.\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\n\n\n \n\n\nRegards,\n\n\nAmul", "msg_date": "Fri, 21 Feb 2020 06:36:39 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." }, { "msg_contents": "\n\n> On Feb 20, 2020, at 22:34, Daulat Ram <[email protected]> wrote:\n> \n> You mean we can have only single default tablespace for a database but the database objects can be created on different-2 tablespaces?\n\nYes.\n\n> Can you please share the Doc URL for your suggestions given in trail mail.\n\nhttps://www.postgresql.org/docs/current/manage-ag-tablespaces.html\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n\n", "msg_date": "Thu, 20 Feb 2020 22:36:42 -0800", "msg_from": "Christophe Pettus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." }, { "msg_contents": "What are the differences between Oracle and Postgres tablespace.\n\nCan we assign tablespace during Postgres schema creation . As I know in Oracle we assign the tablespace during user/schema creation. \n\n-----Original Message-----\nFrom: Christophe Pettus <[email protected]> \nSent: Friday, February 21, 2020 12:07 PM\nTo: Daulat Ram <[email protected]>\nCc: amul sul <[email protected]>; [email protected]\nSubject: Re: Can we have multiple tablespaces with in a database.\n\n\n\n> On Feb 20, 2020, at 22:34, Daulat Ram <[email protected]> wrote:\n> \n> You mean we can have only single default tablespace for a database but the database objects can be created on different-2 tablespaces?\n\nYes.\n\n> Can you please share the Doc URL for your suggestions given in trail mail.\n\nhttps://www.postgresql.org/docs/current/manage-ag-tablespaces.html\n\n--\n-- Christophe Pettus\n [email protected]\n\n\n\n", "msg_date": "Fri, 21 Feb 2020 07:17:50 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." }, { "msg_contents": "On Fri, Feb 21, 2020 at 12:48 PM Daulat Ram <[email protected]>\nwrote:\n\n> What are the differences between Oracle and Postgres tablespace.\n>\n> I hope this[1] wiki page will help you.\n\n\n> Can we assign tablespace during Postgres schema creation . As I know in\n> Oracle we assign the tablespace during user/schema creation.\n>\nAFAIK, there is no syntax to assign tablespace to a schema.\n\nRegards,\nAmul\n\n1] https://wiki.postgresql.org/wiki/PostgreSQL_for_Oracle_DBAs\n\nOn Fri, Feb 21, 2020 at 12:48 PM Daulat Ram <[email protected]> wrote:What are the differences between Oracle and Postgres tablespace.\nI hope this[1] wiki page will help you.  \nCan we assign tablespace during Postgres schema creation . As I know in Oracle we assign the tablespace during user/schema creation. AFAIK, there is no syntax to assign tablespace to a schema.Regards,Amul1] https://wiki.postgresql.org/wiki/PostgreSQL_for_Oracle_DBAs", "msg_date": "Fri, 21 Feb 2020 17:20:39 +0530", "msg_from": "amul sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can we have multiple tablespaces with in a database." 
}, { "msg_contents": "Hi,\r\n\r\nYou can create more than one tablespace and assign different objects on different tablespaces.\r\nFor example :\r\nCREATE TABLESPACE test_data OWNER test LOCATION '/tmp/test_data';\r\nCREATE TABLESPACE test_idx OWNER test LOCATION '/tmp/test_idx';\r\n\r\nCREATE DATABASE test WITH TABLESPACE = test_data;\r\n\r\nThen, for example, when create table and index, you can specify\r\nCREATE TABLE test1\r\n\r\n(\r\n id int NOT NULL GENERATED ALWAYS AS IDENTITY,\r\n comment text,\r\n CONSTRAINT pk_test PRIMARY KEY\r\n (\r\n id\r\n ) USING INDEX TABLESPACE TEST_IDX\r\n) TABLESPACE TEST_DATA;\r\n\r\nCREATE INDEX sk_comment ON test1( comment ) TABLESPACE TEST_IDX;\r\n\r\nPatrick Fiche\r\nDatabase Engineer, Aqsacom Sas.\r\nc. 33 6 82 80 69 96\r\n\r\n[01-03_AQSA_Main_Corporate_Logo_JPEG_White_Low.jpg]<http://www.aqsacom.com/>\r\n\r\nFrom: Daulat Ram <[email protected]>\r\nSent: Friday, February 21, 2020 7:23 AM\r\nTo: amul sul <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: RE: Can we have multiple tablespaces with in a database.\r\n\r\nThat will be great if you share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\r\n\r\nAlso , what are the differences between Oracle and Postgres Tablespacs?\r\n\r\nThanks,\r\n\r\n\r\nFrom: amul sul <[email protected]<mailto:[email protected]>>\r\nSent: Friday, February 21, 2020 11:48 AM\r\nTo: Daulat Ram <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi Amul ,\r\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\nAs I know we can create database on tablespace\r\n\r\n 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\n 2. Create database test tablespace ‘conn_tbs';\r\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. you can place a table in than conn_tbs tablespace.\r\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\r\n\r\nRegards,\r\nAmul", "msg_date": "Fri, 21 Feb 2020 16:24:28 +0000", "msg_from": "Patrick FICHE <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Can we have multiple tablespaces with in a database." 
}, { "msg_contents": "Thanks Patrick ,\r\n\r\nFrom: Patrick FICHE <[email protected]>\r\nSent: Friday, February 21, 2020 9:54 PM\r\nTo: Daulat Ram <[email protected]>; amul sul <[email protected]>\r\nCc: [email protected]\r\nSubject: RE: Can we have multiple tablespaces with in a database.\r\n\r\nHi,\r\n\r\nYou can create more than one tablespace and assign different objects on different tablespaces.\r\nFor example :\r\nCREATE TABLESPACE test_data OWNER test LOCATION '/tmp/test_data';\r\nCREATE TABLESPACE test_idx OWNER test LOCATION '/tmp/test_idx';\r\n\r\nCREATE DATABASE test WITH TABLESPACE = test_data;\r\n\r\nThen, for example, when create table and index, you can specify\r\nCREATE TABLE test1\r\n\r\n(\r\n id int NOT NULL GENERATED ALWAYS AS IDENTITY,\r\n comment text,\r\n CONSTRAINT pk_test PRIMARY KEY\r\n (\r\n id\r\n ) USING INDEX TABLESPACE TEST_IDX\r\n) TABLESPACE TEST_DATA;\r\n\r\nCREATE INDEX sk_comment ON test1( comment ) TABLESPACE TEST_IDX;\r\n\r\nPatrick Fiche\r\nDatabase Engineer, Aqsacom Sas.\r\nc. 33 6 82 80 69 96\r\n\r\n[01-03_AQSA_Main_Corporate_Logo_JPEG_White_Low.jpg]<http://www.aqsacom.com/>\r\n\r\nFrom: Daulat Ram <[email protected]<mailto:[email protected]>>\r\nSent: Friday, February 21, 2020 7:23 AM\r\nTo: amul sul <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\r\nSubject: RE: Can we have multiple tablespaces with in a database.\r\n\r\nThat will be great if you share any doc where it’s mentioned that we can’t use multiple tablespace for a single database. I have to assist my Dev team regarding tablespaces.\r\n\r\nAlso , what are the differences between Oracle and Postgres Tablespacs?\r\n\r\nThanks,\r\n\r\n\r\nFrom: amul sul <[email protected]<mailto:[email protected]>>\r\nSent: Friday, February 21, 2020 11:48 AM\r\nTo: Daulat Ram <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\r\nSubject: Re: Can we have multiple tablespaces with in a database.\r\n\r\n\r\n\r\nOn Fri, Feb 21, 2020 at 11:31 AM Daulat Ram <[email protected]<mailto:[email protected]>> wrote:\r\nHi Amul ,\r\nPlease share the examples how we can create no. of tablespaces for a single database and how we can use them.\r\nAs I know we can create database on tablespace\r\n\r\n 1. CREATE TABLESPACE conn_tbs OWNER enterprisedb LOCATION '/mnt/pgdatatest/test/pgdata/conn_tbs';\r\n 2. Create database test tablespace ‘conn_tbs';\r\nMaybe I have misunderstood your question; there is no option to specify more\r\nthan one tablespace for the database, but you can place the objects of that\r\ndatabase to different tablespaces (if options available for that object).\r\nE.g. you can place a table in than conn_tbs tablespace.\r\n\r\nIf option is not specified then by default that object will be created\r\nin conn_tbs.\r\n\r\nRegards,\r\nAmul", "msg_date": "Sat, 22 Feb 2020 04:45:14 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Can we have multiple tablespaces with in a database." } ]
[ { "msg_contents": "Hi,\nI am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>\nBelow are the details, Let me know how can I improve query performance on partitioned table.\nFollowing is the schema CREATE TABLE TransactionLog (\n    txid character varying(36) NOT NULL,    txnDetails character varying(64),    loggingtime timestamp(6) without time zone DEFAULT LOCALTIMESTAMP,) PARTITION BY RANGE(loggingtime);\nCREATE TABLE IF NOT EXISTS TransactionLog_20200223 PARTITION OF TransactionLog FOR VALUES FROM ('2020-02-23') TO ('2020-02-24');CREATE UNIQUE INDEX TransactionLog_20200223_UnqTxId ON TransactionLog_20200223 (txnid);\n\nFollowing is explain analyze result when I query Directly on partition. Planning time ~**0.080 ms** (average of 10 execution)postgres=> EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS,TIMING,SUMMARY) select txnDetails FROM mra_part.TransactionLog_20200223 WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756';                                                                             QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using TransactionLog_20200223_UnqTxId on TransactionLog_20200223 (cost=0.57..4.61 rows=1 width=10) (actual time=0.039..0.040 rows=1 loops=1)   Output: txnDetails   Index Cond: ((TransactionLog_20200223.txnid)::text = 'febd139d-1b7f-4564-a004-1b3474e51756'::text)   Buffers: shared hit=5 **Planning Time: 0.081 ms** Execution Time: 0.056 ms(6 rows)\n\nFollowing is explain analyze result when I query by parent-table. Planning time **6.198 ms** (average of 10 execution)postgres=> EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS,TIMING,SUMMARY)  select txnDetails FROM mtdauthlog WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756' AND loggingtime >= '2020-02-23'::timestamp without time zone AND loggingtime < '2020-02-24'::timestamp without time zone;                                                                                              QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Append  (cost=0.57..4.62 rows=1 width=10) (actual time=0.036..0.037 rows=1 loops=1)   Buffers: shared hit=5   ->  Index Scan using TransactionLog_20200223_UnqTxId on TransactionLog_20200223  (cost=0.57..4.61 rows=1 width=10) (actual time=0.035..0.036 rows=1 loops=1)         Output: TransactionLog_20200223.txnDetails         Index Cond: ((TransactionLog_20200223.txnid)::text = 'febd139d-1b7f-4564-a004-1b3474e51756'::text)         Filter: ((TransactionLog_20200223.loggingtime >= '2020-02-23 00:00:00'::timestamp without time zone) AND (TransactionLog_20200223.loggingtime < '2020-02-24 00:00:00'::timestamp without time zone))         Buffers: shared hit=5 **Planning Time: 6.231 ms** Execution Time: 0.076 ms(9 rows)\nThere are around ~200 child partitions. 
Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\nThanks and Regards,\nRavi Garg,\n\n\nHi,I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>Below are the details, Let me know how can I improve query performance on partitioned table.Following is the schema CREATE TABLE TransactionLog (    txid character varying(36) NOT NULL,    txnDetails character varying(64),    loggingtime timestamp(6) without time zone DEFAULT LOCALTIMESTAMP,) PARTITION BY RANGE(loggingtime);CREATE TABLE IF NOT EXISTS TransactionLog_20200223 PARTITION OF TransactionLog FOR VALUES FROM ('2020-02-23') TO ('2020-02-24');CREATE UNIQUE INDEX TransactionLog_20200223_UnqTxId ON TransactionLog_20200223 (txnid);Following is explain analyze result when I query Directly on partition. Planning time ~**0.080 ms** (average of 10 execution)postgres=> EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS,TIMING,SUMMARY) select txnDetails FROM mra_part.TransactionLog_20200223 WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756';                                                                             QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using TransactionLog_20200223_UnqTxId on TransactionLog_20200223 (cost=0.57..4.61 rows=1 width=10) (actual time=0.039..0.040 rows=1 loops=1)   Output: txnDetails   Index Cond: ((TransactionLog_20200223.txnid)::text = 'febd139d-1b7f-4564-a004-1b3474e51756'::text)   Buffers: shared hit=5 **Planning Time: 0.081 ms** Execution Time: 0.056 ms(6 rows)Following is explain analyze result when I query by parent-table. Planning time **6.198 ms** (average of 10 execution)postgres=> EXPLAIN (ANALYZE,VERBOSE,COSTS,BUFFERS,TIMING,SUMMARY)  select txnDetails FROM mtdauthlog WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756' AND loggingtime >= '2020-02-23'::timestamp without time zone AND loggingtime < '2020-02-24'::timestamp without time zone;                                                                                              QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Append  (cost=0.57..4.62 rows=1 width=10) (actual time=0.036..0.037 rows=1 loops=1)   Buffers: shared hit=5   ->  Index Scan using TransactionLog_20200223_UnqTxId on TransactionLog_20200223  (cost=0.57..4.61 rows=1 width=10) (actual time=0.035..0.036 rows=1 loops=1)         Output: TransactionLog_20200223.txnDetails         Index Cond: ((TransactionLog_20200223.txnid)::text = 'febd139d-1b7f-4564-a004-1b3474e51756'::text)         Filter: ((TransactionLog_20200223.loggingtime >= '2020-02-23 00:00:00'::timestamp without time zone) AND (TransactionLog_20200223.loggingtime < '2020-02-24 00:00:00'::timestamp without time zone))         Buffers: shared hit=5 **Planning Time: 6.231 ms** Execution Time: 0.076 ms(9 rows)There are around ~200 child partitions. 
Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bitThanks and Regards,Ravi Garg,", "msg_date": "Sun, 23 Feb 2020 09:56:30 +0000 (UTC)", "msg_from": "Ravi Garg <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "On Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:\n> Hi,\n> I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>\n\nThat's probably to be expected under pg11:\n\nhttps://www.postgresql.org/docs/11/ddl-partitioning.html\n|Too many partitions can mean longer query planning times...\n|It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added\n\n> There are around ~200 child partitions. Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n\nHow large are the partitions and how many indexes each, and how large are they?\nEach partition will be stat()ed and each index will be open()ed and read() for\nevery query. This was resolved in pg12:\nhttps://commitfest.postgresql.org/21/1778/\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 23 Feb 2020 04:12:09 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "On Sun, Feb 23, 2020 at 04:12:09AM -0600, Justin Pryzby wrote:\n> How large are the partitions and how many indexes each, and how large are they?\n> Each partition will be stat()ed ... for every query.\n\nI should have said that's every 1GB \"segment\" is stat()ed for every query.\n\n> This was resolved in pg12:\n> https://commitfest.postgresql.org/21/1778/\n\n+ https://www.postgresql.org/about/featurematrix/detail/320/\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 23 Feb 2020 04:46:09 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "Hi Justin,\nThanks for response.\nUnfortunately we will not be able to migrate to PG12 any time soon. \n - There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.\n - Our use case is limited to simple selects (we don't join with the other tables) however, we are expecting ~70 million records inserted per day and there would be couple of updates on each records where average record size would be ~ 1.5 KB. 
\n - Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance. \n\n - We need to look current partition and previous partition for all of our use-cases/queries.\nCan you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).\n\nThanks and Regards,\nRavi Garg,\nMob : +91-98930-66610 \n\n On Sunday, 23 February, 2020, 03:42:13 pm IST, Justin Pryzby <[email protected]> wrote: \n \n On Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:\n> Hi,\n> I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>\n\nThat's probably to be expected under pg11:\n\nhttps://www.postgresql.org/docs/11/ddl-partitioning.html\n|Too many partitions can mean longer query planning times...\n|It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added\n\n> There are around ~200 child partitions. Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n\nHow large are the partitions and how many indexes each, and how large are they?\nEach partition will be stat()ed and each index will be open()ed and read() for\nevery query.  This was resolved in pg12:\nhttps://commitfest.postgresql.org/21/1778/\n\n-- \nJustin", "msg_date": "Sun, 23 Feb 2020 10:57:29 +0000 (UTC)", "msg_from": "Ravi Garg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "On Sun, Feb 23, 2020 at 10:57:29AM +0000, Ravi Garg wrote:\n> - Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance. \n\nI didn't hear how large the tables and indexes are.\n\n> - We need to look current partition and previous partition for all of our use-cases/queries.\n\nDo you mean that a given query is only going to hit 2 partitions ? Or do you\nmean that all but the most recent 2 partitions are \"archival\" and won't be\nneeded by future queries ?\n\n> Can you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).\n\nYou should determine what an acceptable planning speed is, or the best balance\nof planning/execution time. 
Try to detach half your current partitions and, if\nthat gives acceptable performance, then partition by day/2 or more. You could\nmake a graph of (planning and total) time vs npartitions, since I think it's\nlikely to be nonlinear.\n\nI believe others have reported improved performance under v11 with larger\nnumbers of partitions, by using \"partitions of partitions\". So you could try\nmaking partitions by month themselves partitioned by day.\n\n> - Our use case is limited to simple selects (we don't join with the other\n> tables) however,�we are expecting ~70 million records inserted per day\n> and�there would be couple of updates on each records where average record\n> size would be ~ 1.5 KB.\n\n> shared_buffers | 1048576\n\nIf you care about INSERT performance, you probably need to make at least a\nsingle partition's index fit within shared_buffers (or set shared_buffers such\nthat it fits). Use transactions around your inserts. If your speed is not\nlimited by I/O, you could further use multiple VALUES(),() inserts, or maybe\nprepared statements. Maybe synchronous_commit=off.\n\nIf you care about (consistent) SELECT performance, you should consider\nVACUUMing the tables after bulk inserts, to set hint bits (and since\nnon-updated tuples won't be hit by autovacuum). Or maybe VACUUM FREEZE to\nfreeze tuples (since it sounds like a typical page is unlikely to ever be\nupdated).\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 23 Feb 2020 09:10:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "> ... txid character varying(36) NOT NULL,\n> ... WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756'\n> There is only one index (unique index btree) on 'txnID' (i.e. transaction\nID) character varying(36). Which we are creating on each partition.\n\nIF txnid is real UUID , then you can test the\nhttps://www.postgresql.org/docs/11/datatype-uuid.html performance\nsee\nhttps://stackoverflow.com/questions/29880083/postgresql-uuid-type-performance\nimho: it should be better.\n\nbest,\n Imre\n\n\nRavi Garg <[email protected]> ezt írta (időpont: 2020. febr. 23., V,\n11:57):\n\n> Hi Justin,\n>\n> Thanks for response.\n>\n> Unfortunately we will not be able to migrate to PG12 any time soon.\n>\n> - There is only one index (unique index btree) on 'txnID' (i.e.\n> transaction ID) character varying(36). Which we are creating on each\n> partition.\n> - Our use case is limited to simple selects (we don't join with the\n> other tables) however, we are expecting ~70 million records inserted\n> per day and there would be couple of updates on each records where average\n> record size would be ~ 1.5 KB.\n> - Currently we are thinking to have Daily partitions and as we need to\n> keep 6 months of data thus 180 Partitions.However we have liberty to reduce\n> the number of partitions to weekly/fortnightly/monthly, If we get\n> comparable performance.\n> - We need to look current partition and previous partition for all of\n> our use-cases/queries.\n>\n> Can you please suggest what sort of combinations/partition strategy we can\n> test considering data-volume/vacuum etc. 
Also let me know if some of the\n> pg_settings can help us tuning this (I have attached my pg_settings).\n>\n>\n> Thanks and Regards,\n> Ravi Garg,\n> Mob : +91-98930-66610\n>\n>\n> On Sunday, 23 February, 2020, 03:42:13 pm IST, Justin Pryzby <\n> [email protected]> wrote:\n>\n>\n> On Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:\n> > Hi,\n> > I am looking to Range Partition one of my table (i.e. TransactionLog) in\n> PostgreSQL 11.While evaluating query performance difference between the\n> un-partitioned and partitioned table I am getting huge difference in\n> planning time. Planning time is very high on partitioned table.Similarly\n> when I query by specifying partition name directly in query the planning\n> time is much less **0.081 ms** as compared to when I query based on\n> partition table (parent table) name in query, where planning time **6.231\n> ms** (Samples below).<br>\n>\n> That's probably to be expected under pg11:\n>\n> https://www.postgresql.org/docs/11/ddl-partitioning.html\n> |Too many partitions can mean longer query planning times...\n> |It is also important to consider the overhead of partitioning during\n> query planning and execution. The query planner is generally able to handle\n> partition hierarchies with up to a few hundred partitions fairly well,\n> provided that typical queries allow the query planner to prune all but a\n> small number of partitions. Planning times become longer and memory\n> consumption becomes higher as more partitions are added\n>\n>\n> > There are around ~200 child partitions. Partition pruning\n> enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu,\n> compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n>\n>\n> How large are the partitions and how many indexes each, and how large are\n> they?\n> Each partition will be stat()ed and each index will be open()ed and read()\n> for\n> every query. This was resolved in pg12:\n> https://commitfest.postgresql.org/21/1778/\n>\n> --\n> Justin\n>\n>\n\n> ...  txid character varying(36) NOT NULL,> ... WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756'> There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.IF txnid is real UUID , then you can test the https://www.postgresql.org/docs/11/datatype-uuid.html performancesee https://stackoverflow.com/questions/29880083/postgresql-uuid-type-performanceimho: it should be better.best, ImreRavi Garg <[email protected]> ezt írta (időpont: 2020. febr. 23., V, 11:57):Hi Justin,Thanks for response.Unfortunately we will not be able to migrate to PG12 any time soon.There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.Our use case is limited to simple selects (we don't join with the other tables) however, we are expecting ~70 million records inserted per day and there would be couple of updates on each records where average record size would be ~ 1.5 KB. Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance.We need to look current partition and previous partition for all of our use-cases/queries.Can you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. 
Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).Thanks and Regards,Ravi Garg,Mob : +91-98930-66610\n\n\n\n\n On Sunday, 23 February, 2020, 03:42:13 pm IST, Justin Pryzby <[email protected]> wrote:\n \n\n\nOn Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:> Hi,> I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>That's probably to be expected under pg11:https://www.postgresql.org/docs/11/ddl-partitioning.html|Too many partitions can mean longer query planning times...|It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added> There are around ~200 child partitions. Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bitHow large are the partitions and how many indexes each, and how large are they?Each partition will be stat()ed and each index will be open()ed and read() forevery query.  This was resolved in pg12:https://commitfest.postgresql.org/21/1778/-- Justin", "msg_date": "Sun, 23 Feb 2020 17:18:26 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "Hi Justin,\n>I didn't hear how large the tables and indexes are.+-------------------------------------------+------------------+--------------------------------------------+|              table_name                   | pg_relation_size |  pg_total_relation_size - pg_relation_size |+-------------------------------------------+------------------+--------------------------------------------+| TransactionLog_20200213                   |      95646646272 | 4175699968                                 || TransactionLog_20200212                   |      95573344256 | 4133617664                                 || TransactionLog_20200211                   |      91477336064 | 3956457472                                 || TransactionLog_20200210                   |       8192000000 |  354344960                                 || TransactionLog_20200214                   |       6826672128 |  295288832                                 || TransactionLog_20200220                   |       1081393152 |   89497600                                 || pg_catalogpg_attribute                    |          3088384 |    2220032                                 || TransactionLog_20190925                   |          1368064 |      90112  (174 such partitions)          |+-------------------------------------------+------------------+--------------------------------------------+ > Do you mean that a given query is only going to hit 2 partitions ?  
Or do you> mean that all but the most recent 2 partitions are \"archival\" and won't be\n> needed by future queries ?\nYes all queries will hit only 2 partitions (e.g. if we do daily partition, queries will hit only today's and yesterday's partition).\n> You should determine what an acceptable planning speed is, or the best balance> of planning/execution time.  Try to detach half your current partitions and, if> that gives acceptable performance, then partition by day/2 or more.  You could> make a graph of (planning and total) time vs npartitions, since I think it's> likely to be nonlinear.> I believe others have reported improved performance under v11 with larger> numbers of partitions, by using \"partitions of partitions\".  So you could try> making partitions by month themselves partitioned by day.\nFYI, these are the observations I am getting with various number of partition and a multilevel partition with respect to Un-Partitioned.+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Testcase      | Partition Count      | Records in     | Select        | Select       | Update        | Update       | insert        | insert       ||               |                      | each Partition | planning (ms) | execute (ms) | planning (ms) | execute (ms) | planning (ms) | execute (ms) |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Single Level  |   6                  | 1000           |  1.162        | 0.045        |  2.112        | 0.115        | 1.261         | 0.178        || Partition     |  30                  | 1000           |  2.879        | 0.049        |  5.146        | 0.13         | 1.243         | 0.211        ||               | 200                  | 1000           | 18.479        | 0.087        | 31.385        | 0.18         | 1.253         | 0.468        |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Multi Level   | 6 Partition having   | 1000           | 3.6032        | 0.0695       | x             | x            | x             | x            || Partition     | 30 subpartition each |                |               |              |               |              |               |              |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| UnPartitioned | NA                   | 430 Million    | 0.0875        | 0.0655       | x             | x            | x             | x            |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+\n> If you care about INSERT performance, you probably need to make at least a> single partition's index fit within shared_buffers (or set shared_buffers such> that it fits).  Use transactions around your inserts.  If your speed is not> limited by I/O, you could further use multiple VALUES(),() inserts, or maybe> prepared statements.  Maybe synchronous_commit=off.> > If you care about (consistent) SELECT performance, you should consider> VACUUMing the tables after bulk inserts, to set hint bits (and since> non-updated tuples won't be hit by autovacuum).  
Or maybe VACUUM FREEZE to> freeze tuples (since it sounds like a typical page is unlikely to ever be> updated).\nSure, I'll evaluate these settings, thanks.\nThanks and Regards,\nRavi Garg \n\n On Sunday, 23 February, 2020, 08:40:58 pm IST, Justin Pryzby <[email protected]> wrote: \n \n On Sun, Feb 23, 2020 at 10:57:29AM +0000, Ravi Garg wrote:\n>    - Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance.  \n\nI didn't hear how large the tables and indexes are.\n\n>    - We need to look current partition and previous partition for all of our use-cases/queries.\n\nDo you mean that a given query is only going to hit 2 partitions ?  Or do you\nmean that all but the most recent 2 partitions are \"archival\" and won't be\nneeded by future queries ?\n\n> Can you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).\n\nYou should determine what an acceptable planning speed is, or the best balance\nof planning/execution time.  Try to detach half your current partitions and, if\nthat gives acceptable performance, then partition by day/2 or more.  You could\nmake a graph of (planning and total) time vs npartitions, since I think it's\nlikely to be nonlinear.\n\nI believe others have reported improved performance under v11 with larger\nnumbers of partitions, by using \"partitions of partitions\".  So you could try\nmaking partitions by month themselves partitioned by day.\n\n>    - Our use case is limited to simple selects (we don't join with the other\n>    tables) however, we are expecting ~70 million records inserted per day\n>    and there would be couple of updates on each records where average record\n>    size would be ~ 1.5 KB.\n\n>  shared_buffers                        | 1048576\n\nIf you care about INSERT performance, you probably need to make at least a\nsingle partition's index fit within shared_buffers (or set shared_buffers such\nthat it fits).  Use transactions around your inserts.  If your speed is not\nlimited by I/O, you could further use multiple VALUES(),() inserts, or maybe\nprepared statements.  Maybe synchronous_commit=off.\n\nIf you care about (consistent) SELECT performance, you should consider\nVACUUMing the tables after bulk inserts, to set hint bits (and since\nnon-updated tuples won't be hit by autovacuum).  
Or maybe VACUUM FREEZE to\nfreeze tuples (since it sounds like a typical page is unlikely to ever be\nupdated).\n\n-- \nJustin\n\n\n \nHi Justin,>I didn't hear how large the tables and indexes are.+-------------------------------------------+------------------+--------------------------------------------+|              table_name                   | pg_relation_size |  pg_total_relation_size - pg_relation_size |+-------------------------------------------+------------------+--------------------------------------------+| TransactionLog_20200213                   |      95646646272 | 4175699968                                 || TransactionLog_20200212                   |      95573344256 | 4133617664                                 || TransactionLog_20200211                   |      91477336064 | 3956457472                                 || TransactionLog_20200210                   |       8192000000 |  354344960                                 || TransactionLog_20200214                   |       6826672128 |  295288832                                 || TransactionLog_20200220                   |       1081393152 |   89497600                                 || pg_catalogpg_attribute                    |          3088384 |    2220032                                 || TransactionLog_20190925                   |          1368064 |      90112  (174 such partitions)          |+-------------------------------------------+------------------+--------------------------------------------+ > Do you mean that a given query is only going to hit 2 partitions ?  Or do you> mean that all but the most recent 2 partitions are \"archival\" and won't be> needed by future queries ?Yes all queries will hit only 2 partitions (e.g. if we do daily partition, queries will hit only today's and yesterday's partition).> You should determine what an acceptable planning speed is, or the best balance> of planning/execution time.  Try to detach half your current partitions and, if> that gives acceptable performance, then partition by day/2 or more.  You could> make a graph of (planning and total) time vs npartitions, since I think it's> likely to be nonlinear.> I believe others have reported improved performance under v11 with larger> numbers of partitions, by using \"partitions of partitions\".  
So you could try> making partitions by month themselves partitioned by day.FYI, these are the observations I am getting with various number of partition and a multilevel partition with respect to Un-Partitioned.+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Testcase      | Partition Count      | Records in     | Select        | Select       | Update        | Update       | insert        | insert       ||               |                      | each Partition | planning (ms) | execute (ms) | planning (ms) | execute (ms) | planning (ms) | execute (ms) |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Single Level  |   6                  | 1000           |  1.162        | 0.045        |  2.112        | 0.115        | 1.261         | 0.178        || Partition     |  30                  | 1000           |  2.879        | 0.049        |  5.146        | 0.13         | 1.243         | 0.211        ||               | 200                  | 1000           | 18.479        | 0.087        | 31.385        | 0.18         | 1.253         | 0.468        |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| Multi Level   | 6 Partition having   | 1000           | 3.6032        | 0.0695       | x             | x            | x             | x            || Partition     | 30 subpartition each |                |               |              |               |              |               |              |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+| UnPartitioned | NA                   | 430 Million    | 0.0875        | 0.0655       | x             | x            | x             | x            |+---------------+----------------------+----------------+---------------+--------------+---------------+--------------+---------------+--------------+> If you care about INSERT performance, you probably need to make at least a> single partition's index fit within shared_buffers (or set shared_buffers such> that it fits).  Use transactions around your inserts.  If your speed is not> limited by I/O, you could further use multiple VALUES(),() inserts, or maybe> prepared statements.  Maybe synchronous_commit=off.> > If you care about (consistent) SELECT performance, you should consider> VACUUMing the tables after bulk inserts, to set hint bits (and since> non-updated tuples won't be hit by autovacuum).  Or maybe VACUUM FREEZE to> freeze tuples (since it sounds like a typical page is unlikely to ever be> updated).Sure, I'll evaluate these settings, thanks.Thanks and Regards,Ravi Garg\n\n\n\n\n On Sunday, 23 February, 2020, 08:40:58 pm IST, Justin Pryzby <[email protected]> wrote:\n \n\n\nOn Sun, Feb 23, 2020 at 10:57:29AM +0000, Ravi Garg wrote:>    - Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance.  I didn't hear how large the tables and indexes are.>    - We need to look current partition and previous partition for all of our use-cases/queries.Do you mean that a given query is only going to hit 2 partitions ?  
Or do youmean that all but the most recent 2 partitions are \"archival\" and won't beneeded by future queries ?> Can you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).You should determine what an acceptable planning speed is, or the best balanceof planning/execution time.  Try to detach half your current partitions and, ifthat gives acceptable performance, then partition by day/2 or more.  You couldmake a graph of (planning and total) time vs npartitions, since I think it'slikely to be nonlinear.I believe others have reported improved performance under v11 with largernumbers of partitions, by using \"partitions of partitions\".  So you could trymaking partitions by month themselves partitioned by day.>    - Our use case is limited to simple selects (we don't join with the other>    tables) however, we are expecting ~70 million records inserted per day>    and there would be couple of updates on each records where average record>    size would be ~ 1.5 KB.>  shared_buffers                        | 1048576If you care about INSERT performance, you probably need to make at least asingle partition's index fit within shared_buffers (or set shared_buffers suchthat it fits).  Use transactions around your inserts.  If your speed is notlimited by I/O, you could further use multiple VALUES(),() inserts, or maybeprepared statements.  Maybe synchronous_commit=off.If you care about (consistent) SELECT performance, you should considerVACUUMing the tables after bulk inserts, to set hint bits (and sincenon-updated tuples won't be hit by autovacuum).  Or maybe VACUUM FREEZE tofreeze tuples (since it sounds like a typical page is unlikely to ever beupdated).-- Justin", "msg_date": "Mon, 24 Feb 2020 19:40:16 +0000 (UTC)", "msg_from": "Ravi Garg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" }, { "msg_contents": "> IF txnid is real UUID , then you can test the https://www.postgresql.org/docs/11/datatype-uuid.html performance> see https://stackoverflow.com/questions/29880083/postgresql-uuid-type-performance> imho: it should be better.\nSure, thanks Imre\n\nThanks and Regards,\nRavi Garg\n\n On Sunday, 23 February, 2020, 09:49:00 pm IST, Imre Samu <[email protected]> wrote: \n \n > ...  txid character varying(36) NOT NULL,\n> ... WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756'> There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.\nIF txnid is real UUID , then you can test the https://www.postgresql.org/docs/11/datatype-uuid.html performancesee https://stackoverflow.com/questions/29880083/postgresql-uuid-type-performanceimho: it should be better.\n\nbest, Imre\n\nRavi Garg <[email protected]> ezt írta (időpont: 2020. febr. 23., V, 11:57):\n\nHi Justin,\nThanks for response.\nUnfortunately we will not be able to migrate to PG12 any time soon. \n - There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.\n - Our use case is limited to simple selects (we don't join with the other tables) however, we are expecting ~70 million records inserted per day and there would be couple of updates on each records where average record size would be ~ 1.5 KB. 
\n - Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance. \n\n - We need to look current partition and previous partition for all of our use-cases/queries.\nCan you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).\n\nThanks and Regards,\nRavi Garg,\nMob : +91-98930-66610 \n\n On Sunday, 23 February, 2020, 03:42:13 pm IST, Justin Pryzby <[email protected]> wrote: \n \n On Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:\n> Hi,\n> I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>\n\nThat's probably to be expected under pg11:\n\nhttps://www.postgresql.org/docs/11/ddl-partitioning.html\n|Too many partitions can mean longer query planning times...\n|It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added\n\n> There are around ~200 child partitions. Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n\nHow large are the partitions and how many indexes each, and how large are they?\nEach partition will be stat()ed and each index will be open()ed and read() for\nevery query.  This was resolved in pg12:\nhttps://commitfest.postgresql.org/21/1778/\n\n-- \nJustin\n \n \n> IF txnid is real UUID , then you can test the https://www.postgresql.org/docs/11/datatype-uuid.html performance> see https://stackoverflow.com/questions/29880083/postgresql-uuid-type-performance> imho: it should be better.Sure, thanks ImreThanks and Regards,Ravi Garg\n\n\n\n On Sunday, 23 February, 2020, 09:49:00 pm IST, Imre Samu <[email protected]> wrote:\n \n\n\n> ...  txid character varying(36) NOT NULL,> ... WHERE txnid = 'febd139d-1b7f-4564-a004-1b3474e51756'> There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). Which we are creating on each partition.IF txnid is real UUID , then you can test the https://www.postgresql.org/docs/11/datatype-uuid.html performancesee https://stackoverflow.com/questions/29880083/postgresql-uuid-type-performanceimho: it should be better.best, ImreRavi Garg <[email protected]> ezt írta (időpont: 2020. febr. 23., V, 11:57):Hi Justin,Thanks for response.Unfortunately we will not be able to migrate to PG12 any time soon.There is only one index (unique index btree) on 'txnID' (i.e. transaction ID) character varying(36). 
Which we are creating on each partition.Our use case is limited to simple selects (we don't join with the other tables) however, we are expecting ~70 million records inserted per day and there would be couple of updates on each records where average record size would be ~ 1.5 KB. Currently we are thinking to have Daily partitions and as we need to keep 6 months of data thus 180 Partitions.However we have liberty to reduce the number of partitions to weekly/fortnightly/monthly, If we get comparable performance.We need to look current partition and previous partition for all of our use-cases/queries.Can you please suggest what sort of combinations/partition strategy we can test considering data-volume/vacuum etc. Also let me know if some of the pg_settings can help us tuning this (I have attached my pg_settings).Thanks and Regards,Ravi Garg,Mob : +91-98930-66610\n\n\n\n\n On Sunday, 23 February, 2020, 03:42:13 pm IST, Justin Pryzby <[email protected]> wrote:\n \n\n\nOn Sun, Feb 23, 2020 at 09:56:30AM +0000, Ravi Garg wrote:> Hi,> I am looking to Range Partition one of my table (i.e. TransactionLog) in PostgreSQL 11.While evaluating query performance difference between the un-partitioned and partitioned table I am getting huge difference in planning time. Planning time is very high on partitioned table.Similarly when I query by specifying partition name directly in query the planning time is much less **0.081 ms** as compared to when I query based on partition table (parent table) name in query, where planning time **6.231 ms** (Samples below).<br>That's probably to be expected under pg11:https://www.postgresql.org/docs/11/ddl-partitioning.html|Too many partitions can mean longer query planning times...|It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions fairly well, provided that typical queries allow the query planner to prune all but a small number of partitions. Planning times become longer and memory consumption becomes higher as more partitions are added> There are around ~200 child partitions. Partition pruning enabled.PostgreSQL Version: PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bitHow large are the partitions and how many indexes each, and how large are they?Each partition will be stat()ed and each index will be open()ed and read() forevery query.  This was resolved in pg12:https://commitfest.postgresql.org/21/1778/-- Justin", "msg_date": "Mon, 24 Feb 2020 19:44:34 +0000 (UTC)", "msg_from": "Ravi Garg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 11 higher Planning time on Partitioned table" } ]
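A minimal sketch of the two ideas suggested in the thread above, sub-partitioning (monthly parents each partitioned by day) and a native uuid key instead of character varying(36). Table and column names follow the thread, but the DDL below is an illustration of the approach under those assumptions, not the poster's actual schema:

-- Illustrative only: monthly parents, each partitioned by day ("partitions of partitions").
CREATE TABLE transactionlog (
    txnid       uuid NOT NULL,                -- uuid instead of character varying(36)
    txndetails  varchar(64),
    loggingtime timestamp(6) without time zone DEFAULT LOCALTIMESTAMP
) PARTITION BY RANGE (loggingtime);

-- Month-level partition that is itself partitioned by day.
CREATE TABLE transactionlog_202002 PARTITION OF transactionlog
    FOR VALUES FROM ('2020-02-01') TO ('2020-03-01')
    PARTITION BY RANGE (loggingtime);

-- Day-level leaf partitions; the unique index is created per leaf partition.
CREATE TABLE transactionlog_20200223 PARTITION OF transactionlog_202002
    FOR VALUES FROM ('2020-02-23') TO ('2020-02-24');
CREATE UNIQUE INDEX transactionlog_20200223_unqtxid
    ON transactionlog_20200223 (txnid);

CREATE TABLE transactionlog_20200224 PARTITION OF transactionlog_202002
    FOR VALUES FROM ('2020-02-24') TO ('2020-02-25');
CREATE UNIQUE INDEX transactionlog_20200224_unqtxid
    ON transactionlog_20200224 (txnid);

-- Keeping both range predicates on loggingtime lets the planner prune down to
-- the two days the stated use case actually reads (today and yesterday).
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT txndetails
FROM   transactionlog
WHERE  txnid = 'febd139d-1b7f-4564-a004-1b3474e51756'
  AND  loggingtime >= DATE '2020-02-23'
  AND  loggingtime <  DATE '2020-02-25';

With only the two relevant leaf partitions surviving pruning, the v11 planner opens far fewer relations per query; whether that brings planning time close to the unpartitioned ~0.09 ms still has to be measured against the real partition counts and index sizes.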
[ { "msg_contents": "Hi All,\n\nWe have recently upgraded our postgres servers from 9.4 version to 11.5\nversion. Post upgrade we are see delay in authentication.\n\nIssue is when we are using ldaptls=1 the authentication takes 1 second or\ngreater than that. But if I disable ldaptls it's getting authenticated\nwithin milliseconds.\n\nBut in 9.4 even if I enable ldaptls it's getting authenticated within\nmilliseconds any idea why we are facing the issue?\n\nRegards,\nMani.\n\nHi All,We have recently upgraded our postgres servers from 9.4 version to 11.5 version. Post upgrade we are see delay in authentication. Issue is when we are using ldaptls=1 the authentication takes 1 second or greater than that. But if I disable ldaptls it's getting authenticated within milliseconds.But in 9.4 even if I enable ldaptls it's getting authenticated within milliseconds any idea why we are facing the issue?Regards,Mani.", "msg_date": "Tue, 25 Feb 2020 01:20:21 +0530", "msg_from": "Mani Sankar <[email protected]>", "msg_from_op": true, "msg_subject": "LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On 2/24/20 11:50 AM, Mani Sankar wrote:\n> Hi All,\n> \n> We have recently upgraded our postgres servers from 9.4 version to 11.5 \n> version. Post upgrade we are see delay in authentication.\n> \n> Issue is when we are using ldaptls=1 the authentication takes 1 second \n> or greater than that. But if I disable ldaptls it's getting \n> authenticated within milliseconds.\n> \n> But in 9.4 even if I enable ldaptls it's getting authenticated within \n> milliseconds any idea why we are facing the issue?\n\nThis is going to need a good deal more information:\n\n1) OS the server is running on and did the OS or OS version change with \nthe upgrade?\n\n2) How was the server installed from packages(if so from where?) or from \nsource?\n\n3) The configuration for LDAP in pg_hba.conf.\n\n4) Pertinent information from the Postgres log.\n\n5) Pertinent information from the system log.\n\n> \n> Regards,\n> Mani.\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Mon, 24 Feb 2020 12:58:20 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On Tue, 2020-02-25 at 01:20 +0530, Mani Sankar wrote:\n> We have recently upgraded our postgres servers from 9.4 version to 11.5 version. Post upgrade we are see delay in authentication. \n> \n> Issue is when we are using ldaptls=1 the authentication takes 1 second or greater than that. But if I disable ldaptls it's getting authenticated within milliseconds.\n> \n> But in 9.4 even if I enable ldaptls it's getting authenticated within milliseconds any idea why we are facing the issue?\n\nI would use a packet sniffer like Wireshark to examine the message flow and see where the time is spent.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 25 Feb 2020 10:20:59 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On 2/24/20 9:07 PM, Mani Sankar wrote:\nPlease reply to list also.\nCcing list.\n> Hi Adrian,\n> \n> Thanks for replying. 
Below are the requested details.\n> \n> ################ Configuration in 9.4 PG Version\n> \n> local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX \n> ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> ############ Configuration in 11.5 Version.\n> \n> local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX \n> ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host    replication     replicator  XXXXXXXXXXXXX/22        md5\n> \n> host    replication     replicator  1XXXXXXXXXXXX/22        md5\n> \n> Linux Version: Red Hat Enterprise Linux Server release 6.10 (Santiago)\n> \n> Server Installation is Source code installation. 
Custom build for our \n> environment.\n> \n> Authentication logs from PG 11.5:\n> \n> 2020-02-24 00:00:15 MST [25089]: \n> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55742\n> \n> 2020-02-24 00:00:16 MST [25090]: \n> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55748\n> \n> 2020-02-24 00:00:16 MST [25092]: \n> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55765\n> \n> 2020-02-24 00:00:16 MST [25093]: \n> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55770\n> \n> 2020-02-24 00:00:17 MST [25090]: \n> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25089]: \n> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25092]: \n> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25093]: \n> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> Authentication logs from PG 9.4:\n> \n> 2020-02-17 22:40:01 MST [127575]: \n> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown] LOG: \n> connection received: host=xx.xx.xx.xx port=39451\n> \n> 2020-02-17 22:40:01 MST [127575]: \n> application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:57:44 MST [117472]: \n> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown] LOG: \n> connection received: host=xx.xx.xx.xx port=58500\n> \n> 2020-02-24 21:57:44 MST [117472]: \n> application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:58:27 MST [117620]: \n> application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown] LOG: \n> connection received: host=xx.xx.xx.xx port=58520\n> \n> 2020-02-24 21:58:27 MST [117620]: \n> application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:58:31 MST [117632]: \n> application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown] LOG: \n> connection received: host=xx.xx.xx.xx port=58524\n> \n> 2020-02-24 21:58:31 MST [117632]: \n> application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> We also have a local .ldaprc file with below entry\n> \n> TLS_REQCERT allow\n> \n> \n> On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 2/24/20 11:50 AM, Mani Sankar wrote:\n> > Hi All,\n> >\n> > We have recently upgraded our postgres servers from 9.4 version\n> to 11.5\n> > version. 
Post upgrade we are see delay in authentication.\n> >\n> > Issue is when we are using ldaptls=1 the authentication takes 1\n> second\n> > or greater than that. But if I disable ldaptls it's getting\n> > authenticated within milliseconds.\n> >\n> > But in 9.4 even if I enable ldaptls it's getting authenticated\n> within\n> > milliseconds any idea why we are facing the issue?\n> \n> This is going to need a good deal more information:\n> \n> 1) OS the server is running on and did the OS or OS version change with\n> the upgrade?\n> \n> 2) How was the server installed from packages(if so from where?) or\n> from\n> source?\n> \n> 3) The configuration for LDAP in pg_hba.conf.\n> \n> 4) Pertinent information from the Postgres log.\n> \n> 5) Pertinent information from the system log.\n> \n> >\n> > Regards,\n> > Mani.\n> >\n> \n> \n> -- \n> Adrian Klaver\n> [email protected] <mailto:[email protected]>\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Tue, 25 Feb 2020 07:54:31 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "Hi Adrian,\n\nShould I want to try this configuration?\n\nRegards,\nMani.\n\nOn Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, <[email protected]>\nwrote:\n\n> On 2/24/20 9:07 PM, Mani Sankar wrote:\n> Please reply to list also.\n> Ccing list.\n> > Hi Adrian,\n> >\n> > Thanks for replying. Below are the requested details.\n> >\n> > ################ Configuration in 9.4 PG Version\n> >\n> > local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > ############ Configuration in 11.5 Version.\n> >\n> > local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > 
ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication replicator XXXXXXXXXXXXX/22 md5\n> >\n> > host replication replicator 1XXXXXXXXXXXX/22 md5\n> >\n> > Linux Version: Red Hat Enterprise Linux Server release 6.10 (Santiago)\n> >\n> > Server Installation is Source code installation. Custom build for our\n> > environment.\n> >\n> > Authentication logs from PG 11.5:\n> >\n> > 2020-02-24 00:00:15 MST [25089]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000\n>\n> > LOG: connection received: host=xx.xx.xxx.xx port=55742\n> >\n> > 2020-02-24 00:00:16 MST [25090]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000\n>\n> > LOG: connection received: host=xx.xx.xxx.xx port=55748\n> >\n> > 2020-02-24 00:00:16 MST [25092]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000\n>\n> > LOG: connection received: host=xx.xx.xxx.xx port=55765\n> >\n> > 2020-02-24 00:00:16 MST [25093]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000\n>\n> > LOG: connection received: host=xx.xx.xxx.xx port=55770\n> >\n> > 2020-02-24 00:00:17 MST [25090]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000\n>\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25089]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000\n>\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25092]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000\n>\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25093]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000\n>\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > Authentication logs from PG 9.4:\n> >\n> > 2020-02-17 22:40:01 MST [127575]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=39451\n> >\n> > 2020-02-17 22:40:01 MST [127575]:\n> > application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:57:44 MST [117472]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=58500\n> >\n> > 2020-02-24 21:57:44 MST [117472]:\n> > application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:58:27 MST [117620]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=58520\n> >\n> > 2020-02-24 21:58:27 MST [117620]:\n> > application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:58:31 MST [117632]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx 
port=58524\n> >\n> > 2020-02-24 21:58:31 MST [117632]:\n> > application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db\n> > LOG: connection authorized: user=Someuser database=test_db\n> >\n> > We also have a local .ldaprc file with below entry\n> >\n> > TLS_REQCERT allow\n> >\n> >\n> > On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 2/24/20 11:50 AM, Mani Sankar wrote:\n> > > Hi All,\n> > >\n> > > We have recently upgraded our postgres servers from 9.4 version\n> > to 11.5\n> > > version. Post upgrade we are see delay in authentication.\n> > >\n> > > Issue is when we are using ldaptls=1 the authentication takes 1\n> > second\n> > > or greater than that. But if I disable ldaptls it's getting\n> > > authenticated within milliseconds.\n> > >\n> > > But in 9.4 even if I enable ldaptls it's getting authenticated\n> > within\n> > > milliseconds any idea why we are facing the issue?\n> >\n> > This is going to need a good deal more information:\n> >\n> > 1) OS the server is running on and did the OS or OS version change\n> with\n> > the upgrade?\n> >\n> > 2) How was the server installed from packages(if so from where?) or\n> > from\n> > source?\n> >\n> > 3) The configuration for LDAP in pg_hba.conf.\n> >\n> > 4) Pertinent information from the Postgres log.\n> >\n> > 5) Pertinent information from the system log.\n> >\n> > >\n> > > Regards,\n> > > Mani.\n> > >\n> >\n> >\n> > --\n> > Adrian Klaver\n> > [email protected] <mailto:[email protected]>\n> >\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n\nHi Adrian,Should I want to try this configuration?Regards,Mani.On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, <[email protected]> wrote:On 2/24/20 9:07 PM, Mani Sankar wrote:\nPlease reply to list also.\nCcing list.\n> Hi Adrian,\n> \n> Thanks for replying. 
Below are the requested details.\n> \n> ################ Configuration in 9.4 PG Version\n> \n> local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX \n> ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> ############ Configuration in 11.5 Version.\n> \n> local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX \n> ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host all all 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268 \n> ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host replication someuser 0.0.0.0/0 <http://0.0.0.0/0> ldap \n> ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" \n> ldaptls=1\n> \n> host    replication     replicator  XXXXXXXXXXXXX/22        md5\n> \n> host    replication     replicator  1XXXXXXXXXXXX/22        md5\n> \n> Linux Version: Red Hat Enterprise Linux Server release 6.10 (Santiago)\n> \n> Server Installation is Source code installation. 
Custom build for our \n> environment.\n> \n> Authentication logs from PG 11.5:\n> \n> 2020-02-24 00:00:15 MST [25089]: \n> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55742\n> \n> 2020-02-24 00:00:16 MST [25090]: \n> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55748\n> \n> 2020-02-24 00:00:16 MST [25092]: \n> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55765\n> \n> 2020-02-24 00:00:16 MST [25093]: \n> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000 \n> LOG:  connection received: host=xx.xx.xxx.xx port=55770\n> \n> 2020-02-24 00:00:17 MST [25090]: \n> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25089]: \n> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25092]: \n> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 00:00:17 MST [25093]: \n> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000 \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> Authentication logs from PG 9.4:\n> \n> 2020-02-17 22:40:01 MST [127575]: \n> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown] LOG:  \n> connection received: host=xx.xx.xx.xx port=39451\n> \n> 2020-02-17 22:40:01 MST [127575]: \n> application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:57:44 MST [117472]: \n> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown] LOG:  \n> connection received: host=xx.xx.xx.xx port=58500\n> \n> 2020-02-24 21:57:44 MST [117472]: \n> application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:58:27 MST [117620]: \n> application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown] LOG:  \n> connection received: host=xx.xx.xx.xx port=58520\n> \n> 2020-02-24 21:58:27 MST [117620]: \n> application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> 2020-02-24 21:58:31 MST [117632]: \n> application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown] LOG:  \n> connection received: host=xx.xx.xx.xx port=58524\n> \n> 2020-02-24 21:58:31 MST [117632]: \n> application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db \n> LOG:  connection authorized: user=Someuser database=test_db\n> \n> We also have a local .ldaprc file with below entry\n> \n> TLS_REQCERT allow\n> \n> \n> On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n>     On 2/24/20 11:50 AM, Mani Sankar wrote:\n>      > Hi All,\n>      >\n>      > We have recently upgraded our postgres servers from 9.4 version\n>     to 11.5\n>      > version. 
Post upgrade we are see delay in authentication.\n>      >\n>      > Issue is when we are using ldaptls=1 the authentication takes 1\n>     second\n>      > or greater than that. But if I disable ldaptls it's getting\n>      > authenticated within milliseconds.\n>      >\n>      > But in 9.4 even if I enable ldaptls it's getting authenticated\n>     within\n>      > milliseconds any idea why we are facing the issue?\n> \n>     This is going to need a good deal more information:\n> \n>     1) OS the server is running on and did the OS or OS version change with\n>     the upgrade?\n> \n>     2) How was the server installed from packages(if so from where?) or\n>     from\n>     source?\n> \n>     3) The configuration for LDAP in pg_hba.conf.\n> \n>     4) Pertinent information from the Postgres log.\n> \n>     5) Pertinent information from the system log.\n> \n>      >\n>      > Regards,\n>      > Mani.\n>      >\n> \n> \n>     -- \n>     Adrian Klaver\n>     [email protected] <mailto:[email protected]>\n> \n\n\n-- \nAdrian Klaver\[email protected]", "msg_date": "Tue, 25 Feb 2020 23:38:53 +0530", "msg_from": "Mani Sankar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On 2/25/20 10:08 AM, Mani Sankar wrote:\n> Hi Adrian,\n> \n> Should I want to try this configuration?\n\nI thought you where already using this configuration?\n\nAre the 9.4 and 11.5 instances are on the same machine and/or network?\n\nIn other words is ldapserver=XXXXXXXXXXXXXXX pointing at the same thing?\n\n\n> \n> Regards,\n> Mani.\n> \n> On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On 2/24/20 9:07 PM, Mani Sankar wrote:\n> Please reply to list also.\n> Ccing list.\n> > Hi Adrian,\n> >\n> > Thanks for replying. 
Below are the requested details.\n> >\n> > ################ Configuration in 9.4 PG Version\n> >\n> > local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n> ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > ############ Configuration in 11.5 Version.\n> >\n> > local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n> ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> <http://0.0.0.0/0> ldap\n> > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> ldapsuffix=\"\"\n> > ldaptls=1\n> >\n> > host    replication     replicator  XXXXXXXXXXXXX/22        md5\n> >\n> > host    replication     replicator  1XXXXXXXXXXXX/22        md5\n> >\n> > Linux Version: Red Hat Enterprise Linux Server release 6.10\n> (Santiago)\n> >\n> > Server Installation is Source code installation. 
Custom build for\n> our\n> > environment.\n> >\n> > Authentication logs from PG 11.5:\n> >\n> > 2020-02-24 00:00:15 MST [25089]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000\n> \n> > LOG:  connection received: host=xx.xx.xxx.xx port=55742\n> >\n> > 2020-02-24 00:00:16 MST [25090]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000\n> \n> > LOG:  connection received: host=xx.xx.xxx.xx port=55748\n> >\n> > 2020-02-24 00:00:16 MST [25092]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000\n> \n> > LOG:  connection received: host=xx.xx.xxx.xx port=55765\n> >\n> > 2020-02-24 00:00:16 MST [25093]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000\n> \n> > LOG:  connection received: host=xx.xx.xxx.xx port=55770\n> >\n> > 2020-02-24 00:00:17 MST [25090]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000\n> \n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25089]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000\n> \n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25092]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000\n> \n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 00:00:17 MST [25093]:\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000\n> \n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > Authentication logs from PG 9.4:\n> >\n> > 2020-02-17 22:40:01 MST [127575]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=39451\n> >\n> > 2020-02-17 22:40:01 MST [127575]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db\n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:57:44 MST [117472]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=58500\n> >\n> > 2020-02-24 21:57:44 MST [117472]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db\n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:58:27 MST [117620]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=58520\n> >\n> > 2020-02-24 21:58:27 MST [117620]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db\n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > 2020-02-24 21:58:31 MST [117632]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown]\n> LOG:\n> > connection received: host=xx.xx.xx.xx port=58524\n> >\n> > 2020-02-24 21:58:31 MST [117632]:\n> >\n> application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db\n> > LOG:  connection authorized: user=Someuser database=test_db\n> >\n> > We also have a local .ldaprc file with below entry\n> >\n> > TLS_REQCERT allow\n> >\n> >\n> > On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver\n> <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected]\n> <mailto:[email protected]>>> wrote:\n> >\n> >     On 2/24/20 
11:50 AM, Mani Sankar wrote:\n> >      > Hi All,\n> >      >\n> >      > We have recently upgraded our postgres servers from 9.4\n> version\n> >     to 11.5\n> >      > version. Post upgrade we are see delay in authentication.\n> >      >\n> >      > Issue is when we are using ldaptls=1 the authentication\n> takes 1\n> >     second\n> >      > or greater than that. But if I disable ldaptls it's getting\n> >      > authenticated within milliseconds.\n> >      >\n> >      > But in 9.4 even if I enable ldaptls it's getting authenticated\n> >     within\n> >      > milliseconds any idea why we are facing the issue?\n> >\n> >     This is going to need a good deal more information:\n> >\n> >     1) OS the server is running on and did the OS or OS version\n> change with\n> >     the upgrade?\n> >\n> >     2) How was the server installed from packages(if so from\n> where?) or\n> >     from\n> >     source?\n> >\n> >     3) The configuration for LDAP in pg_hba.conf.\n> >\n> >     4) Pertinent information from the Postgres log.\n> >\n> >     5) Pertinent information from the system log.\n> >\n> >      >\n> >      > Regards,\n> >      > Mani.\n> >      >\n> >\n> >\n> >     --\n> >     Adrian Klaver\n> > [email protected] <mailto:[email protected]>\n> <mailto:[email protected] <mailto:[email protected]>>\n> >\n> \n> \n> -- \n> Adrian Klaver\n> [email protected] <mailto:[email protected]>\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Tue, 25 Feb 2020 10:18:13 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "Hi Adrian,\n\nBoth the machines are in same network and both are pointing towards the\nsame LDAP server\n\nRegards,\nMani.\n\nOn Tue, 25 Feb, 2020, 11:48 pm Adrian Klaver, <[email protected]>\nwrote:\n\n> On 2/25/20 10:08 AM, Mani Sankar wrote:\n> > Hi Adrian,\n> >\n> > Should I want to try this configuration?\n>\n> I thought you where already using this configuration?\n>\n> Are the 9.4 and 11.5 instances are on the same machine and/or network?\n>\n> In other words is ldapserver=XXXXXXXXXXXXXXX pointing at the same thing?\n>\n>\n> >\n> > Regards,\n> > Mani.\n> >\n> > On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 2/24/20 9:07 PM, Mani Sankar wrote:\n> > Please reply to list also.\n> > Ccing list.\n> > > Hi Adrian,\n> > >\n> > > Thanks for replying. 
Below are the requested details.\n> > >\n> > > ################ Configuration in 9.4 PG Version\n> > >\n> > > local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> > <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> > <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > ############ Configuration in 11.5 Version.\n> > >\n> > > local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n> > > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n> > ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n> > > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n> > >\n> > > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> > <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> > <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n> > <http://0.0.0.0/0> ldap\n> > > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n> > ldapsuffix=\"\"\n> > > ldaptls=1\n> > >\n> > > host replication replicator XXXXXXXXXXXXX/22 md5\n> > >\n> > > host replication replicator 1XXXXXXXXXXXX/22 md5\n> > >\n> > > Linux Version: Red Hat Enterprise Linux Server release 6.10\n> > (Santiago)\n> > >\n> > > Server Installation is Source code installation. 
Custom build for\n> > our\n> > > environment.\n> > >\n> > > Authentication logs from PG 11.5:\n> > >\n> > > 2020-02-24 00:00:15 MST [25089]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000\n> >\n> > > LOG: connection received: host=xx.xx.xxx.xx port=55742\n> > >\n> > > 2020-02-24 00:00:16 MST [25090]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000\n> >\n> > > LOG: connection received: host=xx.xx.xxx.xx port=55748\n> > >\n> > > 2020-02-24 00:00:16 MST [25092]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000\n> >\n> > > LOG: connection received: host=xx.xx.xxx.xx port=55765\n> > >\n> > > 2020-02-24 00:00:16 MST [25093]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000\n> >\n> > > LOG: connection received: host=xx.xx.xxx.xx port=55770\n> > >\n> > > 2020-02-24 00:00:17 MST [25090]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000\n> >\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 00:00:17 MST [25089]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000\n> >\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 00:00:17 MST [25092]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000\n> >\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 00:00:17 MST [25093]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000\n> >\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > Authentication logs from PG 9.4:\n> > >\n> > > 2020-02-17 22:40:01 MST [127575]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown]\n> > LOG:\n> > > connection received: host=xx.xx.xx.xx port=39451\n> > >\n> > > 2020-02-17 22:40:01 MST [127575]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 21:57:44 MST [117472]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown]\n> > LOG:\n> > > connection received: host=xx.xx.xx.xx port=58500\n> > >\n> > > 2020-02-24 21:57:44 MST [117472]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 21:58:27 MST [117620]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown]\n> > LOG:\n> > > connection received: host=xx.xx.xx.xx port=58520\n> > >\n> > > 2020-02-24 21:58:27 MST [117620]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > 2020-02-24 21:58:31 MST [117632]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown]\n> > LOG:\n> > > connection received: host=xx.xx.xx.xx port=58524\n> > >\n> > > 2020-02-24 21:58:31 MST [117632]:\n> > >\n> >\n> application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db\n> > > LOG: connection authorized: user=Someuser database=test_db\n> > >\n> > > We also have a local .ldaprc file with below entry\n> > 
>\n> > > TLS_REQCERT allow\n> > >\n> > >\n> > > On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver\n> > <[email protected] <mailto:[email protected]>\n> > > <mailto:[email protected]\n> > <mailto:[email protected]>>> wrote:\n> > >\n> > > On 2/24/20 11:50 AM, Mani Sankar wrote:\n> > > > Hi All,\n> > > >\n> > > > We have recently upgraded our postgres servers from 9.4\n> > version\n> > > to 11.5\n> > > > version. Post upgrade we are see delay in authentication.\n> > > >\n> > > > Issue is when we are using ldaptls=1 the authentication\n> > takes 1\n> > > second\n> > > > or greater than that. But if I disable ldaptls it's getting\n> > > > authenticated within milliseconds.\n> > > >\n> > > > But in 9.4 even if I enable ldaptls it's getting\n> authenticated\n> > > within\n> > > > milliseconds any idea why we are facing the issue?\n> > >\n> > > This is going to need a good deal more information:\n> > >\n> > > 1) OS the server is running on and did the OS or OS version\n> > change with\n> > > the upgrade?\n> > >\n> > > 2) How was the server installed from packages(if so from\n> > where?) or\n> > > from\n> > > source?\n> > >\n> > > 3) The configuration for LDAP in pg_hba.conf.\n> > >\n> > > 4) Pertinent information from the Postgres log.\n> > >\n> > > 5) Pertinent information from the system log.\n> > >\n> > > >\n> > > > Regards,\n> > > > Mani.\n> > > >\n> > >\n> > >\n> > > --\n> > > Adrian Klaver\n> > > [email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]\n> >>\n> > >\n> >\n> >\n> > --\n> > Adrian Klaver\n> > [email protected] <mailto:[email protected]>\n> >\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n\nHi Adrian,Both the machines are in same network and both are pointing towards the same LDAP serverRegards,Mani.On Tue, 25 Feb, 2020, 11:48 pm Adrian Klaver, <[email protected]> wrote:On 2/25/20 10:08 AM, Mani Sankar wrote:\n> Hi Adrian,\n> \n> Should I want to try this configuration?\n\nI thought you where already using this configuration?\n\nAre the 9.4 and 11.5 instances are on the same machine and/or network?\n\nIn other words is ldapserver=XXXXXXXXXXXXXXX pointing at the same thing?\n\n\n> \n> Regards,\n> Mani.\n> \n> On Tue, 25 Feb, 2020, 9:24 pm Adrian Klaver, <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n>     On 2/24/20 9:07 PM, Mani Sankar wrote:\n>     Please reply to list also.\n>     Ccing list.\n>      > Hi Adrian,\n>      >\n>      > Thanks for replying. 
Below are the requested details.\n>      >\n>      > ################ Configuration in 9.4 PG Version\n>      >\n>      > local all all ldap ldapserver=XXXXXXXXXXXXXX ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n>      > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n>     ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n>     <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n>     <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > ############ Configuration in 11.5 Version.\n>      >\n>      > local all all ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all someuser xx.xx.xx.xx/32 ldap ldapserver=XXXXXXXXXXXXXXX\n>      > ldapport=3268 ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all someuser ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX\n>     ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host all all 0.0.0.0/0 <http://0.0.0.0/0> <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host all all ::1/128 ldap ldapserver=XXXXXXXXXXXXXXX ldapport=3268\n>      > ldapprefix=\"ADS\\\" ldapsuffix=\"\" ldaptls=1\n>      >\n>      > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n>     <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n>     <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host replication someuser 0.0.0.0/0 <http://0.0.0.0/0>\n>     <http://0.0.0.0/0> ldap\n>      > ldapserver=XXXXXXXXXXXXXXX ldapport=3268 ldapprefix=\"ADS\\\"\n>     ldapsuffix=\"\"\n>      > ldaptls=1\n>      >\n>      > host    replication     replicator  XXXXXXXXXXXXX/22        md5\n>      >\n>      > host    replication     replicator  1XXXXXXXXXXXX/22        md5\n>      >\n>      > Linux Version: Red Hat Enterprise Linux Server release 6.10\n>     (Santiago)\n>      >\n>      > Server Installation is Source code installation. 
Custom build for\n>     our\n>      > environment.\n>      >\n>      > Authentication logs from PG 11.5:\n>      >\n>      > 2020-02-24 00:00:15 MST [25089]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55742),user=[unknown],db=[unknown],state=00000\n> \n>      > LOG:  connection received: host=xx.xx.xxx.xx port=55742\n>      >\n>      > 2020-02-24 00:00:16 MST [25090]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55748),user=[unknown],db=[unknown],state=00000\n> \n>      > LOG:  connection received: host=xx.xx.xxx.xx port=55748\n>      >\n>      > 2020-02-24 00:00:16 MST [25092]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55765),user=[unknown],db=[unknown],state=00000\n> \n>      > LOG:  connection received: host=xx.xx.xxx.xx port=55765\n>      >\n>      > 2020-02-24 00:00:16 MST [25093]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55770),user=[unknown],db=[unknown],state=00000\n> \n>      > LOG:  connection received: host=xx.xx.xxx.xx port=55770\n>      >\n>      > 2020-02-24 00:00:17 MST [25090]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55748),user=Someuser,db=test_db,state=00000\n> \n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 00:00:17 MST [25089]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55742),user=Someuser,db=test_db,state=00000\n> \n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 00:00:17 MST [25092]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55765),user=Someuser,db=test_db,state=00000\n> \n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 00:00:17 MST [25093]:\n>      >\n>     application=[unknown],host=xx.xx.xxx.xx(55770),user=Someuser,db=test_db,state=00000\n> \n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > Authentication logs from PG 9.4:\n>      >\n>      > 2020-02-17 22:40:01 MST [127575]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(39451),user=[unknown],db=[unknown]\n>     LOG:\n>      > connection received: host=xx.xx.xx.xx port=39451\n>      >\n>      > 2020-02-17 22:40:01 MST [127575]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(39451),user=Someuser,db=test_db\n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 21:57:44 MST [117472]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(58500),user=[unknown],db=[unknown]\n>     LOG:\n>      > connection received: host=xx.xx.xx.xx port=58500\n>      >\n>      > 2020-02-24 21:57:44 MST [117472]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(58500),user=Someuser,db=test_db\n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 21:58:27 MST [117620]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(58520),user=[unknown],db=[unknown]\n>     LOG:\n>      > connection received: host=xx.xx.xx.xx port=58520\n>      >\n>      > 2020-02-24 21:58:27 MST [117620]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(58520),user=Someuser,db=test_db\n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > 2020-02-24 21:58:31 MST [117632]:\n>      >\n>     application=[unknown],host=xx.xx.xx.xx(58524),user=[unknown],db=[unknown]\n>     LOG:\n>      > connection received: host=xx.xx.xx.xx port=58524\n>      >\n>      > 2020-02-24 21:58:31 MST [117632]:\n>      >\n>     
application=[unknown],host=xx.xx.xx.xx(58524),user=Someuser,db=test_db\n>      > LOG:  connection authorized: user=Someuser database=test_db\n>      >\n>      > We also have a local .ldaprc file with below entry\n>      >\n>      > TLS_REQCERT allow\n>      >\n>      >\n>      > On Tue, Feb 25, 2020 at 2:28 AM Adrian Klaver\n>     <[email protected] <mailto:[email protected]>\n>      > <mailto:[email protected]\n>     <mailto:[email protected]>>> wrote:\n>      >\n>      >     On 2/24/20 11:50 AM, Mani Sankar wrote:\n>      >      > Hi All,\n>      >      >\n>      >      > We have recently upgraded our postgres servers from 9.4\n>     version\n>      >     to 11.5\n>      >      > version. Post upgrade we are see delay in authentication.\n>      >      >\n>      >      > Issue is when we are using ldaptls=1 the authentication\n>     takes 1\n>      >     second\n>      >      > or greater than that. But if I disable ldaptls it's getting\n>      >      > authenticated within milliseconds.\n>      >      >\n>      >      > But in 9.4 even if I enable ldaptls it's getting authenticated\n>      >     within\n>      >      > milliseconds any idea why we are facing the issue?\n>      >\n>      >     This is going to need a good deal more information:\n>      >\n>      >     1) OS the server is running on and did the OS or OS version\n>     change with\n>      >     the upgrade?\n>      >\n>      >     2) How was the server installed from packages(if so from\n>     where?) or\n>      >     from\n>      >     source?\n>      >\n>      >     3) The configuration for LDAP in pg_hba.conf.\n>      >\n>      >     4) Pertinent information from the Postgres log.\n>      >\n>      >     5) Pertinent information from the system log.\n>      >\n>      >      >\n>      >      > Regards,\n>      >      > Mani.\n>      >      >\n>      >\n>      >\n>      >     --\n>      >     Adrian Klaver\n>      > [email protected] <mailto:[email protected]>\n>     <mailto:[email protected] <mailto:[email protected]>>\n>      >\n> \n> \n>     -- \n>     Adrian Klaver\n>     [email protected] <mailto:[email protected]>\n> \n\n\n-- \nAdrian Klaver\[email protected]", "msg_date": "Tue, 25 Feb 2020 23:53:43 +0530", "msg_from": "Mani Sankar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On 2/25/20 10:23 AM, Mani Sankar wrote:\n> Hi Adrian,\n> \n> Both the machines are in same network and both are pointing towards the \n> same LDAP server\n\nI don't see any errors in the Postgres logs.\n\nYou probably should take a look at the LDAP server logs to see if there \nis anything there.\n\nYou could also turn up the logging detail in Postgres to see if it \nreveals anything.\n\n> \n> Regards,\n> Mani.\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Tue, 25 Feb 2020 10:37:33 -0800", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" }, { "msg_contents": "On Wed, Feb 26, 2020 at 7:37 AM Adrian Klaver <[email protected]> wrote:\n> On 2/25/20 10:23 AM, Mani Sankar wrote:\n> > Hi Adrian,\n> >\n> > Both the machines are in same network and both are pointing towards the\n> > same LDAP server\n>\n> I don't see any errors in the Postgres logs.\n>\n> You probably should take a look at the LDAP server logs to see if there\n> is anything there.\n>\n> You could also turn up the logging detail in Postgres to see if it\n> reveals anything.\n\nA 
couple more ideas:\n\nIf you take PostgreSQL out of the picture and run the equivalent LDAP\nqueries with the ldapsearch command line tool, do you see the same\ndifference in response time? If so, I'd trace that with strace etc\nwith timings to see where the time is spent -- for example, is it\nsimply waiting for a response from the LDAP (AD?) server? If not,\nI'd try tracing the PostgreSQL process and looking at the system calls\n(strace -tt -T for high res times and elapsed times), perhaps using\nPostgreSQL's pre_auth_delay setting to get time to attach strace.\n\nA wild stab in the dark: if it's slow from one computer and not from\nanother, perhaps the problem has something to do with a variation in\nreverse DNS lookup speed on the LDAP server side when it's verifying\nthe certificate. Or something like that.\n\n\n", "msg_date": "Wed, 26 Feb 2020 14:14:47 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LDAP with TLS is taking more time in Postgresql 11.5" } ]
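As a concrete illustration of the ldapsearch/strace approach suggested above, a rough sketch follows. The host name, bind DN and backend PID are placeholders for illustration, not values taken from this thread; only the shape of the commands is intended.

    # Time a StartTLS search directly against the directory server (placeholder
    # host and DN), once from the 9.4 host and once from the 11.5 host, to see
    # how long the TLS handshake and bind take outside PostgreSQL:
    time ldapsearch -x -ZZ -H ldap://ldap.example.com:3268 \
         -D "ADS\someuser" -W -b "" -s base "(objectclass=*)"

    # To see where a backend spends its time during authentication, buy time to
    # attach by setting the developer option pre_auth_delay = 5 in postgresql.conf,
    # reloading, opening a new connection, then attaching to the new backend:
    strace -f -tt -T -p <backend pid>

If the ldapsearch timing alone already shows the one-second delay, the slowdown sits between the host and the directory server (certificate verification, reverse DNS and so on) rather than inside PostgreSQL itself.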
[ { "msg_contents": "Greetings,\n\nI was trying to use postgresql database as a backend with Ejabberd XMPP\nserver for load test (Using TSUNG).\n\nNoticed, while using Mnesia the “simultaneous users and open TCP/UDP\nconnections” graph in Tsung report is showing consistency, but while using\nPostgres, we see drop in connections during 100 to 500 seconds of runtime,\nand then recovering and staying consistent.\n\nI have been trying to figure out what the issue could be without any\nsuccess. I am kind of a noob in this technology, and hoping for some help\nfrom the good people from the community to understand the problem and how\nto fix this. Below are some details..\n\n· Postgres server utilization is low ( Avg load 1, Highest Cpu\nutilization 26%, lowest freemem 9000)\n\n\n\nTsung graph:\n[image: image.png]\n Graph 1: Postgres 12 Backen\n[image: image.png]\n\n Graph 2: Mnesia backend\n\n\n· Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n\n· Postgres on remote server: same config\n\n· Errors encountered during the same time: error_connect_etimedout\n(same outcome for other 2 tests)\n\n· *Tsung Load: *512 Bytes message size, user arrival rate 50/s,\n80k registered users.\n\n· Postgres server utilization is low ( Avg load 1, Highest Cpu\nutilization 26%, lowest freemem 9000)\n\n· Same tsung.xm and userlist used for the tests in Mnesia and\nPostgres.\n\n*Postgres Configuration used:*\nshared_buffers = 4GB\neffective_cache_size = 12GB\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 100\nrandom_page_cost = 4\neffective_io_concurrency = 2\nwork_mem = 256MB\nmin_wal_size = 1GB\nmax_wal_size = 2GB\nmax_worker_processes = 4\nmax_parallel_workers_per_gather = 2\nmax_parallel_workers = 4\nmax_parallel_maintenance_workers = 2\nmax_connections=50000\n\n\nKindly help understanding this behavior. Some advice on how to fix this\nwill be a big help .\n\n\n\nThanks,\n\nDipanjan", "msg_date": "Tue, 25 Feb 2020 21:58:38 +0530", "msg_from": "Dipanjan Ganguly <[email protected]>", "msg_from_op": true, "msg_subject": "Connections dropping while using Postgres backend DB with Ejabberd" }, { "msg_contents": "Hi Dipanjan\n\nPlease do not post to all the postgresql mailing list lets keep this on one\nlist at a time, Keep this on general list\n\nAm i reading this correctly 10,000 to 50,000 open connections.\nPostgresql really is not meant to serve that many open connections.\nDue to design of Postgresql each client connection can use up to the\nwork_mem of 256MB plus additional for parallel processes. Memory will be\nexhausted long before 50,0000 connections is reached\n\nI'm not surprised Postgresql and the server is showing issues long before\n10K connections is reached. 
The OS is probably throwing everything to the\nswap file and see connections dropped or time out.\n\nShould be using a connection pooler to service this kind of load so the\nPostgresql does not exhaust resources just from the open connections.\nhttps://www.pgbouncer.org/\n\n\nOn Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <[email protected]>\nwrote:\n\n> Greetings,\n>\n> I was trying to use postgresql database as a backend with Ejabberd XMPP\n> server for load test (Using TSUNG).\n>\n> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n> connections” graph in Tsung report is showing consistency, but while using\n> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n> and then recovering and staying consistent.\n>\n> I have been trying to figure out what the issue could be without any\n> success. I am kind of a noob in this technology, and hoping for some help\n> from the good people from the community to understand the problem and how\n> to fix this. Below are some details..\n>\n> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n> utilization 26%, lowest freemem 9000)\n>\n>\n>\n> Tsung graph:\n> [image: image.png]\n> Graph 1: Postgres 12 Backen\n> [image: image.png]\n>\n> Graph 2: Mnesia backend\n>\n>\n> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>\n> · Postgres on remote server: same config\n>\n> · Errors encountered during the same time:\n> error_connect_etimedout (same outcome for other 2 tests)\n>\n> · *Tsung Load: *512 Bytes message size, user arrival rate 50/s,\n> 80k registered users.\n>\n> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n> utilization 26%, lowest freemem 9000)\n>\n> · Same tsung.xm and userlist used for the tests in Mnesia and\n> Postgres.\n>\n> *Postgres Configuration used:*\n> shared_buffers = 4GB\n> effective_cache_size = 12GB\n> maintenance_work_mem = 1GB\n> checkpoint_completion_target = 0.9\n> wal_buffers = 16MB\n> default_statistics_target = 100\n> random_page_cost = 4\n> effective_io_concurrency = 2\n> work_mem = 256MB\n> min_wal_size = 1GB\n> max_wal_size = 2GB\n> max_worker_processes = 4\n> max_parallel_workers_per_gather = 2\n> max_parallel_workers = 4\n> max_parallel_maintenance_workers = 2\n> max_connections=50000\n>\n>\n> Kindly help understanding this behavior. Some advice on how to fix this\n> will be a big help .\n>\n>\n>\n> Thanks,\n>\n> Dipanjan\n>", "msg_date": "Tue, 25 Feb 2020 12:01:24 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "work_mem can be used many times per connection given it is per sort, hash,\nor other operations and as mentioned that can be multiplied if the query is\nhandled with parallel workers. I am guessing the server has 16GB memory\ntotal given shared_buffers and effective_cache_size, and a more reasonable\nwork_mem setting might be on the order of 32-64MB.\n\nDepending on the type of work being done and how quickly the application\nreleases the db connection once it is done, max connections might be on the\norder of 4-20x the number of cores I would expect. If more simultaneous\nusers need to be serviced, a connection pooler like pgbouncer or pgpool\nwill allow those connections to be re-used quickly.\n\nThese numbers are generalizations based on my experience. 
Others with more\nexperience may have different configurations to recommend.\n\n>\n\nwork_mem can be used many times per connection given it is per sort, hash, or other operations and as mentioned that can be multiplied if the query is handled with parallel workers. I am guessing the server has 16GB memory total given shared_buffers and effective_cache_size, and a more reasonable work_mem setting might be on the order of 32-64MB.Depending on the type of work being done and how quickly the application releases the db connection once it is done, max connections might be on the order of 4-20x the number of cores I would expect. If more simultaneous users need to be serviced, a connection pooler like pgbouncer or pgpool will allow those connections to be re-used quickly.These numbers are generalizations based on my experience. Others with more experience may have different configurations to recommend.", "msg_date": "Tue, 25 Feb 2020 10:20:34 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "Thanks Michael for the recommendation and clarification.\n\nWill try the with 32 MB on my next run.\n\nBR,\nDipanjan\n\nOn Tue, Feb 25, 2020 at 10:51 PM Michael Lewis <[email protected]> wrote:\n\n> work_mem can be used many times per connection given it is per sort, hash,\n> or other operations and as mentioned that can be multiplied if the query is\n> handled with parallel workers. I am guessing the server has 16GB memory\n> total given shared_buffers and effective_cache_size, and a more reasonable\n> work_mem setting might be on the order of 32-64MB.\n>\n> Depending on the type of work being done and how quickly the application\n> releases the db connection once it is done, max connections might be on the\n> order of 4-20x the number of cores I would expect. If more simultaneous\n> users need to be serviced, a connection pooler like pgbouncer or pgpool\n> will allow those connections to be re-used quickly.\n>\n> These numbers are generalizations based on my experience. Others with more\n> experience may have different configurations to recommend.\n>\n>>\n\nThanks Michael for the recommendation and clarification.Will try the with 32 MB on my next run.BR,DipanjanOn Tue, Feb 25, 2020 at 10:51 PM Michael Lewis <[email protected]> wrote:work_mem can be used many times per connection given it is per sort, hash, or other operations and as mentioned that can be multiplied if the query is handled with parallel workers. I am guessing the server has 16GB memory total given shared_buffers and effective_cache_size, and a more reasonable work_mem setting might be on the order of 32-64MB.Depending on the type of work being done and how quickly the application releases the db connection once it is done, max connections might be on the order of 4-20x the number of cores I would expect. If more simultaneous users need to be serviced, a connection pooler like pgbouncer or pgpool will allow those connections to be re-used quickly.These numbers are generalizations based on my experience. 
Others with more experience may have different configurations to recommend.", "msg_date": "Wed, 26 Feb 2020 00:46:52 +0530", "msg_from": "Dipanjan Ganguly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "Hi Dipanjan\n\nIf the connections are not being closed and left open , you should see\n50,000 processes running on the server because postgresql creates/forks a\nnew process for each connection\n\nJust having that many processes running will exhaust resources, I would\nconfirm that the process are still running.\nyou can use the command\n\nps aux |wc -l\n\nto get a count on the number of processes\nBeyond just opening the connection are there any actions such as Select *\nfrom sometable being fired off to measure performance?\n\nAttempting to open and leave 50K connections open should exhaust the server\nresources long before reaching 50K\n\nSomething is off here I would be looking into how this test actually works,\nhow the connections are opened, and commands it sends to Postgresql\n\n\n\nOn Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly <[email protected]>\nwrote:\n\n> Hi Justin,\n>\n> Thanks for your insight.\n>\n> I agree with you completely, but as mentioned in my previous email, the\n> fact that Postgres server resource utilization is less *\"( Avg load 1,\n> Highest Cpu utilization 26%, lowest freemem 9000)*\" and it recovers at a\n> certain point then consistently reaches close to 50 k , is what confusing\n> me..\n>\n> Legends from the Tsung report:\n> users\n> Number of simultaneous users (it's session has started, but not yet\n> finished).connectednumber of users with an opened TCP/UDP connection\n> (example: for HTTP, during a think time, the TCP connection can be closed\n> by the server, and it won't be reopened until the thinktime has expired)\n> I have also used pgcluu to monitor the events. Sharing the stats below..*Memory\n> information*\n>\n> - 15.29 GB Total memory\n> - 8.79 GB Free memory\n> - 31.70 MB Buffers\n> - 5.63 GB Cached\n> - 953.12 MB Total swap\n> - 953.12 MB Free swap\n> - 13.30 MB Page Tables\n> - 3.19 GB Shared memory\n>\n> Any thoughts ??!! 🤔🤔\n>\n> Thanks,\n> Dipanjan\n>\n>\n> On Tue, Feb 25, 2020 at 10:31 PM Justin <[email protected]> wrote:\n>\n>> Hi Dipanjan\n>>\n>> Please do not post to all the postgresql mailing list lets keep this on\n>> one list at a time, Keep this on general list\n>>\n>> Am i reading this correctly 10,000 to 50,000 open connections.\n>> Postgresql really is not meant to serve that many open connections.\n>> Due to design of Postgresql each client connection can use up to the\n>> work_mem of 256MB plus additional for parallel processes. Memory will be\n>> exhausted long before 50,0000 connections is reached\n>>\n>> I'm not surprised Postgresql and the server is showing issues long before\n>> 10K connections is reached. 
The OS is probably throwing everything to the\n>> swap file and see connections dropped or time out.\n>>\n>> Should be using a connection pooler to service this kind of load so the\n>> Postgresql does not exhaust resources just from the open connections.\n>> https://www.pgbouncer.org/\n>>\n>>\n>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <[email protected]>\n>> wrote:\n>>\n>>> Greetings,\n>>>\n>>> I was trying to use postgresql database as a backend with Ejabberd XMPP\n>>> server for load test (Using TSUNG).\n>>>\n>>> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n>>> connections” graph in Tsung report is showing consistency, but while using\n>>> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n>>> and then recovering and staying consistent.\n>>>\n>>> I have been trying to figure out what the issue could be without any\n>>> success. I am kind of a noob in this technology, and hoping for some help\n>>> from the good people from the community to understand the problem and how\n>>> to fix this. Below are some details..\n>>>\n>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>> utilization 26%, lowest freemem 9000)\n>>>\n>>>\n>>>\n>>> Tsung graph:\n>>> [image: image.png]\n>>> Graph 1: Postgres 12 Backen\n>>> [image: image.png]\n>>>\n>>> Graph 2: Mnesia backend\n>>>\n>>>\n>>> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>>>\n>>> · Postgres on remote server: same config\n>>>\n>>> · Errors encountered during the same time:\n>>> error_connect_etimedout (same outcome for other 2 tests)\n>>>\n>>> · *Tsung Load: *512 Bytes message size, user arrival rate\n>>> 50/s, 80k registered users.\n>>>\n>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>> utilization 26%, lowest freemem 9000)\n>>>\n>>> · Same tsung.xm and userlist used for the tests in Mnesia and\n>>> Postgres.\n>>>\n>>> *Postgres Configuration used:*\n>>> shared_buffers = 4GB\n>>> effective_cache_size = 12GB\n>>> maintenance_work_mem = 1GB\n>>> checkpoint_completion_target = 0.9\n>>> wal_buffers = 16MB\n>>> default_statistics_target = 100\n>>> random_page_cost = 4\n>>> effective_io_concurrency = 2\n>>> work_mem = 256MB\n>>> min_wal_size = 1GB\n>>> max_wal_size = 2GB\n>>> max_worker_processes = 4\n>>> max_parallel_workers_per_gather = 2\n>>> max_parallel_workers = 4\n>>> max_parallel_maintenance_workers = 2\n>>> max_connections=50000\n>>>\n>>>\n>>> Kindly help understanding this behavior. Some advice on how to fix this\n>>> will be a big help .\n>>>\n>>>\n>>>\n>>> Thanks,\n>>>\n>>> Dipanjan\n>>>\n>>", "msg_date": "Tue, 25 Feb 2020 14:35:05 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" }, { "msg_contents": "Hi Justin,\nI have already checked running Postgres processes and strangely never\ncounted more than 20.\n\n I'll check as you recommend on how ejabberd to postgresql connectivity\nworks. May be the answer lies there. Will get back if I find something.\n\nThanks for giving some direction to my thoughts.\n\nGood talk. 
👍👍\n\nBR,\nDipanjan\n\n\nOn Wed 26 Feb, 2020 1:05 am Justin, <[email protected]> wrote:\n\n> Hi Dipanjan\n>\n> If the connections are not being closed and left open , you should see\n> 50,000 processes running on the server because postgresql creates/forks a\n> new process for each connection\n>\n> Just having that many processes running will exhaust resources, I would\n> confirm that the process are still running.\n> you can use the command\n>\n> ps aux |wc -l\n>\n> to get a count on the number of processes\n> Beyond just opening the connection are there any actions such as Select *\n> from sometable being fired off to measure performance?\n>\n> Attempting to open and leave 50K connections open should exhaust the\n> server resources long before reaching 50K\n>\n> Something is off here I would be looking into how this test actually\n> works, how the connections are opened, and commands it sends to Postgresql\n>\n>\n>\n> On Tue, Feb 25, 2020 at 2:12 PM Dipanjan Ganguly <[email protected]>\n> wrote:\n>\n>> Hi Justin,\n>>\n>> Thanks for your insight.\n>>\n>> I agree with you completely, but as mentioned in my previous email, the\n>> fact that Postgres server resource utilization is less *\"( Avg load 1,\n>> Highest Cpu utilization 26%, lowest freemem 9000)*\" and it recovers at\n>> a certain point then consistently reaches close to 50 k , is what confusing\n>> me..\n>>\n>> Legends from the Tsung report:\n>> users\n>> Number of simultaneous users (it's session has started, but not yet\n>> finished).connectednumber of users with an opened TCP/UDP connection\n>> (example: for HTTP, during a think time, the TCP connection can be closed\n>> by the server, and it won't be reopened until the thinktime has expired)\n>> I have also used pgcluu to monitor the events. Sharing the stats below..*Memory\n>> information*\n>>\n>> - 15.29 GB Total memory\n>> - 8.79 GB Free memory\n>> - 31.70 MB Buffers\n>> - 5.63 GB Cached\n>> - 953.12 MB Total swap\n>> - 953.12 MB Free swap\n>> - 13.30 MB Page Tables\n>> - 3.19 GB Shared memory\n>>\n>> Any thoughts ??!! 🤔🤔\n>>\n>> Thanks,\n>> Dipanjan\n>>\n>>\n>> On Tue, Feb 25, 2020 at 10:31 PM Justin <[email protected]> wrote:\n>>\n>>> Hi Dipanjan\n>>>\n>>> Please do not post to all the postgresql mailing list lets keep this on\n>>> one list at a time, Keep this on general list\n>>>\n>>> Am i reading this correctly 10,000 to 50,000 open connections.\n>>> Postgresql really is not meant to serve that many open connections.\n>>> Due to design of Postgresql each client connection can use up to the\n>>> work_mem of 256MB plus additional for parallel processes. Memory will be\n>>> exhausted long before 50,0000 connections is reached\n>>>\n>>> I'm not surprised Postgresql and the server is showing issues long\n>>> before 10K connections is reached. 
The OS is probably throwing everything\n>>> to the swap file and see connections dropped or time out.\n>>>\n>>> Should be using a connection pooler to service this kind of load so the\n>>> Postgresql does not exhaust resources just from the open connections.\n>>> https://www.pgbouncer.org/\n>>>\n>>>\n>>> On Tue, Feb 25, 2020 at 11:29 AM Dipanjan Ganguly <[email protected]>\n>>> wrote:\n>>>\n>>>> Greetings,\n>>>>\n>>>> I was trying to use postgresql database as a backend with Ejabberd XMPP\n>>>> server for load test (Using TSUNG).\n>>>>\n>>>> Noticed, while using Mnesia the “simultaneous users and open TCP/UDP\n>>>> connections” graph in Tsung report is showing consistency, but while using\n>>>> Postgres, we see drop in connections during 100 to 500 seconds of runtime,\n>>>> and then recovering and staying consistent.\n>>>>\n>>>> I have been trying to figure out what the issue could be without any\n>>>> success. I am kind of a noob in this technology, and hoping for some help\n>>>> from the good people from the community to understand the problem and how\n>>>> to fix this. Below are some details..\n>>>>\n>>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>>> utilization 26%, lowest freemem 9000)\n>>>>\n>>>>\n>>>>\n>>>> Tsung graph:\n>>>> [image: image.png]\n>>>> Graph 1: Postgres 12 Backen\n>>>> [image: image.png]\n>>>>\n>>>> Graph 2: Mnesia backend\n>>>>\n>>>>\n>>>> · Ejabberd Server: Ubuntu 16.04, 16 GB ram, 4 core CPU.\n>>>>\n>>>> · Postgres on remote server: same config\n>>>>\n>>>> · Errors encountered during the same time:\n>>>> error_connect_etimedout (same outcome for other 2 tests)\n>>>>\n>>>> · *Tsung Load: *512 Bytes message size, user arrival rate\n>>>> 50/s, 80k registered users.\n>>>>\n>>>> · Postgres server utilization is low ( Avg load 1, Highest Cpu\n>>>> utilization 26%, lowest freemem 9000)\n>>>>\n>>>> · Same tsung.xm and userlist used for the tests in Mnesia and\n>>>> Postgres.\n>>>>\n>>>> *Postgres Configuration used:*\n>>>> shared_buffers = 4GB\n>>>> effective_cache_size = 12GB\n>>>> maintenance_work_mem = 1GB\n>>>> checkpoint_completion_target = 0.9\n>>>> wal_buffers = 16MB\n>>>> default_statistics_target = 100\n>>>> random_page_cost = 4\n>>>> effective_io_concurrency = 2\n>>>> work_mem = 256MB\n>>>> min_wal_size = 1GB\n>>>> max_wal_size = 2GB\n>>>> max_worker_processes = 4\n>>>> max_parallel_workers_per_gather = 2\n>>>> max_parallel_workers = 4\n>>>> max_parallel_maintenance_workers = 2\n>>>> max_connections=50000\n>>>>\n>>>>\n>>>> Kindly help understanding this behavior. Some advice on how to fix\n>>>> this will be a big help .\n>>>>\n>>>>\n>>>>\n>>>> Thanks,\n>>>>\n>>>> Dipanjan\n>>>>\n>>>", "msg_date": "Wed, 26 Feb 2020 01:23:57 +0530", "msg_from": "Dipanjan Ganguly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Connections dropping while using Postgres backend DB with\n Ejabberd" } ]
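Since the advice in this thread comes down to putting a pooler in front of PostgreSQL, here is a minimal pgbouncer sketch. The database name, auth file path and pool sizes are assumptions chosen for illustration, not values from the thread; the idea is that ejabberd connects to port 6432 (typically sql_server/sql_port in ejabberd.yml) while PostgreSQL keeps only a small number of real connections and max_connections can drop back to a few hundred.

    ; /etc/pgbouncer/pgbouncer.ini (sketch; all values are placeholders)
    [databases]
    ejabberd = host=127.0.0.1 port=5432 dbname=ejabberd

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; return the server connection to the pool after each transaction
    pool_mode = transaction
    ; idle client sockets are cheap for pgbouncer, so this can stay high
    max_client_conn = 50000
    ; actual PostgreSQL connections opened per database/user pair
    default_pool_size = 20

With transaction pooling, tens of thousands of mostly idle XMPP sessions map onto a few dozen backend processes, which avoids exhausting memory on per-connection work_mem as described above.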
[ { "msg_contents": "Dear all,\n\nI am facing a much, much slower query in production than on my\ndevelopment computer using a restored production backup, and I\ndon't understand why nor I see what I could do to speedup the\nquery on production :/\n\nShort data model explanation: one table stores tickets, one table\nstores multicards; tickets can belong to a multicard through a\nmany-to-one relationship, or be independant from multicards. The\nquery I am interested in, will be used to update a new column in\nthe multicards table, containing the count of related tickets.\n\nOn production:\n\n# EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on multicards (cost=0.00..1455177.30 rows=204548 width=12) (actual time=0.178..1694987.355 rows=204548 loops=1)\n SubPlan 1\n -> Aggregate (cost=7.07..7.08 rows=1 width=8) (actual time=8.283..8.283 rows=1 loops=204548)\n -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..7.05 rows=9 width=0) (actual time=1.350..8.280 rows=6 loops=204548)\n Index Cond: (multicard_uid = multicards.uid)\n Heap Fetches: 1174940\n Planning Time: 1.220 ms\n Execution Time: 1695029.673 ms\n\niostat in the middle of this execution:\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 479548 143652 79412 5609292 0 0 1986 116 999 1115 2 2 71 25\n 3 1 479548 134312 79412 5614564 0 0 2386 128 2832 3973 20 2 49 28\n 0 1 479548 142520 79428 5617708 0 0 1584 232 3440 3716 11 3 58 29\n 0 1 479548 161184 79024 5597756 0 0 1922 144 1249 1562 1 2 70 27\n 0 1 479548 161244 79048 5600804 0 0 1556 117 2138 3035 6 2 68 25\n 0 2 479548 158384 79048 5604008 0 0 1388 402 2970 4320 6 2 66 27\n\nOn my development computer:\n\n# EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on multicards (cost=0.00..538974.69 rows=204548 width=12) (actual time=0.055..451.691 rows=204548 loops=1)\n SubPlan 1\n -> Aggregate (cost=2.57..2.58 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=204548)\n -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..2.56 rows=7 width=0) (actual time=0.001..0.001 rows=6 loops=204548)\n Index Cond: (multicard_uid = multicards.uid)\n Heap Fetches: 0\n Planning Time: 0.296 ms\n Execution Time: 456.677 ms\n\nThe execution time ratio is a huge 3700. 
I guess the Heap Fetches\ndifference is the most meaningful here; my understanding would\nbe, that the index would easily fit in the shared_buffers after a\nfew subselects, as configuration is 2GB shared_buffers and here's\nthe index on disk size:\n\n# SELECT relname as index, reltuples as \"rows estimate\", pg_size_pretty(pg_table_size(quote_ident(relname))) as \"on disk size\" FROM pg_class, pg_namespace WHERE pg_namespace.oid = pg_class.relnamespace AND relkind = 'i' AND nspname = 'public' AND relname = 'tickets_multicard_uid';\n index | rows estimate | on disk size \n-----------------------+---------------+--------------\n tickets_multicard_uid | 7.2136e+06 | 161 MB\n\nThough it's not too clear for me what \"heap fetches\" are. It\nseems it might be the actual table data fetches (e.g. not an\nindex fetch), but I don't really know why so many of them are\nneeded here, and how to reduce that (if that's the explanation\nfor the longer timing).\n\nIt's true that production has constant activity, but not so much\nof it, load avg is typically about 0.5, most queries run very\nquickly, pg_stat_activity frequently reports no currently running\nquery; here's iostat during normal production activity, for\nreference:\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 0 480712 151112 10144 5664784 1 1 310 69 0 0 9 1 88 3\n 0 0 480712 151312 10152 5664876 0 0 0 248 4911 7072 9 3 87 0\n 0 0 480720 165488 10072 5642564 0 4 76 210 2019 2685 6 1 92 1\n 0 0 480720 165332 10088 5642508 0 0 0 221 3535 17545 25 4 70 1\n 0 0 480720 144772 10108 5643324 0 0 84 378 3833 5096 11 2 80 7\n 0 0 480720 143300 10116 5644144 0 0 42 298 3446 4784 6 1 92 1\n 0 0 480720 143300 10136 5644256 0 0 10 2340 1073 1496 1 1 96 2\n\nHere's also a second susequent run on production for comparison:\n\n# EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on multicards (cost=0.00..1455177.30 rows=204548 width=12) (actual time=0.101..834176.389 rows=204548 loops=1)\n SubPlan 1\n -> Aggregate (cost=7.07..7.08 rows=1 width=8) (actual time=4.075..4.075 rows=1 loops=204548)\n -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..7.05 rows=9 width=0) (actual time=0.624..4.072 rows=6 loops=204548)\n Index Cond: (multicard_uid = multicards.uid)\n Heap Fetches: 1174941\n Planning Time: 0.273 ms\n Execution Time: 834209.323 ms\n\nHeap fetches still almost the same, albeit timing divided by two.\n\nBoth environments are running postgresql 11.5, with 2GB\nshared_buffers. Differences I can think of: production is using\next4 on drbd on SATA and linux 3.2, dev is using ext4 (no drbd)\non SSD and linux 4.15. I can't believe SSD would explain the\ndifference alone? 
If positive, then I know what we should do on\nproduction..\n\nThanks for any hints/help!\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Wed, 26 Feb 2020 17:17:21 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "much slower query in production" }, { "msg_contents": "On Wed, Feb 26, 2020 at 05:17:21PM +0100, Guillaume Cottenceau wrote:\n> On production:\n> \n> # EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on multicards (cost=0.00..1455177.30 rows=204548 width=12) (actual time=0.178..1694987.355 rows=204548 loops=1)\n> SubPlan 1\n> -> Aggregate (cost=7.07..7.08 rows=1 width=8) (actual time=8.283..8.283 rows=1 loops=204548)\n> -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..7.05 rows=9 width=0) (actual time=1.350..8.280 rows=6 loops=204548)\n> Index Cond: (multicard_uid = multicards.uid)\n> Heap Fetches: 1174940\n> Planning Time: 1.220 ms\n> Execution Time: 1695029.673 ms\n\n> The execution time ratio is a huge 3700. I guess the Heap Fetches\n> difference is the most meaningful here;\n\nYes, it's doing an \"index only\" scan, but not very effectively.\nVacuum the tickets table to set relallvisible and see if that helps.\n\nIf so, try to keep it better vacuumed with something like\nALTER TABLE tickets SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 26 Feb 2020 10:28:09 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "By the way, I expect the time is cut in half while heap fetches stays\nsimilar because the index is now in OS cache on the second run and didn't\nneed to be fetched from disk. Definitely need to check on vacuuming as\nJustin says. If you have a fairly active system, you would need to run this\nquery many times in order to push other stuff out of shared_buffers and get\nthis query to perform more like it does on dev.\n\nDo you have the option to re-write the query or is this generated by an\nORM? You are forcing the looping as I read this query. If you aggregate\nbefore you join, then the system should be able to do a single scan of the\nindex, aggregate, then join those relatively few rows to the multicards\ntable records.\n\nSELECT transaction_uid, COALESCE( sub.count, 0 ) AS count FROM multicards\nLEFT JOIN (SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY\nmulticard_uid ) AS sub ON sub.multicard_uid = multicards.uid;\n\nBy the way, I expect the time is cut in half while heap fetches stays similar because the index is now in OS cache on the second run and didn't need to be fetched from disk. Definitely need to check on vacuuming as Justin says. If you have a fairly active system, you would need to run this query many times in order to push other stuff out of shared_buffers and get this query to perform more like it does on dev.Do you have the option to re-write the query or is this generated by an ORM? You are forcing the looping as I read this query. 
If you aggregate before you join, then the system should be able to do a single scan of the index, aggregate, then join those relatively few rows to the multicards table records.SELECT transaction_uid, COALESCE( sub.count, 0 ) AS count FROM multicards LEFT JOIN (SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub ON sub.multicard_uid = multicards.uid;", "msg_date": "Wed, 26 Feb 2020 09:52:37 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "Justin Pryzby <pryzby 'at' telsasoft.com> writes:\n\n> On Wed, Feb 26, 2020 at 05:17:21PM +0100, Guillaume Cottenceau wrote:\n>> On production:\n>> \n>> # EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n>> QUERY PLAN \n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Seq Scan on multicards (cost=0.00..1455177.30 rows=204548 width=12) (actual time=0.178..1694987.355 rows=204548 loops=1)\n>> SubPlan 1\n>> -> Aggregate (cost=7.07..7.08 rows=1 width=8) (actual time=8.283..8.283 rows=1 loops=204548)\n>> -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..7.05 rows=9 width=0) (actual time=1.350..8.280 rows=6 loops=204548)\n>> Index Cond: (multicard_uid = multicards.uid)\n>> Heap Fetches: 1174940\n>> Planning Time: 1.220 ms\n>> Execution Time: 1695029.673 ms\n>\n>> The execution time ratio is a huge 3700. I guess the Heap Fetches\n>> difference is the most meaningful here;\n>\n> Yes, it's doing an \"index only\" scan, but not very effectively.\n> Vacuum the tickets table to set relallvisible and see if that helps.\n>\n> If so, try to keep it better vacuumed with something like\n> ALTER TABLE tickets SET (AUTOVACUUM_VACUUM_SCALE_FACTOR=0.005);\n\nThanks for your reply! The effect is huge:\n\n# vacuum analyze tickets;\nVACUUM\nTime: 182850.756 ms (03:02.851)\n\n# EXPLAIN ANALYZE SELECT transaction_uid, (SELECT COUNT(*) FROM tickets WHERE multicard_uid = multicards.uid) from multicards;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on multicards (cost=0.00..947739.65 rows=204548 width=12) (actual time=15.579..5210.869 rows=204548 loops=1)\n SubPlan 1\n -> Aggregate (cost=4.59..4.60 rows=1 width=8) (actual time=0.025..0.025 rows=1 loops=204548)\n -> Index Only Scan using tickets_multicard_uid on tickets (cost=0.43..4.57 rows=8 width=0) (actual time=0.022..0.024 rows=6 loops=204548)\n Index Cond: (multicard_uid = multicards.uid)\n Heap Fetches: 8\n Planning Time: 71.469 ms\n Execution Time: 5223.408 ms\n(8 rows)\n\nTime: 5332.361 ms (00:05.332)\n\n(and subsequent executions are below 1 second)\n\nIt is actually consistent with using a restored backup on the dev\ncomputer, as my understanding is this comes out without any\ngarbage and like a perfectly vacuumed database. Btw do you have\nany hint as to how to perform timings using production data which\nare consistent with production? Backup/restore is maybe not the\nway to go, but rather a block device level copy?\n\nSince postgresql 8, I have to say I rely entirely on autovacuum,\nand did not notice it could really run too infrequently for the\nwork and create such difference. 
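(For the record, I suppose I can at least check how recently autovacuum
touched the table and how much of it is marked all-visible, with something
like this -- not sure it tells the whole story:

SELECT relname, last_vacuum, last_autovacuum, n_dead_tup, n_live_tup
  FROM pg_stat_user_tables
 WHERE relname = 'tickets';

SELECT relallvisible, relpages,
       round(100.0 * relallvisible / greatest(relpages, 1), 1) AS pct_all_visible
  FROM pg_class
 WHERE relname = 'tickets';
)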
I see in documentation a default\nautovacuum_vacuum_scale_factor = 0.2, is that something that is\ntypically lowered globally, e.g. maybe on a fairly active system?\nI am worried that changing that configuration for that table to\n0.005 would fix this query and similar ones, but later I might\nface the same situation on other tables. Or how would you elect\ntables for a lowered value configuration?\n\nThanks!\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Wed, 26 Feb 2020 19:02:05 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "Michael Lewis <mlewis 'at' entrata.com> writes:\n\n> By the way, I expect the time is cut in half while heap fetches stays similar because the index is now in OS cache on the\n> second run and didn't need to be fetched from disk. Definitely need to check on vacuuming as Justin says. If you have a fairly\n> active system, you would need to run this query many times in order to push other stuff out of shared_buffers and get this\n> query to perform more like it does on dev.\n>\n> Do you have the option to re-write the query or is this generated by an ORM? You are forcing the looping as I read this query.\n> If you aggregate before you join, then the system should be able to do a single scan of the index, aggregate, then join those\n> relatively few rows to the multicards table records.\n>\n> SELECT transaction_uid, COALESCE( sub.count, 0 ) AS count FROM multicards LEFT JOIN (SELECT multicard_uid, COUNT(*) AS count\n> FROM tickets GROUP BY multicard_uid ) AS sub ON sub.multicard_uid = multicards.uid;\n\nThanks for this hint! I always hit this fact that I never write\ngood queries using explicit joins :/\n\nExecution time (before vacuuming the table as adviced by Justin)\ndown 38x to 44509ms using this query :)\n\nReal query was an UPDATE of the multicards table to set the count\nvalue. I rewrote this using your approach but I think I lack what\ncoalesce did in your query, this would update only the rows where\ncount >= 1 obviously:\n\nUPDATE multicards\n SET defacements = count\n FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub\n WHERE uid = multicard_uid;\n\nAny hinted solution to do that in one pass? I could do a first\npass setting defacements = 0, but that would produce more garbage :/\n\nThanks!\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Wed, 26 Feb 2020 19:04:02 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: much slower query in production" }, { "msg_contents": ">\n> UPDATE multicards\n> SET defacements = COALESCE( count, 0 )\n> FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY\n> multicard_uid ) AS sub\n> WHERE uid = multicard_uid OR multicard_uid is null;\n>\n\nI expect this should work. Not sure of performance of course.\n\nUPDATE multicards\n   SET defacements = COALESCE( count, 0 )\n  FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub\n WHERE uid = multicard_uid OR multicard_uid is null;I expect this should work. 
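(If that OR on the NULL group plans badly, an untested alternative for a single pass would be a correlated form, e.g.

UPDATE multicards
   SET defacements = ( SELECT COUNT(*) FROM tickets WHERE tickets.multicard_uid = multicards.uid );

since COUNT(*) over an empty set gives 0 anyway.)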
Not sure of performance of course.", "msg_date": "Wed, 26 Feb 2020 11:18:53 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "Vacuum everything that you restored\n\nSent from my iPhone\n\n> On Feb 26, 2020, at 1:19 PM, Michael Lewis <[email protected]> wrote:\n> \n> \n>> UPDATE multicards\n>> SET defacements = COALESCE( count, 0 )\n>> FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub\n>> WHERE uid = multicard_uid OR multicard_uid is null;\n> \n> I expect this should work. Not sure of performance of course.\n\nVacuum everything that you restoredSent from my iPhoneOn Feb 26, 2020, at 1:19 PM, Michael Lewis <[email protected]> wrote:UPDATE multicards\n   SET defacements = COALESCE( count, 0 )\n  FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub\n WHERE uid = multicard_uid OR multicard_uid is null;I expect this should work. Not sure of performance of course.", "msg_date": "Wed, 26 Feb 2020 14:18:31 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "Michael Lewis <mlewis 'at' entrata.com> writes:\n\n> UPDATE multicards\n> SET defacements = COALESCE( count, 0 )\n> FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub\n> WHERE uid = multicard_uid OR multicard_uid is null;\n>\n> I expect this should work. Not sure of performance of course.\n\nThis looked great but as it seems you suspected, it's very slow :/\nI interrupted it after 5 minutes run on my dev computer.\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Wed, 26 Feb 2020 20:37:48 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "On Wed, Feb 26, 2020 at 11:17 AM Guillaume Cottenceau <[email protected]> wrote:\n\n> Dear all,\n>\n> I am facing a much, much slower query in production than on my\n> development computer using a restored production backup, and I\n> don't understand why nor I see what I could do to speedup the\n> query on production :/\n>\n\nYou've already seen a VACUUM fixed it.\n\nThis must have been a logical restore (from pg_dump), not a physical\nrestore (from pg_basebackup for example) correct?\n\nA physical restore should have resulted in a database in the same state of\nvacuuming as its source was. A logical restore will not, and since it is\nan append-only process it will not trigger autovacuum to run, either.\n\nIf you do a logical restore, probably the first should you do afterwards is\na VACUUM ANALYZE.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 26, 2020 at 11:17 AM Guillaume Cottenceau <[email protected]> wrote:Dear all,\n\nI am facing a much, much slower query in production than on my\ndevelopment computer using a restored production backup, and I\ndon't understand why nor I see what I could do to speedup the\nquery on production :/You've already seen a VACUUM fixed it.This must have been a logical restore (from pg_dump), not a physical restore (from pg_basebackup for example) correct?A physical restore should have resulted in a database in the same state of vacuuming as its source was.  
A logical restore will not, and since it is an append-only process it will not trigger autovacuum to run, either.If you do a logical restore, probably the first should you do afterwards is a VACUUM ANALYZE.Cheers,Jeff", "msg_date": "Wed, 26 Feb 2020 17:59:56 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "On Wed, Feb 26, 2020 at 1:02 PM Guillaume Cottenceau <[email protected]> wrote:\n\n>\n> It is actually consistent with using a restored backup on the dev\n> computer, as my understanding is this comes out without any\n> garbage and like a perfectly vacuumed database.\n\n\nI think I got that backwards in my previous email. It is the dev that was\nrestored, not the prod? But unless you went out of your way to vacuum dev,\nit would not be perfectly vacuumed. If it were a logical restore, it would\nbe perfectly unvacuumed, and if a physical restore would be in the same\nstate of vacuuming as the database it was cloned from.\n\n\n> Btw do you have\n> any hint as to how to perform timings using production data which\n> are consistent with production? Backup/restore is maybe not the\n> way to go, but rather a block device level copy?\n>\n\nblock device copy seems like overkill, just using pg_basebackup should be\ngood enough.\n\n\n>\n> Since postgresql 8, I have to say I rely entirely on autovacuum,\n> and did not notice it could really run too infrequently for the\n> work and create such difference. I see in documentation a default\n> autovacuum_vacuum_scale_factor = 0.2, is that something that is\n> typically lowered globally, e.g. maybe on a fairly active system?\n> I am worried that changing that configuration for that table to\n> 0.005 would fix this query and similar ones, but later I might\n> face the same situation on other tables. Or how would you elect\n> tables for a lowered value configuration?\n>\n\nThe autovacuum system has never been redesigned with the needs of\nindex-only-scans in mind. If I have a table for which index-only scans are\nimportant, I'd set autovacuum_vacuum_scale_factor = 0 and\nset autovacuum_vacuum_threshold to about 5% of the number of blocks in the\ntable. There is no syntax to say '5% of the number of blocks in the table'\nso you have to compute it yourself and hardcode the result, which makes it\nunsuitable for a global setting. And this still only addresses UPDATE and\nDELETE operations, not INSERTs. If you have INSERT only or mostly table\nfor which index-only-scans are important, you might need to set up cron\njobs to do vacuuming.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 26, 2020 at 1:02 PM Guillaume Cottenceau <[email protected]> wrote:\nIt is actually consistent with using a restored backup on the dev\ncomputer, as my understanding is this comes out without any\ngarbage and like a perfectly vacuumed database.I think I got that backwards in my previous email.  It is the dev that was restored, not the prod?  But unless you went out of your way to vacuum dev, it would not be perfectly vacuumed.  If it were a logical restore, it would be perfectly unvacuumed, and if a physical restore would be in the same state of vacuuming as the database it was cloned from.  Btw do you have\nany hint as to how to perform timings using production data which\nare consistent with production? Backup/restore is maybe not the\nway to go, but rather a block device level copy?block device copy seems like overkill, just using pg_basebackup should be good enough. 
\n\nSince postgresql 8, I have to say I rely entirely on autovacuum,\nand did not notice it could really run too infrequently for the\nwork and create such difference. I see in documentation a default\nautovacuum_vacuum_scale_factor = 0.2, is that something that is\ntypically lowered globally, e.g. maybe on a fairly active system?\nI am worried that changing that configuration for that table to\n0.005 would fix this query and similar ones, but later I might\nface the same situation on other tables. Or how would you elect\ntables for a lowered value configuration?The autovacuum system has never been redesigned with the needs of index-only-scans in mind.  If I have a table for which index-only scans are important, I'd set \n\nautovacuum_vacuum_scale_factor = 0 and set autovacuum_vacuum_threshold to about 5% of the number of blocks in the table.  There is no syntax to say '5% of the number of blocks in the table' so you have to compute it yourself and hardcode the result, which makes it unsuitable for a global setting.  And this still only addresses UPDATE and DELETE operations, not INSERTs.  If you have INSERT only or mostly table for which index-only-scans are important, you might need to set up cron jobs to do vacuuming.Cheers,Jeff", "msg_date": "Wed, 26 Feb 2020 18:17:58 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: much slower query in production" }, { "msg_contents": "Jeff Janes <jeff.janes 'at' gmail.com> writes:\n\n> On Wed, Feb 26, 2020 at 1:02 PM Guillaume Cottenceau <[email protected]> wrote:\n>\n> It is actually consistent with using a restored backup on the dev\n> computer, as my understanding is this comes out without any\n> garbage and like a perfectly vacuumed database.\n>\n> I think I got that backwards in my previous email. It is the\n> dev that was restored, not the prod? But unless you went out of\n\nYes (prod was also restored not so long ago, when updating to pg\n11.5 tho).\n\n> your way to vacuum dev, it would not be perfectly vacuumed. If\n> it were a logical restore, it would be perfectly unvacuumed,\n> and if a physical restore would be in the same state of\n> vacuuming as the database it was cloned from.\n>\n> Btw do you have\n> any hint as to how to perform timings using production data which\n> are consistent with production? Backup/restore is maybe not the\n> way to go, but rather a block device level copy?\n>\n> block device copy seems like overkill, just using pg_basebackup should be good enough.\n>\n> Since postgresql 8, I have to say I rely entirely on autovacuum,\n> and did not notice it could really run too infrequently for the\n> work and create such difference. I see in documentation a default\n> autovacuum_vacuum_scale_factor = 0.2, is that something that is\n> typically lowered globally, e.g. maybe on a fairly active system?\n> I am worried that changing that configuration for that table to\n> 0.005 would fix this query and similar ones, but later I might\n> face the same situation on other tables. Or how would you elect\n> tables for a lowered value configuration?\n>\n> The autovacuum system has never been redesigned with the needs of index-only-scans in mind. If I have a table for which\n> index-only scans are important, I'd set autovacuum_vacuum_scale_factor = 0 and set autovacuum_vacuum_threshold to about 5% of\n> the number of blocks in the table. 
There is no syntax to say '5% of the number of blocks in the table' so you have to compute\n> it yourself and hardcode the result, which makes it unsuitable for a global setting. And this still only addresses UPDATE and\n\nIt seems also difficult for us as this table grows over time (and\nis trimmed only infrequently).\n\n> DELETE operations, not INSERTs. If you have INSERT only or mostly table for which index-only-scans are important, you might\n> need to set up cron jobs to do vacuuming.\n\nThanks!\n\n-- \nGuillaume Cottenceau\n\n\n", "msg_date": "Thu, 27 Feb 2020 09:31:45 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: much slower query in production" } ]
[ { "msg_contents": "Running into a strange issue that just popped up on a few servers in my\nenvironment and was wondering if the community had any insight into as to\nwhat could be causing the issue.\n\nFirst, a bit of background. I am running Postgres 10.11 on Windows (but\nhave seen similar issue on a server running 11.6)\nWindows Version:\nMajor Minor Build Revision\n----- ----- ----- --------\n10 0 14393 0\n\nI have the following query that was on average running in ~2ms suddenly\njump up to on average ~25ms. This query is called millions of time per day\nand there were cases of the query taking 20-30 seconds. Below is the\nexplain analyze of one such example.\nWhen seeing this issue, the server was under some CPU pressure but even\nwith that, I would not think it should get as slow as shown below as we are\nusing SSDs and none of the windows disk counters (IOPS, queue length) show\nany value that would be of concern.\n\nexplain (analyze,buffers) SELECT\ntabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver\nFROM tabledata WHERE timestamp_date <= $1 AND int_otherid3 IN\n($2,$3,$4,$5,$6,$7) AND tabledata.int_id=$8 ORDER BY timestamp_date DESC\nLIMIT 1\n\nQUERY PLAN\n\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.71..139.67 rows=1 width=84) (actual\ntime=17719.076..17719.077 rows=1 loops=1)\n Buffers: shared hit=12102 read=13259 written=111\n -> Index Scan Backward using\nix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata\n (cost=0.71..2112828.54 rows=15204 width=84) (actual\ntime=17719.056..17719.057 rows=1 loops=1)\n Index Cond: ((int_id = 8149) AND (timestamp_date <=\n'2020-02-24 03:05:00.013'::timestamp without time zone))\n Filter: (int_otherid3 = ANY\n('{3ad2b707-a068-42e8-b0f2-6c8570953760,4e1b1bfa-34e1-48df-8cf8-2b59caf076e2,00d394dd-c2f4-4f3a-a8d4-dc208dafa686,baa904a6-8302-4fa3-b8ae-8adce8fe4306,3c99d61b-21a1-42ea-92a8-3cc88d79f3f1,befe0f8b-5911-47b3-bfae-faa9f8b09d08}'::uuid[]))\n Rows Removed by Filter: 91686\n Buffers: shared hit=12102 read=13259 written=111\n Planning time: 203.153 ms\n Execution time: 17719.200 ms\n(9 rows)\n\nIf I look at pg_stat_activity while the query is running all of the calls\nto this query have the same wait event.\nwait_event - DataFileRead\nwait_event_type - IO\n\nWe took a perfview during the issue and below is the call stack from a\nprocess running this query, two call paths are shown.\n---------------------------------------------------------------\n\nName\n ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire\n+ ntoskrnl!MiChargeWsles\n|+ ntoskrnl!MiObtainSystemCacheView\n||+ ntoskrnl!MmMapViewInSystemCache\n|| + ntoskrnl!CcGetVacbMiss\n|| + ntoskrnl!CcGetVirtualAddress\n|| + ntoskrnl!CcMapAndCopyFromCache\n|| + ntoskrnl!CcCopyReadEx\n|| + ntfs!NtfsCopyReadA\n|| |+ fltmgr!FltpPerformFastIoCall\n|| | + fltmgr!FltpPassThroughFastIo\n|| | + fltmgr!FltpFastIoRead\n|| | + ntoskrnl!NtReadFile\n|| | + ntdll!NtReadFile\n|| | + kernelbase!ReadFile\n|| | + msvcr120!_read_nolock\n|| | + msvcr120!_read\n|| | + postgres!PathNameOpenFile\n|| | + postgres!??mdclose\n|| | + postgres!ScheduleBufferTagForWriteback\n|| | + postgres!InitBufTable\n|| | + postgres!??PrefetchBuffer\n|| | |+ 
postgres!index_getnext_tid\n|| | | + postgres!index_fetch_heap\n|| | | + postgres!ExecIndexEvalRuntimeKeys\n|| | | + postgres!ExecAssignScanProjectionInfoWithVarno\n|| | | + postgres!tupledesc_match\n|| | | + postgres!recompute_limits\n|| | | + postgres!CheckValidRowMarkRel\n|| | | + postgres!list_length\n|| | | + pg_stat_statements!pgss_ExecutorRun\n|| | | + postgres!PortalRunFetch\n|| | | + postgres!PortalStart\n|| | | + postgres!exec_bind_message\n|| | | + postgres!PostgresMain\n|| | | + postgres!BackendInitialize\n|| | | + postgres!ClosePostmasterPorts\n|| | | + postgres!main\n|| | | + postgres!_onexit\n|| | | + kernel32!BaseThreadInitThunk\n|| | | + ntdll!RtlUserThreadStart\n\n\nName\n ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire\n+ ntoskrnl!MiChargeWsles\n|+ ntoskrnl!MiReleaseSystemCacheView\n| + ntoskrnl!MmUnmapViewInSystemCache\n| + ntoskrnl!CcUnmapVacb\n| + ntoskrnl!CcUnmapVacbArray\n| + ntoskrnl!CcGetVirtualAddress\n| + ntoskrnl!CcMapAndCopyFromCache\n| + ntoskrnl!CcCopyReadEx\n| + ntfs!NtfsCopyReadA\n| |+ fltmgr!FltpPerformFastIoCall\n| | + fltmgr!FltpPassThroughFastIo\n| | + fltmgr!FltpFastIoRead\n| | + ntoskrnl!NtReadFile\n| | + ntdll!NtReadFile\n| | |+ kernelbase!ReadFile\n| | | + msvcr120!_read_nolock\n| | | + msvcr120!_read\n| | | + postgres!PathNameOpenFile\n| | | + postgres!??mdclose\n| | | + postgres!ScheduleBufferTagForWriteback\n| | | + postgres!InitBufTable\n| | | + postgres!??PrefetchBuffer\n| | | |+ postgres!index_getnext_tid\n| | | | + postgres!index_fetch_heap\n| | | | + postgres!ExecIndexEvalRuntimeKeys\n| | | | + postgres!ExecAssignScanProjectionInfoWithVarno\n| | | | + postgres!tupledesc_match\n| | | | + postgres!recompute_limits\n| | | | + postgres!CheckValidRowMarkRel\n| | | | + postgres!list_length\n| | | | + pg_stat_statements!pgss_ExecutorRun\n| | | | + postgres!PortalRunFetch\n| | | | + postgres!PortalStart\n| | | | + postgres!exec_bind_message\n| | | | + postgres!PostgresMain\n| | | | + postgres!BackendInitialize\n| | | | + postgres!ClosePostmasterPorts\n| | | | + postgres!main\n| | | | + postgres!_onexit\n| | | | + kernel32!BaseThreadInitThunk\n| | | | + ntdll!RtlUserThreadStart\n\n\n\nIf I do a top down (ie from when the process started where did we spend the\nmost time) I get:\nName\n ROOT\n+ Process64 postgres (16668) Args: \"--forkbackend\" \"43216\"\n + Thread (16672) CPU=9399ms\n |+ ntdll!RtlUserThreadStart\n ||+ kernel32!BaseThreadInitThunk\n || + postgres!_onexit\n || + postgres!main\n || + postgres!ClosePostmasterPorts\n || + postgres!BackendInitialize\n || + postgres!PostgresMain\n || + postgres!exec_bind_message\n || + postgres!PortalStart\n || + postgres!PortalRunFetch\n || + pg_stat_statements!pgss_ExecutorRun\n || + postgres!list_length\n || + postgres!CheckValidRowMarkRel\n || + postgres!recompute_limits\n || + postgres!tupledesc_match\n || + postgres!ExecAssignScanProjectionInfoWithVarno\n || |+ postgres!ExecIndexEvalRuntimeKeys\n || ||+ postgres!index_fetch_heap\n || || + postgres!index_getnext_tid\n || || |+ postgres!??PrefetchBuffer\n || || ||+ postgres!InitBufTable\n || || |||+ postgres!ScheduleBufferTagForWriteback\n || || ||||+ postgres!??mdclose\n || || |||||+ postgres!PathNameOpenFile\n || || ||||||+ msvcr120!_read\n || || |||||| + msvcr120!_read_nolock\n || || |||||| |+ kernelbase!ReadFile\n || || |||||| ||+ ntdll!NtReadFile\n || || |||||| || + ntoskrnl!NtReadFile\n || || |||||| || |+ fltmgr!FltpFastIoRead\n || || |||||| || ||+ fltmgr!FltpPassThroughFastIo\n || || |||||| || |||+ fltmgr!FltpPerformFastIoCall\n || || |||||| || 
||||+ ntfs!NtfsCopyReadA\n || || |||||| || |||| + ntoskrnl!CcCopyReadEx\n || || |||||| || |||| |+ ntoskrnl!CcMapAndCopyFromCache\n || || |||||| || |||| | + ntoskrnl!CcGetVirtualAddress\n || || |||||| || |||| | |+ ntoskrnl!CcUnmapVacbArray\n || || |||||| || |||| | ||+ ntoskrnl!CcUnmapVacb\n || || |||||| || |||| | |||+ ntoskrnl!MmUnmapViewInSystemCache\n || || |||||| || |||| | ||| +\nntoskrnl!ExAcquireSpinLockExclusive\n || || |||||| || |||| | ||| |+\nntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire\n\n\nAlso from this same perfview the following looks to be from the checkpoint\nprocess waiting on the same lock\n\n Name\n ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire\n+ ntoskrnl!ExAcquireSpinLockExclusive\n + ntoskrnl!MiAcquireProperVm\n + ntoskrnl!MiTrimSharedPageFromViews\n + ntoskrnl!MiTrimSection\n + ntoskrnl!MmTrimSection\n + ntoskrnl!CcCoherencyFlushAndPurgeCache\n + ntfs!NtfsFlushUserStream\n + ntfs!NtfsPerformOptimisticFlush\n |+ ntfs!NtfsCommonFlushBuffers\n | + ntfs!NtfsCommonFlushBuffersCallout\n | + ntoskrnl!KeExpandKernelStackAndCalloutInternal\n | + ntfs!NtfsCommonFlushBuffersOnNewStack\n | + ntfs!NtfsFsdFlushBuffers\n | + fltmgr!FltpLegacyProcessingAfterPreCallbacksCompleted\n | + fltmgr!FltpDispatch\n | + ntoskrnl!IopSynchronousServiceTail\n | + ntoskrnl!NtFlushBuffersFileEx\n | + ntoskrnl!NtFlushBuffersFile\n | + ntdll!NtFlushBuffersFile\n | |+ kernelbase!FlushFileBuffers\n | | + msvcr120!_commit\n | | + postgres!FileClose\n | | + postgres!mdtruncate\n | | + postgres!??ReleaseBuffer\n | | + postgres!CreateCheckPoint\n | | + postgres!CheckpointerMain\n | | + postgres!AuxiliaryProcessMain\n | | + postgres!MaxLivePostmasterChildren\n | | + postgres!main\n | | + postgres!_onexit\n | | + kernel32!BaseThreadInitThunk\n | | + ntdll!RtlUserThreadStart\n\n\nIn order to get by we increased the shared_buffers from 500MB to 50GB on\nthis server (and 10GB on another server) but in my opinion this is just\nmasking the issue. Was wondering if anyone in the community has seen\ncontention with this lock before or has other any insights as to why we\nwould suddenly run into this issue?\n\n\nBen Snaidero\n*Geotab*\nSenior Database Specialist\nDirect +1 (289) 230-7749\nToll-free +1 (877) 436-8221\nVisit www.geotab.com\nTwitter <https://twitter.com/geotab> | Facebook\n<https://www.facebook.com/Geotab> | YouTube\n<https://www.youtube.com/user/MyGeotab> | LinkedIn\n<https://www.linkedin.com/company/geotab/>\n\nJoin us at Connect 2020\n\nSan Diego\n\nJanuary 13 - 16, 2020\n\nRegister Now! <https://www.geotab.com/connect/>\n\nRunning into a strange issue that just popped up on a few servers in my environment and was wondering if the community had any insight into as to what could be causing the issue.First, a bit of background. I am running Postgres 10.11 on Windows (but have seen similar issue on a server running 11.6)Windows Version:Major  Minor  Build  Revision-----  -----  -----  --------10     0      14393  0  I have the following query that was on average running in ~2ms suddenly jump up to on average ~25ms.  This query is called millions of time per day and there were cases of the query taking 20-30 seconds.  
Below is the explain analyze of one such example.When seeing this issue, the server was under some CPU pressure but even with that, I would not think it should get as slow as shown below as we are using SSDs and none of the windows disk counters (IOPS, queue length) show any value that would be of concern.\texplain (analyze,buffers) SELECT tabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver\tFROM tabledata WHERE timestamp_date <= $1 AND int_otherid3 IN ($2,$3,$4,$5,$6,$7) AND tabledata.int_id=$8 ORDER BY timestamp_date DESC LIMIT 1\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tQUERY PLAN                                                                       \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t  \t----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.71..139.67 rows=1 width=84) (actual time=17719.076..17719.077 rows=1 loops=1)       Buffers: shared hit=12102 read=13259 written=111       ->  Index Scan Backward using ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata  (cost=0.71..2112828.54 rows=15204 width=84) (actual time=17719.056..17719.057 rows=1 loops=1)             Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time zone))             Filter: (int_otherid3 = ANY ('{3ad2b707-a068-42e8-b0f2-6c8570953760,4e1b1bfa-34e1-48df-8cf8-2b59caf076e2,00d394dd-c2f4-4f3a-a8d4-dc208dafa686,baa904a6-8302-4fa3-b8ae-8adce8fe4306,3c99d61b-21a1-42ea-92a8-3cc88d79f3f1,befe0f8b-5911-47b3-bfae-faa9f8b09d08}'::uuid[]))             Rows Removed by Filter: 91686             Buffers: shared hit=12102 read=13259 written=111 Planning time: 203.153 ms Execution time: 17719.200 ms\t(9 rows)If I look at pg_stat_activity while the query is running all of the calls to this query have the same wait event.wait_event - DataFileReadwait_event_type - IOWe took a perfview during the issue and below is the call stack from a process running this query, two call paths are shown.---------------------------------------------------------------Name ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire+ ntoskrnl!MiChargeWsles|+ ntoskrnl!MiObtainSystemCacheView||+ ntoskrnl!MmMapViewInSystemCache|| + ntoskrnl!CcGetVacbMiss||  + ntoskrnl!CcGetVirtualAddress||   + ntoskrnl!CcMapAndCopyFromCache||    + ntoskrnl!CcCopyReadEx||     + ntfs!NtfsCopyReadA||     |+ fltmgr!FltpPerformFastIoCall||     | + fltmgr!FltpPassThroughFastIo||     |  + fltmgr!FltpFastIoRead||     |   + ntoskrnl!NtReadFile||     |    + ntdll!NtReadFile||     |     + kernelbase!ReadFile||     |      + msvcr120!_read_nolock||     |       + msvcr120!_read||     |        + postgres!PathNameOpenFile||     |         + postgres!??mdclose||     |          + postgres!ScheduleBufferTagForWriteback||     |           + postgres!InitBufTable||     |            + postgres!??PrefetchBuffer||     |            |+ postgres!index_getnext_tid||     |            | + postgres!index_fetch_heap||     |            |  + postgres!ExecIndexEvalRuntimeKeys||     |            |   + postgres!ExecAssignScanProjectionInfoWithVarno||     |            |    + postgres!tupledesc_match||     |            |     + postgres!recompute_limits||     |            |      + 
postgres!CheckValidRowMarkRel||     |            |       + postgres!list_length||     |            |        + pg_stat_statements!pgss_ExecutorRun||     |            |         + postgres!PortalRunFetch||     |            |          + postgres!PortalStart||     |            |           + postgres!exec_bind_message||     |            |            + postgres!PostgresMain||     |            |             + postgres!BackendInitialize||     |            |              + postgres!ClosePostmasterPorts||     |            |               + postgres!main||     |            |                + postgres!_onexit||     |            |                 + kernel32!BaseThreadInitThunk||     |            |                  + ntdll!RtlUserThreadStartName ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire+ ntoskrnl!MiChargeWsles|+ ntoskrnl!MiReleaseSystemCacheView| + ntoskrnl!MmUnmapViewInSystemCache|  + ntoskrnl!CcUnmapVacb|   + ntoskrnl!CcUnmapVacbArray|    + ntoskrnl!CcGetVirtualAddress|     + ntoskrnl!CcMapAndCopyFromCache|      + ntoskrnl!CcCopyReadEx|       + ntfs!NtfsCopyReadA|       |+ fltmgr!FltpPerformFastIoCall|       | + fltmgr!FltpPassThroughFastIo|       |  + fltmgr!FltpFastIoRead|       |   + ntoskrnl!NtReadFile|       |    + ntdll!NtReadFile|       |    |+ kernelbase!ReadFile|       |    | + msvcr120!_read_nolock|       |    |  + msvcr120!_read|       |    |   + postgres!PathNameOpenFile|       |    |    + postgres!??mdclose|       |    |     + postgres!ScheduleBufferTagForWriteback|       |    |      + postgres!InitBufTable|       |    |       + postgres!??PrefetchBuffer|       |    |       |+ postgres!index_getnext_tid|       |    |       | + postgres!index_fetch_heap|       |    |       |  + postgres!ExecIndexEvalRuntimeKeys|       |    |       |   + postgres!ExecAssignScanProjectionInfoWithVarno|       |    |       |    + postgres!tupledesc_match|       |    |       |     + postgres!recompute_limits|       |    |       |      + postgres!CheckValidRowMarkRel|       |    |       |       + postgres!list_length|       |    |       |        + pg_stat_statements!pgss_ExecutorRun|       |    |       |         + postgres!PortalRunFetch|       |    |       |          + postgres!PortalStart|       |    |       |           + postgres!exec_bind_message|       |    |       |            + postgres!PostgresMain|       |    |       |             + postgres!BackendInitialize|       |    |       |              + postgres!ClosePostmasterPorts|       |    |       |               + postgres!main|       |    |       |                + postgres!_onexit|       |    |       |                 + kernel32!BaseThreadInitThunk|       |    |       |                  + ntdll!RtlUserThreadStartIf I do a top down (ie from when the process started where did we spend the most time) I get:Name ROOT+ Process64 postgres (16668) Args:  \"--forkbackend\" \"43216\" + Thread (16672) CPU=9399ms |+ ntdll!RtlUserThreadStart ||+ kernel32!BaseThreadInitThunk || + postgres!_onexit ||  + postgres!main ||   + postgres!ClosePostmasterPorts ||    + postgres!BackendInitialize ||     + postgres!PostgresMain ||      + postgres!exec_bind_message ||       + postgres!PortalStart ||        + postgres!PortalRunFetch ||         + pg_stat_statements!pgss_ExecutorRun ||          + postgres!list_length ||           + postgres!CheckValidRowMarkRel ||            + postgres!recompute_limits ||             + postgres!tupledesc_match ||              + postgres!ExecAssignScanProjectionInfoWithVarno ||              |+ postgres!ExecIndexEvalRuntimeKeys ||              
||+ postgres!index_fetch_heap ||              || + postgres!index_getnext_tid ||              || |+ postgres!??PrefetchBuffer ||              || ||+ postgres!InitBufTable ||              || |||+ postgres!ScheduleBufferTagForWriteback ||              || ||||+ postgres!??mdclose ||              || |||||+ postgres!PathNameOpenFile ||              || ||||||+ msvcr120!_read ||              || |||||| + msvcr120!_read_nolock ||              || |||||| |+ kernelbase!ReadFile ||              || |||||| ||+ ntdll!NtReadFile ||              || |||||| || + ntoskrnl!NtReadFile ||              || |||||| || |+ fltmgr!FltpFastIoRead ||              || |||||| || ||+ fltmgr!FltpPassThroughFastIo ||              || |||||| || |||+ fltmgr!FltpPerformFastIoCall ||              || |||||| || ||||+ ntfs!NtfsCopyReadA ||              || |||||| || |||| + ntoskrnl!CcCopyReadEx ||              || |||||| || |||| |+ ntoskrnl!CcMapAndCopyFromCache ||              || |||||| || |||| | + ntoskrnl!CcGetVirtualAddress ||              || |||||| || |||| | |+ ntoskrnl!CcUnmapVacbArray ||              || |||||| || |||| | ||+ ntoskrnl!CcUnmapVacb ||              || |||||| || |||| | |||+ ntoskrnl!MmUnmapViewInSystemCache ||              || |||||| || |||| | ||| + ntoskrnl!ExAcquireSpinLockExclusive ||              || |||||| || |||| | ||| |+ ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire  Also from this same perfview the following looks to be from the checkpoint process waiting on the same lock   Name ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire+ ntoskrnl!ExAcquireSpinLockExclusive + ntoskrnl!MiAcquireProperVm  + ntoskrnl!MiTrimSharedPageFromViews   + ntoskrnl!MiTrimSection    + ntoskrnl!MmTrimSection     + ntoskrnl!CcCoherencyFlushAndPurgeCache      + ntfs!NtfsFlushUserStream       + ntfs!NtfsPerformOptimisticFlush       |+ ntfs!NtfsCommonFlushBuffers       | + ntfs!NtfsCommonFlushBuffersCallout       |  + ntoskrnl!KeExpandKernelStackAndCalloutInternal       |   + ntfs!NtfsCommonFlushBuffersOnNewStack       |    + ntfs!NtfsFsdFlushBuffers       |     + fltmgr!FltpLegacyProcessingAfterPreCallbacksCompleted       |      + fltmgr!FltpDispatch       |       + ntoskrnl!IopSynchronousServiceTail       |        + ntoskrnl!NtFlushBuffersFileEx       |         + ntoskrnl!NtFlushBuffersFile       |          + ntdll!NtFlushBuffersFile       |          |+ kernelbase!FlushFileBuffers       |          | + msvcr120!_commit       |          |  + postgres!FileClose       |          |   + postgres!mdtruncate       |          |    + postgres!??ReleaseBuffer       |          |     + postgres!CreateCheckPoint       |          |      + postgres!CheckpointerMain       |          |       + postgres!AuxiliaryProcessMain       |          |        + postgres!MaxLivePostmasterChildren       |          |         + postgres!main       |          |          + postgres!_onexit       |          |           + kernel32!BaseThreadInitThunk       |          |            + ntdll!RtlUserThreadStartIn order to get by we increased the shared_buffers from 500MB to 50GB on this server (and 10GB on another server) but in my opinion this is just masking the issue.  
Was wondering if anyone in the community has seen contention with this lock before or has other any insights as to why we would suddenly run into this issue?Ben SnaideroGeotabSenior Database SpecialistDirect+1 (289) 230-7749Toll-free+1 (877) 436-8221Visitwww.geotab.comTwitter | Facebook | YouTube | LinkedInJoin us at Connect 2020San DiegoJanuary 13 - 16, 2020Register Now!", "msg_date": "Thu, 27 Feb 2020 11:33:25 -0500", "msg_from": "Ben Snaidero <[email protected]>", "msg_from_op": true, "msg_subject": "Many DataFileRead - IO waits" }, { "msg_contents": "How big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on\ndisk? If you create another index with same fields, how much space does it\ntake? Real question- are you vacuuming aggressively enough for your\nworkload? Your index name seems to indicate that intotherid3 would be the\nthird key, and yet the planner chose not to scan that deep and instead\nfiltered after it found the relevant tuples based on intid and\ntimestampdate. That seems peculiar to me.\n\nThe documentation discourages multi-column indexes because they have\nlimited application unless the same fields are always used. Personally, I\ndon't love reviewing the stats of indexscans or how many tuples were\nfetched and having to guess how deeply the index was scanned for the\nvarious queries involved.\n\nI'd wonder if an index on only intid_timestampdate would be both much\nsmaller and also have a more right-leaning pattern of information being\nadded and accessed in terms of keeping frequently needing blocks in shared\nbuffers.\n\nAs a side note, that planning time seems high to me for such a simple\nquery. Have you increased default_statistics_target significantly perhaps?\n\n>\n\nHow big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on disk? If you create another index with same fields, how much space does it take? Real question- are you vacuuming aggressively enough for your workload? Your index name seems to indicate that intotherid3 would be the third key, and yet the planner chose not to scan that deep and instead filtered after it found the relevant tuples based on intid and timestampdate. That seems peculiar to me.The documentation discourages multi-column indexes because they have limited application unless the same fields are always used. Personally, I don't love reviewing the stats of indexscans or how many tuples were fetched and having to guess how deeply the index was scanned for the various queries involved.I'd wonder if an index on only intid_timestampdate would be both much smaller and also have a more right-leaning pattern of information being added and accessed in terms of keeping frequently needing blocks in shared buffers.As a side note, that planning time seems high to me for such a simple query. Have you increased default_statistics_target significantly perhaps?", "msg_date": "Thu, 27 Feb 2020 09:54:04 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "On Thu, Feb 27, 2020 at 11:54 AM Michael Lewis <[email protected]> wrote:\n\n> How big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on\n> disk? If you create another index with same fields, how much space does it\n> take? Real question- are you vacuuming aggressively enough for your\n> workload? 
Your index name seems to indicate that intotherid3 would be the\n> third key, and yet the planner chose not to scan that deep and instead\n> filtered after it found the relevant tuples based on intid and\n> timestampdate. That seems peculiar to me.\n>\n> The documentation discourages multi-column indexes because they have\n> limited application unless the same fields are always used. Personally, I\n> don't love reviewing the stats of indexscans or how many tuples were\n> fetched and having to guess how deeply the index was scanned for the\n> various queries involved.\n>\n> I'd wonder if an index on only intid_timestampdate would be both much\n> smaller and also have a more right-leaning pattern of information being\n> added and accessed in terms of keeping frequently needing blocks in shared\n> buffers.\n>\n> As a side note, that planning time seems high to me for such a simple\n> query. Have you increased default_statistics_target significantly perhaps?\n>\n\nIn this case the index is quite large ~400GB but as you can see from the\nexplain plan it's doing a backward scan and not accessing that many\nbuffers. Other servers with this issue are much smaller. We have\nautovacuum set to the default setting but this table does not get any\ndeletes so I don't think that is the problem. I think the reason it does\nnot go deeper into the index keys is because it's just looking for the\nfirst occurence based on date (limit 1) not all of them although even if\nlooking for all of them I think it would still scan in the same way since\nthere would be other intotherid3 values between the ones in this search key\n\nIn regards to default_statistics_target I have not increased this value at\nall.\n\nAll this said regarding statistics and vacuum/bloat we restored a two day\nold copy of the database (on one of the servers experiencing the issue) and\nthe issue was still present. These systems are all on cloud infrastructure\nso I am leaning towards it being something hardware related (especially as\nit's only happening on a few servers) but our cloud provider says nothing\nhas changed in this respect.\n\nOn Thu, Feb 27, 2020 at 11:54 AM Michael Lewis <[email protected]> wrote:How big is ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on disk? If you create another index with same fields, how much space does it take? Real question- are you vacuuming aggressively enough for your workload? Your index name seems to indicate that intotherid3 would be the third key, and yet the planner chose not to scan that deep and instead filtered after it found the relevant tuples based on intid and timestampdate. That seems peculiar to me.The documentation discourages multi-column indexes because they have limited application unless the same fields are always used. Personally, I don't love reviewing the stats of indexscans or how many tuples were fetched and having to guess how deeply the index was scanned for the various queries involved.I'd wonder if an index on only intid_timestampdate would be both much smaller and also have a more right-leaning pattern of information being added and accessed in terms of keeping frequently needing blocks in shared buffers.As a side note, that planning time seems high to me for such a simple query. Have you increased default_statistics_target significantly perhaps?In this case the index is quite large ~400GB but as you can see from the explain plan it's doing a backward scan and not accessing that many buffers.  Other servers with this issue are much smaller.  
We have autovacuum set to the default setting but this table does not get any deletes so I don't think that is the problem.  I think the reason it does not go deeper into the index keys is because it's just looking for the first occurence based on date (limit 1) not all of them although even if looking for all of them I think it would still scan in the same way since there would be other intotherid3 values between the ones in this search keyIn regards to default_statistics_target I have not increased this value at all.All this said regarding statistics and vacuum/bloat we restored a two day old copy of the database (on one of the servers experiencing the issue) and the issue was still present.  These systems are all on cloud infrastructure so I am leaning towards it being something hardware related (especially as it's only happening on a few servers) but our cloud provider says nothing has changed in this respect.", "msg_date": "Fri, 28 Feb 2020 09:01:50 -0500", "msg_from": "Ben Snaidero <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "If no updates or deletes are happening on the table, it would be best\npractice to set up a scheduled manual vacuum analyze to ensure statistics\nand the visibility map is updated. Other than creating the index on the\nfirst two columns only, I'm out of ideas. Hopefully someone running\nPostgres at large scale on Windows will chime in.\n\nIf no updates or deletes are happening on the table, it would be best practice to set up a scheduled manual vacuum analyze to ensure statistics and the visibility map is updated. Other than creating the index on the first two columns only, I'm out of ideas. Hopefully someone running Postgres at large scale on Windows will chime in.", "msg_date": "Fri, 28 Feb 2020 09:40:46 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "On Fri, Feb 28, 2020 at 11:41 AM Michael Lewis <[email protected]> wrote:\n\n> If no updates or deletes are happening on the table, it would be best\n> practice to set up a scheduled manual vacuum analyze to ensure statistics\n> and the visibility map is updated. Other than creating the index on the\n> first two columns only, I'm out of ideas. Hopefully someone running\n> Postgres at large scale on Windows will chime in.\n>\n\nYep. We run manual vacuum freeze analyze weekly to ensure visibility map\nis updated and statistics are up to date.\n\nThanks for taking the time to look.\n\nOn Fri, Feb 28, 2020 at 11:41 AM Michael Lewis <[email protected]> wrote:If no updates or deletes are happening on the table, it would be best practice to set up a scheduled manual vacuum analyze to ensure statistics and the visibility map is updated. Other than creating the index on the first two columns only, I'm out of ideas. Hopefully someone running Postgres at large scale on Windows will chime in.Yep.  
We run manual vacuum freeze analyze weekly to ensure visibility map is updated and statistics are up to date.Thanks for taking the time to look.", "msg_date": "Fri, 28 Feb 2020 11:53:47 -0500", "msg_from": "Ben Snaidero <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "Hello,\nI'm not able to use your perfs diagrams, \nbut it seems to me that not using 3rd column of that index (int_otherid2)\ngenerates an IO problem.\n\nCould you give us the result of\n\nexplain (analyze,buffers) SELECT\ntabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver\nFROM tabledata \nWHERE timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time\nzone\nND int_otherid3 = '3ad2b707-a068-42e8-b0f2-6c8570953760'\nAND tabledata.int_id=8149 \nORDER BY timestamp_date DESC \nLIMIT 1\n\nand this for each value of int_otherid3 ?\nand tell us if you are able to change the sql ?\n\nThanks\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Fri, 28 Feb 2020 12:00:09 -0700 (MST)", "msg_from": "legrand legrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "On Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <[email protected]>\nwrote:\n\n\n> I have the following query that was on average running in ~2ms suddenly\n> jump up to on average ~25ms.\n>\n\nWhat are you averaging over? The plan you show us is slow enough that if\nyou were averaging over the last 1000 executions, that one execution could\nskew the entire average just by itself. When individual execution times\ncan vary over 4 powers of 10, I don't think averages are a very good way of\nanalyzing things.\n\n\n\n> This query is called millions of time per day and there were cases of the\n> query taking 20-30 seconds. Below is the explain analyze of one such\n> example.\n> When seeing this issue, the server was under some CPU pressure but even\n> with that, I would not think it should get as slow as shown below as we are\n> using SSDs and none of the windows disk counters (IOPS, queue length) show\n> any value that would be of concern.\n>\n\nWhat is the average and expected random read latency on your SSDs? Have\nyou benchmarked them (outside of the database system) to see if they are\nperforming as expected?\n\n\n> Rows Removed by Filter: 91686\n> Buffers: shared hit=12102 read=13259 written=111\n>\n\nDo the faster executions have fewer rows removed by the filter (and fewer\nbuffers read), or are they just faster despite having about the same values?\n\n\n\n> We took a perfview during the issue and below is the call stack from a\n> process running this query, two call paths are shown.\n>\n\nI've never used perfview. But if I try to naively interpret it similar to\ngdb backtrace, it doesn't make much sense to me. InitBufTable is only\ncalled by \"postmaster\" while starting the database, how could it be part of\ncall paths during regular operations? Are these views of the slow-running\nback end itself, or of some other postgresql process which was idle at the\ntime the snapshot was taken?\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <[email protected]> wrote: I have the following query that was on average running in ~2ms suddenly jump up to on average ~25ms.  What are you averaging over?  
The plan you show us is slow enough that if you were averaging over the last 1000 executions, that one execution could skew the entire average just by itself.  When individual execution times can vary over 4 powers of 10, I don't think averages are a very good way of analyzing things. This query is called millions of time per day and there were cases of the query taking 20-30 seconds.  Below is the explain analyze of one such example.When seeing this issue, the server was under some CPU pressure but even with that, I would not think it should get as slow as shown below as we are using SSDs and none of the windows disk counters (IOPS, queue length) show any value that would be of concern.What is the average and expected random read latency on your SSDs?  Have you benchmarked them (outside of the database system) to see if they are performing as expected?              Rows Removed by Filter: 91686             Buffers: shared hit=12102 read=13259 written=111Do the faster executions have fewer rows removed by the filter (and fewer buffers read), or are they just faster despite having about the same values? We took a perfview during the issue and below is the call stack from a process running this query, two call paths are shown.I've never used perfview.  But if I try to naively interpret it similar to gdb backtrace, it doesn't make much sense to me.  InitBufTable is only called by \"postmaster\" while starting the database, how could it be part of call paths during regular operations?  Are these views of the slow-running back end itself, or of some other postgresql process which was idle at the time the snapshot was taken?Cheers,Jeff", "msg_date": "Sat, 29 Feb 2020 11:21:48 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "On Fri, Feb 28, 2020 at 2:00 PM legrand legrand <[email protected]>\nwrote:\n\n> Hello,\n> I'm not able to use your perfs diagrams,\n> but it seems to me that not using 3rd column of that index (int_otherid2)\n> generates an IO problem.\n>\n> Could you give us the result of\n>\n> explain (analyze,buffers) SELECT\n>\n> tabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver\n> FROM tabledata\n> WHERE timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time\n> zone\n> ND int_otherid3 = '3ad2b707-a068-42e8-b0f2-6c8570953760'\n> AND tabledata.int_id=8149\n> ORDER BY timestamp_date DESC\n> LIMIT 1\n>\n> and this for each value of int_otherid3 ?\n> and tell us if you are able to change the sql ?\n>\n> Thanks\n> Regards\n> PAscal\n>\n>\n>\nThanks for the suggestion. 
Yes I could change the sql and when using only\none filter for int_otherid2 it does use all 3 columns as the index key.\n\nexplain (analyze,buffers) SELECT\nuuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowver\nFROM tabledata WHERE dtdatetime <= '2020-01-20 03:05:00.013' AND\ngDiagnosticId IN ('3c99d61b-21a1-42ea-92a8-3cc88d79f3f1') AND\n ivehicleid=8149 ORDER BY dtdatetime DESC LIMIT 1\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.71..85.13 rows=1 width=84) (actual time=300.820..300.821\nrows=1 loops=1)\n Buffers: shared hit=17665 read=1\n -> Index Scan Backward using\nix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata\n(cost=0.71..41960.39 rows=497 width=84) (actual time=300.808..300.809\nrows=1 loops=1)\n Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-01-20\n03:05:00.013'::timestamp without time zone) AND (int_otherid2 =\n'3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'::uuid))\n Buffers: shared hit=17665 read=1\n Planning time: 58.769 ms\n Execution time: 300.895 ms\n(7 rows)\n\nI still haven't been able to explain why this changed all of a sudden (I am\nworking on reproducing this error in a test environment) but this could be\na good workaround. I might be able to just make 6 calls or maybe rewrite\nthe original query some other way in order to get it to use all 3 keys of\nthe index. I'll have to do some more testing\n\nThanks again.\n\nOn Fri, Feb 28, 2020 at 2:00 PM legrand legrand <[email protected]> wrote:Hello,\nI'm not able to use your perfs diagrams, \nbut it seems to me that not using 3rd column of that index (int_otherid2)\ngenerates an IO problem.\n\nCould you give us the result of\n\nexplain (analyze,buffers) SELECT\ntabledata.uuid_id,tabledata.int_id,tabledata.timestamp_date,tabledata.int_otherid,tabledata.float_value,tabledata.int_otherid2,tabledata.int_otherid3,tabledata.int_rowver\nFROM tabledata \nWHERE timestamp_date <= '2020-02-24 03:05:00.013'::timestamp without time\nzone\nND int_otherid3 = '3ad2b707-a068-42e8-b0f2-6c8570953760'\nAND tabledata.int_id=8149 \nORDER BY timestamp_date DESC \nLIMIT 1\n\nand this for each value of int_otherid3 ?\nand tell us if you are able to change the sql ?\n\nThanks\nRegards\nPAscal\nThanks for the suggestion.  Yes I could change the sql and when using only one filter for int_otherid2 it does use all 3 columns as the index key.  
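For the multi-value case, one possible rewrite (just a sketch, untested, using the anonymised column names from earlier in the thread) is to take the latest row per id in its own branch and then the overall latest row on top, so each branch can use all three key columns of the index with a backward scan and LIMIT 1:

SELECT * FROM (
    (SELECT * FROM tabledata
      WHERE int_id = 8149
        AND timestamp_date <= '2020-01-20 03:05:00.013'
        AND int_otherid2 = '3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'
      ORDER BY timestamp_date DESC LIMIT 1)
    UNION ALL
    (SELECT * FROM tabledata
      WHERE int_id = 8149
        AND timestamp_date <= '2020-01-20 03:05:00.013'
        AND int_otherid2 = '<next uuid>'   -- placeholder: one branch per uuid in the original IN list
      ORDER BY timestamp_date DESC LIMIT 1)
) t
ORDER BY timestamp_date DESC
LIMIT 1;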
explain (analyze,buffers) SELECT uuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowverFROM tabledata WHERE dtdatetime <= '2020-01-20 03:05:00.013' AND gDiagnosticId IN ('3c99d61b-21a1-42ea-92a8-3cc88d79f3f1') AND  ivehicleid=8149 ORDER BY dtdatetime DESC LIMIT 1                                                                                              QUERY PLAN                                                                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=0.71..85.13 rows=1 width=84) (actual time=300.820..300.821 rows=1 loops=1)   Buffers: shared hit=17665 read=1   ->  Index Scan Backward using ix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata (cost=0.71..41960.39 rows=497 width=84) (actual time=300.808..300.809 rows=1 loops=1)         Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-01-20 03:05:00.013'::timestamp without time zone) AND (int_otherid2 = '3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'::uuid))         Buffers: shared hit=17665 read=1 Planning time: 58.769 ms Execution time: 300.895 ms(7 rows) I still haven't been able to explain why this changed all of a sudden (I am working on reproducing this error in a test environment) but this could be a good workaround.  I might be able to just make 6 calls or maybe rewrite the original query some other way in order to get it to use all 3 keys of the index.  I'll have to do some more testing Thanks again.", "msg_date": "Mon, 2 Mar 2020 17:31:02 -0500", "msg_from": "Ben Snaidero <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "On Sat, Feb 29, 2020 at 11:22 AM Jeff Janes <[email protected]> wrote:\n\n> On Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <[email protected]>\n> wrote:\n>\n>\n>> I have the following query that was on average running in ~2ms suddenly\n>> jump up to on average ~25ms.\n>>\n>\n> What are you averaging over? The plan you show us is slow enough that if\n> you were averaging over the last 1000 executions, that one execution could\n> skew the entire average just by itself. When individual execution times\n> can vary over 4 powers of 10, I don't think averages are a very good way of\n> analyzing things.\n>\n>\n>\n>> This query is called millions of time per day and there were cases of the\n>> query taking 20-30 seconds. Below is the explain analyze of one such\n>> example.\n>> When seeing this issue, the server was under some CPU pressure but even\n>> with that, I would not think it should get as slow as shown below as we are\n>> using SSDs and none of the windows disk counters (IOPS, queue length) show\n>> any value that would be of concern.\n>>\n>\n> What is the average and expected random read latency on your SSDs? Have\n> you benchmarked them (outside of the database system) to see if they are\n> performing as expected?\n>\n>\n>> Rows Removed by Filter: 91686\n>> Buffers: shared hit=12102 read=13259 written=111\n>>\n>\n> Do the faster executions have fewer rows removed by the filter (and fewer\n> buffers read), or are they just faster despite having about the same values?\n>\n>\n>\n>> We took a perfview during the issue and below is the call stack from a\n>> process running this query, two call paths are shown.\n>>\n>\n> I've never used perfview. 
But if I try to naively interpret it similar to\n> gdb backtrace, it doesn't make much sense to me. InitBufTable is only\n> called by \"postmaster\" while starting the database, how could it be part of\n> call paths during regular operations? Are these views of the slow-running\n> back end itself, or of some other postgresql process which was idle at the\n> time the snapshot was taken?\n>\n> Cheers,\n>\n> Jeff\n>\n\nQuery statistics were averaged over ~3million calls so I don't think a\nsingle outlier would skew the results too much.\n\nThe perfview call stack is similar to gdb backtrace. I am 99% sure that\nthis call path is from the backend running this query as we queried\npg_stat_activity at the time of the perfview and cross-referenced the\nPIDs. That said I am going to try building on windows with debug symbols\nenabled and see if I can use gdb to debug and confirm.\n\nOn Sat, Feb 29, 2020 at 11:22 AM Jeff Janes <[email protected]> wrote:On Thu, Feb 27, 2020 at 11:33 AM Ben Snaidero <[email protected]> wrote: I have the following query that was on average running in ~2ms suddenly jump up to on average ~25ms.  What are you averaging over?  The plan you show us is slow enough that if you were averaging over the last 1000 executions, that one execution could skew the entire average just by itself.  When individual execution times can vary over 4 powers of 10, I don't think averages are a very good way of analyzing things. This query is called millions of time per day and there were cases of the query taking 20-30 seconds.  Below is the explain analyze of one such example.When seeing this issue, the server was under some CPU pressure but even with that, I would not think it should get as slow as shown below as we are using SSDs and none of the windows disk counters (IOPS, queue length) show any value that would be of concern.What is the average and expected random read latency on your SSDs?  Have you benchmarked them (outside of the database system) to see if they are performing as expected?              Rows Removed by Filter: 91686             Buffers: shared hit=12102 read=13259 written=111Do the faster executions have fewer rows removed by the filter (and fewer buffers read), or are they just faster despite having about the same values? We took a perfview during the issue and below is the call stack from a process running this query, two call paths are shown.I've never used perfview.  But if I try to naively interpret it similar to gdb backtrace, it doesn't make much sense to me.  InitBufTable is only called by \"postmaster\" while starting the database, how could it be part of call paths during regular operations?  Are these views of the slow-running back end itself, or of some other postgresql process which was idle at the time the snapshot was taken?Cheers,JeffQuery statistics were averaged over ~3million calls so I don't think a single outlier would skew the results too much.   The perfview call stack is similar to gdb backtrace.  I am 99% sure that this call path is from the backend running this query as we queried pg_stat_activity at the time of the perfview and cross-referenced the PIDs.  That said I am going to try building on windows with debug symbols enabled and see if I can use gdb to debug and confirm.", "msg_date": "Mon, 2 Mar 2020 17:39:37 -0500", "msg_from": "Ben Snaidero <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many DataFileRead - IO waits" }, { "msg_contents": "> Thanks for the suggestion. 
Yes I could change the sql and when using only\n> one filter for int_otherid2 it does use all 3 columns as the index key.\n\nexplain (analyze,buffers) SELECT\nuuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowver\nFROM tabledata WHERE dtdatetime <= '2020-01-20 03:05:00.013' AND\ngDiagnosticId IN ('3c99d61b-21a1-42ea-92a8-3cc88d79f3f1') AND\n ivehicleid=8149 ORDER BY dtdatetime DESC LIMIT 1\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.71..85.13 rows=1 width=84) (actual time=300.820..300.821\nrows=1 loops=1)\n Buffers: shared hit=17665 read=1\n -> Index Scan Backward using\nix_tabledata_intid_timestampdate_intotherid3_intotherid2 on tabledata\n(cost=0.71..41960.39 rows=497 width=84) (actual time=300.808..300.809\nrows=1 loops=1)\n Index Cond: ((int_id = 8149) AND (timestamp_date <= '2020-01-20\n03:05:00.013'::timestamp without time zone) AND (int_otherid2 =\n'3c99d61b-21a1-42ea-92a8-3cc88d79f3f1'::uuid))\n Buffers: shared hit=17665 read=1\n Planning time: 58.769 ms\n Execution time: 300.895 ms\n(7 rows)\n\n> I still haven't been able to explain why this changed all of a sudden (I\n> am\n> working on reproducing this error in a test environment) but this could be\n> a good workaround. I might be able to just make 6 calls or maybe rewrite\n> the original query some other way in order to get it to use all 3 keys of\n> the index. I'll have to do some more testing\n\nParsing of 58 ms and 300 ms for 17665 memory blocks read is very very bad\n...\nAre those shared buffers in memory or SWAPPED ?\nIs the server CPU bounded or limited ?\n\nMay be you should dump some data for a test case on an other platform \n(any desktop) to get a comparison point\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Mon, 2 Mar 2020 16:09:53 -0700 (MST)", "msg_from": "legrand legrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many DataFileRead - IO waits" } ]
[ { "msg_contents": "Hey,\nI upgraded from 96 to 12 in our test env and I'm seeing that for queries\nthat involve join operation between a partition table and other tables\nthere is degradation is performance compared to pg96 performance.\n\nMy machine : 8cpu,16gb,regular hd,linux redhat 6\npg settings :\nmax_wal_size = 2GB\nmin_wal_size = 1GB\nwal_buffers = 16MB\ncheckpoint_completion_target = 0.9\ncheckpoint_timeout = 30min\nlog_checkpoints = on\nlog_lock_waits = on\nlog_temp_files = 1024\nlog_min_duration_statement = 10000\nlog_autovacuum_min_duration = 10000\nstandard_conforming_strings = off\nmax_locks_per_transaction = 5000\nmax_connections = 500\nlog_line_prefix = '%t %d %p '\nrandom_page_cost = 4\ndeadlock_timeout = 5s\nshared_preload_libraries = 'pg_stat_statements'\ntrack_activity_query_size = 32764\nlog_directory = 'pg_log'\nenable_partitionwise_join = on # for pg12\nenable_partitionwise_aggregate = on # for pg12\nlisten_addresses = '*'\nssl = on\nmaintenance_work_mem = 333MB\nwork_mem = 16MB\nshared_buffers = 4020MB\neffective_cache_size = 8040MB\n\npostgresql12.2\n\nI used this table as the joined table for both cases :\n create table iot_device(id serial primary key,name text);\ninsert into iot_device(name) select generate_series(1,100)||'a';\n\nIn pg96 I created the following regular table :\ncreate table iot_data(id serial primary key,data text,metadata\nbigint,device bigint references iot_device(id));\n\ninserted the data :\n insert into iot_data select\ngenerate_series(1,10000000),random()*10,random()*254,random()*99+1;\n\nIn pg12 I created a table with 3 hash partitiones :\ncreate table iot_data(id serial ,data text,metadata bigint,device bigint\nreferences iot_device(id),primary key(id,device)) partition by hash(device);\ncreate table iot_data_0 partition of iot_data for values with (MODULUS 3,\nremainder 0);\ncreate table iot_data_1 partition of iot_data for values with (MODULUS 3,\nremainder 1);\ncreate table iot_data_2 partition of iot_data for values with (MODULUS 3,\nremainder 2);\n\n\nI generated a dump of the data in the pg96 machine and inserted it into the\npg12 db :\npg_dump -d postgres -U postgres -a -t iot_data > iot_data.dump\npsql -d postgres -U postgres -f -h pg12_machine /tmp/iot_data.dump\n\npostgres=# select count(*) from iot_data_0;\n count\n---------\n 3028682\n(1 row)\n\npostgres =# select count(*) from iot_data_1;\n count\n---------\n 3234335\n(1 row)\n\npostgres =# select count(*) from iot_data_2;\n count\n---------\n 3736983\n(1 row)\n\ncreate index on iot_data(metadata,lower(data));\nvacuum analyze iot_data;\n\nand now for the performance:\nquery : explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n\nPG12 :\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=5.16..773.61 rows=2 width=43) (actual time=2.858..2.858\nrows=0 loops=1)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\ntime=0.014..0.020 rows=1 loops=1)\n Filter: (name = '50a'::text)\n Rows Removed by Filter: 99\n -> Append (cost=5.16..771.30 rows=6 width=36) (actual\ntime=2.835..2.835 rows=0 loops=1)\n -> Bitmap Heap Scan on iot_data_0 da (cost=5.16..233.78 rows=2\nwidth=36) (actual time=2.829..2.829 rows=0 loops=1)\n Recheck Cond: (metadata = 50)\n Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n -> Bitmap Index Scan on 
iot_data_0_metadata_lower_idx\n (cost=0.00..5.14 rows=59 width=0) (actual time=2.827..2.827 rows=0 loops=1)\n Index Cond: ((metadata = 50) AND (lower(data) =\n'50'::text))\n -> Bitmap Heap Scan on iot_data_1 da_1 (cost=5.20..249.32 rows=2\nwidth=37) (never executed)\n Recheck Cond: (metadata = 50)\n Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n -> Bitmap Index Scan on iot_data_1_metadata_lower_idx\n (cost=0.00..5.18 rows=63 width=0) (never executed)\n Index Cond: ((metadata = 50) AND (lower(data) =\n'50'::text))\n -> Bitmap Heap Scan on iot_data_2 da_2 (cost=5.30..288.16 rows=2\nwidth=36) (never executed)\n Recheck Cond: (metadata = 50)\n Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n -> Bitmap Index Scan on iot_data_2_metadata_lower_idx\n (cost=0.00..5.29 rows=73 width=0) (never executed)\n Index Cond: ((metadata = 50) AND (lower(data) =\n'50'::text))\n Planning Time: 8.157 ms\n Execution Time: 2.920 ms\n(22 rows)\n\n\nPG96 :\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=6.57..397.19 rows=2 width=44) (actual time=0.121..0.121\nrows=0 loops=1)\n Join Filter: (da.device = de.id)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\ntime=0.016..0.022 rows=1 loops=1)\n Filter: (name = '50a'::text)\n Rows Removed by Filter: 99\n -> Bitmap Heap Scan on iot_data da (cost=6.57..392.49 rows=196\nwidth=37) (actual time=0.097..0.097 rows=0 loops=1)\n Recheck Cond: (metadata = 50)\n Filter: (lower(data) ~~ '50'::text)\n -> Bitmap Index Scan on iot_data_metadata_lower_idx\n (cost=0.00..6.52 rows=196 width=0) (actual time=0.095..0.095 rows=0\nloops=1)\n Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))\n Planning time: 0.815 ms\n Execution time: 0.158 ms\n(12 rows)\n\n\nAs you can see, the results are better in pg96. This example only shows the\nresults for a small data set. 
In bigger data sets I get a bigger diff...\n\nI tried changing many postgresql.conf parameters that were added\n(max_workers_per_gather,enable_partitionwise_join and so on..).\nI dont understand why in pg12 it scans all the partitions instead of the\nrelevant one..\n\nI added all the commands to recreate the test, please feel free to share\nany useful notes.\n\nHey,I upgraded from 96 to 12 in our test env and I'm seeing that for queries that involve join operation between a partition table and other tables there is degradation is performance compared to pg96 performance.My machine : 8cpu,16gb,regular hd,linux redhat 6pg settings : max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minlog_checkpoints = onlog_lock_waits = onlog_temp_files = 1024log_min_duration_statement = 10000log_autovacuum_min_duration = 10000standard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500log_line_prefix = '%t %d %p  'random_page_cost = 4deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764log_directory = 'pg_log'enable_partitionwise_join = on # for pg12enable_partitionwise_aggregate = on \n\n # for pg12\n\nlisten_addresses =  '*'ssl =  onmaintenance_work_mem = 333MBwork_mem = 16MBshared_buffers = 4020MBeffective_cache_size = 8040MBpostgresql12.2I used this table as the joined table for both cases :  create table iot_device(id serial primary key,name text);insert into iot_device(name) select generate_series(1,100)||'a';    In pg96 I created the following regular table : create table iot_data(id serial primary key,data text,metadata bigint,device bigint references iot_device(id));inserted the data :   insert into iot_data select generate_series(1,10000000),random()*10,random()*254,random()*99+1;  In pg12 I created  a table with 3 hash partitiones : create table iot_data(id serial ,data text,metadata bigint,device bigint references iot_device(id),primary key(id,device)) partition by hash(device);create table iot_data_0 partition of iot_data for values with (MODULUS 3, remainder 0);create table iot_data_1 partition of iot_data for values with (MODULUS 3, remainder 1);create table iot_data_2 partition of iot_data for values with (MODULUS 3, remainder 2);I generated a dump of the data in the pg96 machine and inserted it into the pg12 db : pg_dump -d postgres -U \n\npostgres \n\n-a -t iot_data > iot_data.dumppsql -d \n\npostgres \n\n-U \n\npostgres \n\n-f  -h pg12_machine /tmp/iot_data.dumppostgres=# select count(*) from iot_data_0;  count--------- 3028682(1 row)\n\npostgres\n\n=# select count(*) from iot_data_1;  count--------- 3234335(1 row)\n\npostgres\n\n=# select count(*) from iot_data_2;  count--------- 3736983(1 row)create index on iot_data(metadata,lower(data));vacuum analyze iot_data;and now for the performance:query : \n\nexplain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';\n\n PG12 :                                                                     QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=5.16..773.61 rows=2 width=43) (actual time=2.858..2.858 rows=0 loops=1)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.014..0.020 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   -> 
 Append  (cost=5.16..771.30 rows=6 width=36) (actual time=2.835..2.835 rows=0 loops=1)         ->  Bitmap Heap Scan on iot_data_0 da  (cost=5.16..233.78 rows=2 width=36) (actual time=2.829..2.829 rows=0 loops=1)               Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_0_metadata_lower_idx  (cost=0.00..5.14 rows=59 width=0) (actual time=2.827..2.827 rows=0 loops=1)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))         ->  Bitmap Heap Scan on iot_data_1 da_1  (cost=5.20..249.32 rows=2 width=37) (never executed)               Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_1_metadata_lower_idx  (cost=0.00..5.18 rows=63 width=0) (never executed)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))         ->  Bitmap Heap Scan on iot_data_2 da_2  (cost=5.30..288.16 rows=2 width=36) (never executed)               Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_2_metadata_lower_idx  (cost=0.00..5.29 rows=73 width=0) (never executed)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text)) Planning Time: 8.157 ms Execution Time: 2.920 ms(22 rows)PG96 :                                                                 QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=6.57..397.19 rows=2 width=44) (actual time=0.121..0.121 rows=0 loops=1)   Join Filter: (da.device = de.id)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.016..0.022 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   ->  Bitmap Heap Scan on iot_data da  (cost=6.57..392.49 rows=196 width=37) (actual time=0.097..0.097 rows=0 loops=1)         Recheck Cond: (metadata = 50)         Filter: (lower(data) ~~ '50'::text)         ->  Bitmap Index Scan on iot_data_metadata_lower_idx  (cost=0.00..6.52 rows=196 width=0) (actual time=0.095..0.095 rows=0 loops=1)               Index Cond: ((metadata = 50) AND (lower(data) = '50'::text)) Planning time: 0.815 ms Execution time: 0.158 ms(12 rows)As you can see, the results are better in pg96. This example only shows the results for a small data set. In bigger data sets I get a bigger diff...I tried changing many postgresql.conf parameters that were added (max_workers_per_gather,enable_partitionwise_join and so on..).I dont understand why in pg12 it scans all the partitions instead of the relevant one..I added all the commands to recreate the test, please feel free to share any useful notes.", "msg_date": "Sun, 8 Mar 2020 18:05:26 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "pg12 partitions show bad performance vs pg96" }, { "msg_contents": "I realized that the planner goes to the right partition because \"(never\nexecuted)\" is mentioned near the scan of the other partitions. 
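A quick way to see the two kinds of pruning side by side (a sketch, reusing the tables above):

EXPLAIN SELECT * FROM iot_data WHERE device = 51;
-- partition key compared to a constant: pruning happens at plan time,
-- only the matching partition appears in the plan at all

EXPLAIN ANALYZE SELECT * FROM iot_data da JOIN iot_device de ON de.id = da.device WHERE de.name = '50a';
-- key only known from the join: all partitions stay in the plan and the
-- non-matching ones are skipped at run time, showing up as "(never executed)"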
However,\nstill i'm not sure why performance is better in pg96.\n\n‫בתאריך יום א׳, 8 במרץ 2020 ב-18:05 מאת ‪Mariel Cherkassky‬‏ <‪\[email protected]‬‏>:‬\n\n> Hey,\n> I upgraded from 96 to 12 in our test env and I'm seeing that for queries\n> that involve join operation between a partition table and other tables\n> there is degradation is performance compared to pg96 performance.\n>\n> My machine : 8cpu,16gb,regular hd,linux redhat 6\n> pg settings :\n> max_wal_size = 2GB\n> min_wal_size = 1GB\n> wal_buffers = 16MB\n> checkpoint_completion_target = 0.9\n> checkpoint_timeout = 30min\n> log_checkpoints = on\n> log_lock_waits = on\n> log_temp_files = 1024\n> log_min_duration_statement = 10000\n> log_autovacuum_min_duration = 10000\n> standard_conforming_strings = off\n> max_locks_per_transaction = 5000\n> max_connections = 500\n> log_line_prefix = '%t %d %p '\n> random_page_cost = 4\n> deadlock_timeout = 5s\n> shared_preload_libraries = 'pg_stat_statements'\n> track_activity_query_size = 32764\n> log_directory = 'pg_log'\n> enable_partitionwise_join = on # for pg12\n> enable_partitionwise_aggregate = on # for pg12\n> listen_addresses = '*'\n> ssl = on\n> maintenance_work_mem = 333MB\n> work_mem = 16MB\n> shared_buffers = 4020MB\n> effective_cache_size = 8040MB\n>\n> postgresql12.2\n>\n> I used this table as the joined table for both cases :\n> create table iot_device(id serial primary key,name text);\n> insert into iot_device(name) select generate_series(1,100)||'a';\n>\n> In pg96 I created the following regular table :\n> create table iot_data(id serial primary key,data text,metadata\n> bigint,device bigint references iot_device(id));\n>\n> inserted the data :\n> insert into iot_data select\n> generate_series(1,10000000),random()*10,random()*254,random()*99+1;\n>\n> In pg12 I created a table with 3 hash partitiones :\n> create table iot_data(id serial ,data text,metadata bigint,device bigint\n> references iot_device(id),primary key(id,device)) partition by hash(device);\n> create table iot_data_0 partition of iot_data for values with (MODULUS 3,\n> remainder 0);\n> create table iot_data_1 partition of iot_data for values with (MODULUS 3,\n> remainder 1);\n> create table iot_data_2 partition of iot_data for values with (MODULUS 3,\n> remainder 2);\n>\n>\n> I generated a dump of the data in the pg96 machine and inserted it into\n> the pg12 db :\n> pg_dump -d postgres -U postgres -a -t iot_data > iot_data.dump\n> psql -d postgres -U postgres -f -h pg12_machine /tmp/iot_data.dump\n>\n> postgres=# select count(*) from iot_data_0;\n> count\n> ---------\n> 3028682\n> (1 row)\n>\n> postgres =# select count(*) from iot_data_1;\n> count\n> ---------\n> 3234335\n> (1 row)\n>\n> postgres =# select count(*) from iot_data_2;\n> count\n> ---------\n> 3736983\n> (1 row)\n>\n> create index on iot_data(metadata,lower(data));\n> vacuum analyze iot_data;\n>\n> and now for the performance:\n> query : explain analyze select * from iot_data da,iot_device de where\n> de.name in ('50a') and de.id = da.device and da.metadata=50 and\n> lower(da.data) like '50';\n>\n> PG12 :\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=5.16..773.61 rows=2 width=43) (actual\n> time=2.858..2.858 rows=0 loops=1)\n> -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\n> time=0.014..0.020 rows=1 loops=1)\n> Filter: (name = '50a'::text)\n> Rows Removed by Filter: 99\n> -> Append 
(cost=5.16..771.30 rows=6 width=36) (actual\n> time=2.835..2.835 rows=0 loops=1)\n> -> Bitmap Heap Scan on iot_data_0 da (cost=5.16..233.78 rows=2\n> width=36) (actual time=2.829..2.829 rows=0 loops=1)\n> Recheck Cond: (metadata = 50)\n> Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n> -> Bitmap Index Scan on iot_data_0_metadata_lower_idx\n> (cost=0.00..5.14 rows=59 width=0) (actual time=2.827..2.827 rows=0 loops=1)\n> Index Cond: ((metadata = 50) AND (lower(data) =\n> '50'::text))\n> -> Bitmap Heap Scan on iot_data_1 da_1 (cost=5.20..249.32\n> rows=2 width=37) (never executed)\n> Recheck Cond: (metadata = 50)\n> Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n> -> Bitmap Index Scan on iot_data_1_metadata_lower_idx\n> (cost=0.00..5.18 rows=63 width=0) (never executed)\n> Index Cond: ((metadata = 50) AND (lower(data) =\n> '50'::text))\n> -> Bitmap Heap Scan on iot_data_2 da_2 (cost=5.30..288.16\n> rows=2 width=36) (never executed)\n> Recheck Cond: (metadata = 50)\n> Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))\n> -> Bitmap Index Scan on iot_data_2_metadata_lower_idx\n> (cost=0.00..5.29 rows=73 width=0) (never executed)\n> Index Cond: ((metadata = 50) AND (lower(data) =\n> '50'::text))\n> Planning Time: 8.157 ms\n> Execution Time: 2.920 ms\n> (22 rows)\n>\n>\n> PG96 :\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=6.57..397.19 rows=2 width=44) (actual\n> time=0.121..0.121 rows=0 loops=1)\n> Join Filter: (da.device = de.id)\n> -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\n> time=0.016..0.022 rows=1 loops=1)\n> Filter: (name = '50a'::text)\n> Rows Removed by Filter: 99\n> -> Bitmap Heap Scan on iot_data da (cost=6.57..392.49 rows=196\n> width=37) (actual time=0.097..0.097 rows=0 loops=1)\n> Recheck Cond: (metadata = 50)\n> Filter: (lower(data) ~~ '50'::text)\n> -> Bitmap Index Scan on iot_data_metadata_lower_idx\n> (cost=0.00..6.52 rows=196 width=0) (actual time=0.095..0.095 rows=0\n> loops=1)\n> Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))\n> Planning time: 0.815 ms\n> Execution time: 0.158 ms\n> (12 rows)\n>\n>\n> As you can see, the results are better in pg96. This example only shows\n> the results for a small data set. In bigger data sets I get a bigger diff...\n>\n> I tried changing many postgresql.conf parameters that were added\n> (max_workers_per_gather,enable_partitionwise_join and so on..).\n> I dont understand why in pg12 it scans all the partitions instead of the\n> relevant one..\n>\n> I added all the commands to recreate the test, please feel free to share\n> any useful notes.\n>\n\nI realized that the planner goes to the right partition because \"(never executed)\" is mentioned near the scan of the other partitions. 
However, still i'm not sure why performance is better in pg96.‫בתאריך יום א׳, 8 במרץ 2020 ב-18:05 מאת ‪Mariel Cherkassky‬‏ <‪[email protected]‬‏>:‬Hey,I upgraded from 96 to 12 in our test env and I'm seeing that for queries that involve join operation between a partition table and other tables there is degradation is performance compared to pg96 performance.My machine : 8cpu,16gb,regular hd,linux redhat 6pg settings : max_wal_size = 2GBmin_wal_size = 1GBwal_buffers = 16MBcheckpoint_completion_target = 0.9checkpoint_timeout = 30minlog_checkpoints = onlog_lock_waits = onlog_temp_files = 1024log_min_duration_statement = 10000log_autovacuum_min_duration = 10000standard_conforming_strings = offmax_locks_per_transaction = 5000max_connections = 500log_line_prefix = '%t %d %p  'random_page_cost = 4deadlock_timeout = 5sshared_preload_libraries = 'pg_stat_statements'track_activity_query_size = 32764log_directory = 'pg_log'enable_partitionwise_join = on # for pg12enable_partitionwise_aggregate = on \n\n # for pg12\n\nlisten_addresses =  '*'ssl =  onmaintenance_work_mem = 333MBwork_mem = 16MBshared_buffers = 4020MBeffective_cache_size = 8040MBpostgresql12.2I used this table as the joined table for both cases :  create table iot_device(id serial primary key,name text);insert into iot_device(name) select generate_series(1,100)||'a';    In pg96 I created the following regular table : create table iot_data(id serial primary key,data text,metadata bigint,device bigint references iot_device(id));inserted the data :   insert into iot_data select generate_series(1,10000000),random()*10,random()*254,random()*99+1;  In pg12 I created  a table with 3 hash partitiones : create table iot_data(id serial ,data text,metadata bigint,device bigint references iot_device(id),primary key(id,device)) partition by hash(device);create table iot_data_0 partition of iot_data for values with (MODULUS 3, remainder 0);create table iot_data_1 partition of iot_data for values with (MODULUS 3, remainder 1);create table iot_data_2 partition of iot_data for values with (MODULUS 3, remainder 2);I generated a dump of the data in the pg96 machine and inserted it into the pg12 db : pg_dump -d postgres -U \n\npostgres \n\n-a -t iot_data > iot_data.dumppsql -d \n\npostgres \n\n-U \n\npostgres \n\n-f  -h pg12_machine /tmp/iot_data.dumppostgres=# select count(*) from iot_data_0;  count--------- 3028682(1 row)\n\npostgres\n\n=# select count(*) from iot_data_1;  count--------- 3234335(1 row)\n\npostgres\n\n=# select count(*) from iot_data_2;  count--------- 3736983(1 row)create index on iot_data(metadata,lower(data));vacuum analyze iot_data;and now for the performance:query : \n\nexplain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';\n\n PG12 :                                                                     QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=5.16..773.61 rows=2 width=43) (actual time=2.858..2.858 rows=0 loops=1)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.014..0.020 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   ->  Append  (cost=5.16..771.30 rows=6 width=36) (actual time=2.835..2.835 rows=0 loops=1)         ->  Bitmap Heap Scan on iot_data_0 da  (cost=5.16..233.78 rows=2 width=36) (actual time=2.829..2.829 rows=0 loops=1)      
         Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_0_metadata_lower_idx  (cost=0.00..5.14 rows=59 width=0) (actual time=2.827..2.827 rows=0 loops=1)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))         ->  Bitmap Heap Scan on iot_data_1 da_1  (cost=5.20..249.32 rows=2 width=37) (never executed)               Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_1_metadata_lower_idx  (cost=0.00..5.18 rows=63 width=0) (never executed)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))         ->  Bitmap Heap Scan on iot_data_2 da_2  (cost=5.30..288.16 rows=2 width=36) (never executed)               Recheck Cond: (metadata = 50)               Filter: ((de.id = device) AND (lower(data) ~~ '50'::text))               ->  Bitmap Index Scan on iot_data_2_metadata_lower_idx  (cost=0.00..5.29 rows=73 width=0) (never executed)                     Index Cond: ((metadata = 50) AND (lower(data) = '50'::text)) Planning Time: 8.157 ms Execution Time: 2.920 ms(22 rows)PG96 :                                                                 QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=6.57..397.19 rows=2 width=44) (actual time=0.121..0.121 rows=0 loops=1)   Join Filter: (da.device = de.id)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.016..0.022 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   ->  Bitmap Heap Scan on iot_data da  (cost=6.57..392.49 rows=196 width=37) (actual time=0.097..0.097 rows=0 loops=1)         Recheck Cond: (metadata = 50)         Filter: (lower(data) ~~ '50'::text)         ->  Bitmap Index Scan on iot_data_metadata_lower_idx  (cost=0.00..6.52 rows=196 width=0) (actual time=0.095..0.095 rows=0 loops=1)               Index Cond: ((metadata = 50) AND (lower(data) = '50'::text)) Planning time: 0.815 ms Execution time: 0.158 ms(12 rows)As you can see, the results are better in pg96. This example only shows the results for a small data set. In bigger data sets I get a bigger diff...I tried changing many postgresql.conf parameters that were added (max_workers_per_gather,enable_partitionwise_join and so on..).I dont understand why in pg12 it scans all the partitions instead of the relevant one..I added all the commands to recreate the test, please feel free to share any useful notes.", "msg_date": "Sun, 8 Mar 2020 18:14:37 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "On Mon, 9 Mar 2020 at 05:05, Mariel Cherkassky\n<[email protected]> wrote:\n> PG12 :\n> Planning Time: 8.157 ms\n> Execution Time: 2.920 ms\n> (22 rows)\n>\n>\n> PG96 :\n> Planning time: 0.815 ms\n> Execution time: 0.158 ms\n> (12 rows)\n\n8 ms seems pretty slow to planning that query. Does the planning time\ndrop if you execute this multiple times in the same session? Does the\ntime change if you try again without any foreign keys?\n\nThe planning time for the partitioned case is certainly going to take\nlonger. Partitioned tables are never going to improve the times of\nquery planning. 
It's only possible that they'll improve the\nperformance during query execution.\n\nFor such small fast queries as the ones you've shown, it's important\nto remember that more complex query plans (ones with more nodes) do\nlead to longer times for executor startup and shutdown. EXPLAIN\n(without ANALYZE), will perform query planning and executor\nstartup/shutdown. If you enable \\timing on in psql and test the\nEXPLAIN performance of these queries in each version, then you might\nget an idea of where the overheads are.\n\nAdditionally, you're unlikely to see performance improvements with\ntable partitioning unless you're accessing many rows and partitioning\nallows the data locality of the rows that you are accessing to\nimprove. i.e accesses fewer buffers and/or improves cache hit ratios.\nIn PG12, if the partition pruning can be done during query planning\nthen the planning and executor startup overhead is much lower since\nthere are fewer relations to generate access paths for and fewer nodes\nin the final plan. This also improves the situation during execution\nas it means fewer locks to take and fewer nodes to startup/shutdown.\n\n> As you can see, the results are better in pg96. This example only shows the results for a small data set. In bigger data sets I get a bigger diff...\n\nCan you share the results of that?\n\n> I tried changing many postgresql.conf parameters that were added (max_workers_per_gather,enable_partitionwise_join and so on..).\n\nThe former only does anything for parallel queries. None of the plans\nyou've shown are parallel ones. The latter also does not count in\nthis case. It only counts when joining two identically partitioned\ntables.\n\n> I dont understand why in pg12 it scans all the partitions instead of the relevant one..\n\nIf you'd specified a specific \"device\" in the query SQL, then the\nquery planner would know which partition to scan for that particular\ndevice. However, since you're looking up the device in another table\nand performing a join, the device is only known during query\nexecution. The plan nodes for the non-matching partitions do go\nthrough executor startup, but they're not scanned during execution, as\nyou've seen with the \"(never executed)\" appearing in the EXPLAIN\nANALYZE output. Since executor startup occurs before execution, the\ndevice you mean is still unknown during executor startup, so the\nexecutor must startup the nodes for all partitions that are in the\nplan. Starting up a plan node is not free, but not really very\nexpensive either. However, the overhead of it might be quite large\nproportionally in your case since the executor is doing so little\nwork.\n\nThe most basic guidelines for table partitioning are, don't partition\nyour tables unless it's a net win. If partitioning was always\nfaster, we'd just have designed Postgres to implicitly partition all\nof your tables for you. 
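To put numbers on the overhead mentioned above, something as simple as this in psql, run a few times in the same session on each server, shows just the planning plus executor startup/shutdown cost without the execution cost (a sketch using the query from the first mail):

\timing on
EXPLAIN SELECT *
FROM iot_data da, iot_device de
WHERE de.name IN ('50a') AND de.id = da.device
  AND da.metadata = 50 AND lower(da.data) LIKE '50';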
There are some other guidelines in [1].\n\n[1] https://www.postgresql.org/docs/12/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES\n\nDavid\n\n\n", "msg_date": "Mon, 9 Mar 2020 15:47:16 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "On Sun, Mar 08, 2020 at 06:05:26PM +0200, Mariel Cherkassky wrote:\n> In pg12 I created a table with 3 hash partitiones :\n> create table iot_data(id serial ,data text,metadata bigint,device bigint\n> references iot_device(id),primary key(id,device)) partition by hash(device);\n\n> and now for the performance:\n> query : explain analyze select * from iot_data da,iot_device de where\n> de.name in ('50a') and de.id = da.device and da.metadata=50 and\n> lower(da.data) like '50';\n\n> I dont understand why in pg12 it scans all the partitions instead of the\n> relevant one..\n\nAs you noticed, it doesn't actually scan them. I believe v11 \"partition\nelimination during query execution\" is coming into play here. There's no\noption to disable that, but as a quick test, you could possibly try under PG10\n(note, that doesn't support inherited indexes). Or you could try to profile\nunder PG12 (and consider comparing with pg13dev).\n\nYou partitioned on hash(iot_data.device), but your query doesn't specify\ndevice, except that da.device=de.id AND de.name IN ('50'). If that's a typical\nquery, maybe it'd work better to partition on metadata or lower(name) (or\npossibly both).\n\nOn Sun, Mar 08, 2020 at 06:05:26PM +0200, Mariel Cherkassky wrote:\n> PG12 :\n> Nested Loop (cost=5.16..773.61 rows=2 width=43) (actual time=2.858..2.858\n> rows=0 loops=1)\n...\n> -> Bitmap Heap Scan on iot_data_1 da_1 (cost=5.20..249.32 rows=2\n> width=37) (NEVER EXECUTED)\n...\n> Planning Time: 8.157 ms\n> Execution Time: 2.920 ms\n\n> PG96 :\n> Nested Loop (cost=6.57..397.19 rows=2 width=44) (actual time=0.121..0.121\n> rows=0 loops=1)\n...\n> Planning time: 0.815 ms\n> Execution time: 0.158 ms\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 8 Mar 2020 22:10:05 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "*8 ms seems pretty slow to planning that query. Does the planning timedrop\nif you execute this multiple times in the same session? Does thetime change\nif you try again without any foreign keys? *\n\nNo one is using the system besides me, therefore after running the query\none time\nmost of the data is in the cache... If I run it multiple times the query\ntime is reduced :\n Planning Time: 0.361 ms\n Execution Time: 0.110 ms\n\n\n*Can you share the results of that?*\nSure. I did the same procedure but this time I inserted 100m records\ninstead of 10m. 
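(i.e. the same insert statement as before, just with a larger series:
insert into iot_data select generate_series(1,100000000),random()*10,random()*254,random()*99+1;)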
This time the results were by far worse in pg12 :\n\nPG12 :\npostgres=# explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1002.26..1563512.35 rows=10 width=44) (actual\ntime=95161.056..95218.764 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Hash Join (cost=2.26..1562511.35 rows=4 width=44) (actual\ntime=94765.860..94765.861 rows=0 loops=3)\n Hash Cond: (da_2.device = de.id)\n -> Parallel Append (cost=0.00..1562506.90 rows=814 width=37)\n(actual time=94765.120..94765.120 rows=0 loops=3)\n -> Parallel Seq Scan on iot_data_2 da_2\n (cost=0.00..584064.14 rows=305 width=37) (actual time=36638.829..36638.829\nrows=0 loops=3)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 12460009\n -> Parallel Seq Scan on iot_data_1 da_1\n (cost=0.00..504948.69 rows=262 width=36) (actual time=43990.427..43990.427\nrows=0 loops=2)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 16158316\n -> Parallel Seq Scan on iot_data_0 da\n (cost=0.00..473490.00 rows=247 width=37) (actual time=86396.665..86396.665\nrows=0 loops=1)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 30303339\n -> Hash (cost=2.25..2.25 rows=1 width=7) (never executed)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1\nwidth=7) (never executed)\n Filter: (name = '50a'::text)\n Planning Time: 45.724 ms\n Execution Time: 95252.712 ms\n(20 rows)\n\nPG96 :\npostgres=# explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2583361.51 rows=20 width=44) (actual\ntime=18345.229..18345.229 rows=0 loops=1)\n Join Filter: (da.device = de.id)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\ntime=0.022..0.037 rows=1 loops=1)\n Filter: (name = '50a'::text)\n Rows Removed by Filter: 99\n -> Seq Scan on iot_data da (cost=0.00..2583334.84 rows=1954 width=37)\n(actual time=18345.187..18345.187 rows=0 loops=1)\n Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))\n Rows Removed by Filter: 100000000\n Planning time: 35.450 ms\n Execution time: 18345.301 ms\n(10 rows)\n\n\n\n\n\n*The most basic guidelines for table partitioning are, don't partitionyour\ntables unless it's a net win. If partitioning was alwaysfaster, we'd just\nhave designed Postgres to implicitly partition allof your tables for you.\nThere are some other guidelines in [1].*\n\nIsnt creating partition should increase the execution time ? I mean,\ninstead of running on a table with 10m records, I can run over a partition\nwith 3m records. isnt less data means better performance for simple queries\nlike the one I used ?\nI read the best practice for the docs, and I think that I met most of them\n- I chose the right partition key(in this case it was device),\nRegarding the amount of partitions - I choose 3 just to test the results. I\ndidnt create a lot of partitions, and my logic tells me that querying a\ntable with 3m records should be faster than 10m records.. 
Am I missing\nsomething ?\n\n8 ms seems pretty slow to planning that query. Does the planning timedrop if you execute this multiple times in the same session? Does thetime change if you try again without any foreign keys? No one is using the system besides me, therefore after running the query one timemost of the data is in the cache... If I run it multiple times the query time is reduced :  Planning Time: 0.361 ms Execution Time: 0.110 msCan you share the results of that?Sure. I did the same procedure but this time I inserted 100m records instead of 10m. This time the results were by far worse in pg12 : PG12 : postgres=# explain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';                                                                     QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------- Gather  (cost=1002.26..1563512.35 rows=10 width=44) (actual time=95161.056..95218.764 rows=0 loops=1)   Workers Planned: 2   Workers Launched: 2   ->  Hash Join  (cost=2.26..1562511.35 rows=4 width=44) (actual time=94765.860..94765.861 rows=0 loops=3)         Hash Cond: (da_2.device = de.id)         ->  Parallel Append  (cost=0.00..1562506.90 rows=814 width=37) (actual time=94765.120..94765.120 rows=0 loops=3)               ->  Parallel Seq Scan on iot_data_2 da_2  (cost=0.00..584064.14 rows=305 width=37) (actual time=36638.829..36638.829 rows=0 loops=3)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 12460009               ->  Parallel Seq Scan on iot_data_1 da_1  (cost=0.00..504948.69 rows=262 width=36) (actual time=43990.427..43990.427 rows=0 loops=2)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 16158316               ->  Parallel Seq Scan on iot_data_0 da  (cost=0.00..473490.00 rows=247 width=37) (actual time=86396.665..86396.665 rows=0 loops=1)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 30303339         ->  Hash  (cost=2.25..2.25 rows=1 width=7) (never executed)               ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (never executed)                     Filter: (name = '50a'::text) Planning Time: 45.724 ms Execution Time: 95252.712 ms(20 rows)PG96 : postgres=# explain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';                                                         QUERY PLAN----------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..2583361.51 rows=20 width=44) (actual time=18345.229..18345.229 rows=0 loops=1)   Join Filter: (da.device = de.id)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.022..0.037 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   ->  Seq Scan on iot_data da  (cost=0.00..2583334.84 rows=1954 width=37) (actual time=18345.187..18345.187 rows=0 loops=1)         Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))         Rows Removed by Filter: 100000000 Planning time: 35.450 ms Execution time: 18345.301 ms(10 rows)The most basic guidelines for table partitioning 
are, don't partitionyour tables unless it's a net win.   If partitioning was alwaysfaster, we'd just have designed Postgres to implicitly partition allof your tables for you. There are some other guidelines in [1].Isnt creating partition should increase the execution time ? I mean, instead of running on a table with 10m records, I can run over a partition with 3m records. isnt less data means better performance for simple queries like the one I used ?I read the best practice for the docs, and I think that I met most of them - I chose the right partition key(in this case it was device),Regarding the amount of partitions - I choose 3 just to test the results. I didnt create a lot of partitions, and my logic tells me that querying a table with 3m records should be faster than 10m records.. Am I missing something ?", "msg_date": "Mon, 9 Mar 2020 12:15:50 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": ">\n> I tried to do even something simpler, run the query with only the\n> partition column in the where clause and the results werent good for pg12 :\n>\nPG12 :\npostgres=# explain analyze select * from iot_data where device=51;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..514086.40 rows=1027284 width=37) (actual\ntime=6.777..61558.272 rows=1010315 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on iot_data_0 (cost=0.00..410358.00 rows=428035\nwidth=37) (actual time=1.152..61414.483 rows=336772 loops=3)\n Filter: (device = 51)\n Rows Removed by Filter: 9764341\n Planning Time: 15.720 ms\n Execution Time: 61617.851 ms\n(8 rows)\n\n\n\nPG9.6\npostgres=# explain analyze select * from iot_data where device=51;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Seq Scan on iot_data (cost=0.00..2083334.60 rows=976667 width=37) (actual\ntime=21.922..16753.492 rows=1010315 loops=1)\n Filter: (device = 51)\n Rows Removed by Filter: 98989685\n Planning time: 0.119 ms\n Execution time: 16810.787 ms\n(5 rows)\n\n\n> Besides hardware, anything else worth checking ? 
the machine are\n> identical in aspect of resources.\n>\n\nI tried to do even something simpler, run the query with only the partition column in the where clause and the results werent good for pg12 : PG12 : postgres=# explain analyze select * from iot_data where device=51;                                                              QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------- Gather  (cost=1000.00..514086.40 rows=1027284 width=37) (actual time=6.777..61558.272 rows=1010315 loops=1)   Workers Planned: 2   Workers Launched: 2   ->  Parallel Seq Scan on iot_data_0  (cost=0.00..410358.00 rows=428035 width=37) (actual time=1.152..61414.483 rows=336772 loops=3)         Filter: (device = 51)         Rows Removed by Filter: 9764341 Planning Time: 15.720 ms Execution Time: 61617.851 ms(8 rows)PG9.6postgres=# explain analyze select * from iot_data where device=51;                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------- Seq Scan on iot_data  (cost=0.00..2083334.60 rows=976667 width=37) (actual time=21.922..16753.492 rows=1010315 loops=1)   Filter: (device = 51)   Rows Removed by Filter: 98989685 Planning time: 0.119 ms Execution time: 16810.787 ms(5 rows)  Besides hardware, anything else worth checking ? the machine are identical in aspect of resources.", "msg_date": "Mon, 9 Mar 2020 12:31:15 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "On Mon, Mar 09, 2020 at 12:31:15PM +0200, Mariel Cherkassky wrote:\n> > I tried to do even something simpler, run the query with only the\n> > partition column in the where clause and the results werent good for pg12 :\n>\n> PG12 :\n> postgres=# explain analyze select * from iot_data where device=51;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Gather (cost=1000.00..514086.40 rows=1027284 width=37) (actual time=6.777..61558.272 rows=1010315 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Parallel Seq Scan on iot_data_0 (cost=0.00..410358.00 rows=428035 width=37) (actual time=1.152..61414.483 rows=336772 loops=3)\n\nFor whatever reason, your storage/OS seem to be handling parallel reads poorly.\nI would SET max_parallel_workers_per_gather=0 and retest (but also look into\nimproving the storage).\n\nAlso, it's not required, but I think a typical partitioning schema would have\nan index on the column being partitioned. I see you have an index on\niot_data(metadata,lower(data)), so I still wonder whether you'd have better\nresults partitioned on metadata, or otherwise maybe adding an index on\n\"device\". 
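Something along these lines (untested):

SET max_parallel_workers_per_gather = 0;
EXPLAIN ANALYZE SELECT * FROM iot_data WHERE device = 51;

CREATE INDEX ON iot_data (device);
-- on a partitioned table this cascades, building a matching index on each partition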
But I don't know what your typical queries are.\n\n> PG9.6\n> postgres=# explain analyze select * from iot_data where device=51;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on iot_data (cost=0.00..2083334.60 rows=976667 width=37) (actual time=21.922..16753.492 rows=1010315 loops=1)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 9 Mar 2020 07:12:13 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "OK so I found the problem but other problem appeared.\nI found out that the pg12 machine had some limits on the vm settings in\naspect of cpu and memory. Now both machines are exactly the same in aspect\nof all hardware and dont have any limit.\nCPU - 8\nRAM - 32GB.\n\nI tested it with cold cache :\nservice postgresql stop;\necho 1 > /proc/sys/vm/drop_caches;\nservice postgresql start;\npsql -d postgres -U postgres;\n\nI used two simples queries, one that implicitly comparing the partition key\nwith a const value and another one that joins other table by the partition\ncolumn(and in this query the problem).\n\nThe first query : results are better with pg12 :\nexplain analyze select * from iot_data where device=51;\n\nPG96\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on iot_data (cost=0.00..2083334.60 rows=976667 width=37) (actual\ntime=1.560..67144.164 rows=1010315 loops=1)\n Filter: (device = 51)\n Rows Removed by Filter: 98989685\n Planning time: 9.219 ms\n Execution time: 67,228.431 ms\n(5 rows)\n\n\n\nPG12 - 3 PARTITIONS\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..514086.40 rows=1027284 width=37) (actual\ntime=3.871..15022.118 rows=1010315 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Parallel Seq Scan on iot_data_0 (cost=0.00..410358.00 rows=428035\nwidth=37) (actual time=1.670..14815.480 rows=336772 loops=3)\n Filter: (device = 51)\n Rows Removed by Filter: 9764341\n Planning Time: 27.292 ms\n Execution Time: 15085.317 ms\n(8 rows)\n\nThe second query with pg12 :\nQUERY : explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n\nPG96\n\npostgres=# explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2583361.51 rows=20 width=44) (actual\ntime=44894.312..44894.312 rows=0 loops=1)\n Join Filter: (da.device = de.id)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (actual\ntime=0.018..0.028 rows=1 loops=1)\n Filter: (name = '50a'::text)\n Rows Removed by Filter: 99\n -> Seq Scan on iot_data da (cost=0.00..2583334.84 rows=1954 width=37)\n(actual time=44894.279..44894.279 rows=0 loops=1)\n Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))\n Rows Removed by Filter: 100000000\n Planning time: 11.313 ms\n Execution time: 44894.357 ms\n(10 rows)\n\n\n\nPG12 - 3 PARTITIONS\n\n 
QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1002.26..1563512.35 rows=10 width=44) (actual\ntime=22306.091..22309.209 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Hash Join (cost=2.26..1562511.35 rows=4 width=44) (actual\ntime=22299.412..22299.413 rows=0 loops=3)\n Hash Cond: (da_2.device = de.id)\n -> Parallel Append (cost=0.00..1562506.90 rows=814 width=37)\n(actual time=22299.411..22299.411 rows=0 loops=3)\n -> Parallel Seq Scan on iot_data_2 da_2\n (cost=0.00..584064.14 rows=305 width=37) (actual time=9076.535..9076.535\nrows=0 loops=3)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 12460009\n -> Parallel Seq Scan on iot_data_1 da_1\n (cost=0.00..504948.69 rows=262 width=36) (actual time=10296.751..10296.751\nrows=0 loops=2)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 16158316\n -> Parallel Seq Scan on iot_data_0 da\n (cost=0.00..473490.00 rows=247 width=37) (actual time=19075.081..19075.081\nrows=0 loops=1)\n Filter: ((metadata = 50) AND (lower(data) ~~\n'50'::text))\n Rows Removed by Filter: 30303339\n -> Hash (cost=2.25..2.25 rows=1 width=7) (never executed)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1\nwidth=7) (never executed)\n Filter: (name = '50a'::text)\n Planning Time: 30.429 ms\n Execution Time: 22309.364 ms\n(20 rows)\n\nI tried disabling max_parallel_workers_gathers but It just decreased the\ndb`s performance.\nNow regarding the main issue here - as u can see when I used the second\nquery, I didnt mentioned the partition column specificly but I joined\nanother table based on it( where de.name in ('50a') and de.id = da.device)\nThis condition should be enough for the optimizer to understand that it\nneeds to scan a specific partition but it scans all the partitions. The\n\"never executed\" tag isnt added to the partitions scans but it is added to\nthe joined table.\n\nJustin - Regarding adding index on the parittion column - I dont understand\nwhy ? the value in that column is the same for all rows in the partition,\nwhen exactly the index will be used ?\n\nOK so I found the problem but other problem appeared.I found out that the pg12 machine had some limits on the vm settings in aspect of cpu and memory. 
Now both machines are exactly the same in aspect of all hardware and dont have any limit.CPU - 8RAM - 32GB.I tested it with cold cache :service postgresql stop;echo 1 > /proc/sys/vm/drop_caches;service postgresql start;psql -d postgres -U postgres;I used two simples queries, one that implicitly comparing the partition key with a const value and another one that joins other table by the partition column(and in this query the problem).The first query : results are better with pg12 : explain analyze select * from iot_data where device=51;PG96                                                       QUERY PLAN------------------------------------------------------------------------------------------------------------------------ Seq Scan on iot_data  (cost=0.00..2083334.60 rows=976667 width=37) (actual time=1.560..67144.164 rows=1010315 loops=1)   Filter: (device = 51)   Rows Removed by Filter: 98989685 Planning time: 9.219 ms Execution time: 67,228.431 ms(5 rows)PG12 - 3 PARTITIONS                                                              QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------- Gather  (cost=1000.00..514086.40 rows=1027284 width=37) (actual time=3.871..15022.118 rows=1010315 loops=1)   Workers Planned: 2   Workers Launched: 2   ->  Parallel Seq Scan on iot_data_0  (cost=0.00..410358.00 rows=428035 width=37) (actual time=1.670..14815.480 rows=336772 loops=3)         Filter: (device = 51)         Rows Removed by Filter: 9764341 Planning Time: 27.292 ms Execution Time: 15085.317 ms(8 rows)The second query with pg12 : QUERY : explain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';  PG96postgres=# explain analyze select * from iot_data da,iot_device de where de.name in ('50a') and de.id = da.device and da.metadata=50 and lower(da.data) like '50';                                                         QUERY PLAN----------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..2583361.51 rows=20 width=44) (actual time=44894.312..44894.312 rows=0 loops=1)   Join Filter: (da.device = de.id)   ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (actual time=0.018..0.028 rows=1 loops=1)         Filter: (name = '50a'::text)         Rows Removed by Filter: 99   ->  Seq Scan on iot_data da  (cost=0.00..2583334.84 rows=1954 width=37) (actual time=44894.279..44894.279 rows=0 loops=1)         Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))         Rows Removed by Filter: 100000000 Planning time: 11.313 ms Execution time: 44894.357 ms(10 rows)PG12 - 3 PARTITIONS                                                                      QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------- Gather  (cost=1002.26..1563512.35 rows=10 width=44) (actual time=22306.091..22309.209 rows=0 loops=1)   Workers Planned: 2   Workers Launched: 2   ->  Hash Join  (cost=2.26..1562511.35 rows=4 width=44) (actual time=22299.412..22299.413 rows=0 loops=3)         Hash Cond: (da_2.device = de.id)         ->  Parallel Append  (cost=0.00..1562506.90 rows=814 width=37) (actual time=22299.411..22299.411 rows=0 loops=3)               ->  Parallel Seq Scan on iot_data_2 da_2  (cost=0.00..584064.14 rows=305 width=37) (actual 
time=9076.535..9076.535 rows=0 loops=3)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 12460009               ->  Parallel Seq Scan on iot_data_1 da_1  (cost=0.00..504948.69 rows=262 width=36) (actual time=10296.751..10296.751 rows=0 loops=2)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 16158316               ->  Parallel Seq Scan on iot_data_0 da  (cost=0.00..473490.00 rows=247 width=37) (actual time=19075.081..19075.081 rows=0 loops=1)                     Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))                     Rows Removed by Filter: 30303339         ->  Hash  (cost=2.25..2.25 rows=1 width=7) (never executed)               ->  Seq Scan on iot_device de  (cost=0.00..2.25 rows=1 width=7) (never executed)                     Filter: (name = '50a'::text) Planning Time: 30.429 ms Execution Time: 22309.364 ms(20 rows)I tried disabling max_parallel_workers_gathers but It just decreased the db`s performance.Now regarding the main issue here - as u can see when I used the second query, I didnt mentioned the partition column specificly but I joined another table based on it(\n\nwhere de.name in ('50a') and de.id = da.device)This condition should be enough for the optimizer to understand that it needs to scan a specific partition but it scans all the partitions. The \"never executed\" tag isnt added to the partitions scans but it is added to the joined table.Justin - Regarding adding index on the parittion column - I dont understand why ? the value in that column is the same for all rows in the partition, when exactly the index will be used ?", "msg_date": "Mon, 9 Mar 2020 15:08:49 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "> Also, it's not required, but I think a typical partitioning schema would\n> have\n> an index on the column being partitioned. I see you have an index on\n> iot_data(metadata,lower(data)), so I still wonder whether you'd have better\n> results partitioned on metadata, or otherwise maybe adding an index on\n> \"device\". But I don't know what your typical queries are.\n>\n> I understood now why u suggested an index on the partition column. It\ndepends on how many distinct values of the partition column I'll have in\nthat partition.\nThanks for the suggestion , good idea !\n\nAlso, it's not required, but I think a typical partitioning schema would havean index on the column being partitioned.  I see you have an index oniot_data(metadata,lower(data)), so I still wonder whether you'd have betterresults partitioned on metadata, or otherwise maybe adding an index on\"device\".  But I don't know what your typical queries are.I understood now why u suggested an index on the partition column. 
It depends on how many distinct values of the partition column I'll have in that partition.Thanks for the suggestion , good idea !", "msg_date": "Mon, 9 Mar 2020 15:16:47 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" }, { "msg_contents": "On Tue, 10 Mar 2020 at 02:08, Mariel Cherkassky\n<[email protected]> wrote:\n\n> PG12 - 3 PARTITIONS\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Gather (cost=1002.26..1563512.35 rows=10 width=44) (actual time=22306.091..22309.209 rows=0 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Hash Join (cost=2.26..1562511.35 rows=4 width=44) (actual time=22299.412..22299.413 rows=0 loops=3)\n> Hash Cond: (da_2.device = de.id)\n> -> Parallel Append (cost=0.00..1562506.90 rows=814 width=37) (actual time=22299.411..22299.411 rows=0 loops=3)\n> -> Parallel Seq Scan on iot_data_2 da_2 (cost=0.00..584064.14 rows=305 width=37) (actual time=9076.535..9076.535 rows=0 loops=3)\n> Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))\n> Rows Removed by Filter: 12460009\n> -> Parallel Seq Scan on iot_data_1 da_1 (cost=0.00..504948.69 rows=262 width=36) (actual time=10296.751..10296.751 rows=0 loops=2)\n> Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))\n> Rows Removed by Filter: 16158316\n> -> Parallel Seq Scan on iot_data_0 da (cost=0.00..473490.00 rows=247 width=37) (actual time=19075.081..19075.081 rows=0 loops=1)\n> Filter: ((metadata = 50) AND (lower(data) ~~ '50'::text))\n> Rows Removed by Filter: 30303339\n> -> Hash (cost=2.25..2.25 rows=1 width=7) (never executed)\n> -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=7) (never executed)\n> Filter: (name = '50a'::text)\n> Planning Time: 30.429 ms\n> Execution Time: 22309.364 ms\n> (20 rows)\n\n From what I can work out, the DDL you used here is:\n\n-- you didn't seem to specify the DDL for iot_device, so I used:\ncreate table iot_device (\nid bigint primary key,\nname text not null\n);\n\ninsert into iot_device select x,x::Text || 'a' from generate_Series(1,100) x;\n\ncreate table iot_data(id serial ,data text,metadata bigint,device\nbigint references iot_device(id),primary key(id,device)) partition by\nhash(device);\ncreate table iot_data_0 partition of iot_data for values with (MODULUS\n3, remainder 0);\ncreate table iot_data_1 partition of iot_data for values with (MODULUS\n3, remainder 1);\ncreate table iot_data_2 partition of iot_data for values with (MODULUS\n3, remainder 2);\n\ninsert into iot_data select\ngenerate_series(1,10000000),random()*10,random()*254,random()*99+1;\ncreate index on iot_data(metadata,lower(data));\nvacuum analyze iot_data;\n\nIn which case, you're getting a pretty different plan than I am. 
(I\nadmit that I only tried on current master and not PG12.2, however, I\nsee no reason that PG12.2 shouldn't produce the same plan)\n\nI get:\n\n# explain analyze select * from iot_data da,iot_device de where\nde.name in ('50a') and de.id = da.device and da.metadata=50 and\nlower(da.data) like '50';\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.56..28.04 rows=1 width=49) (actual\ntime=0.058..0.058 rows=0 loops=1)\n Join Filter: (da.device = de.id)\n -> Seq Scan on iot_device de (cost=0.00..2.25 rows=1 width=11)\n(actual time=0.013..0.016 rows=1 loops=1)\n Filter: (name = '50a'::text)\n Rows Removed by Filter: 99\n -> Append (cost=0.56..25.76 rows=3 width=38) (actual\ntime=0.040..0.040 rows=0 loops=1)\n -> Index Scan using iot_data_0_metadata_lower_idx on\niot_data_0 da_1 (cost=0.56..8.58 rows=1 width=38) (actual\ntime=0.020..0.020 rows=0 loops=1)\n Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))\n Filter: (lower(data) ~~ '50'::text)\n -> Index Scan using iot_data_1_metadata_lower_idx on\niot_data_1 da_2 (cost=0.56..8.58 rows=1 width=38) (actual\ntime=0.010..0.010 rows=0 loops=1)\n Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))\n Filter: (lower(data) ~~ '50'::text)\n -> Index Scan using iot_data_2_metadata_lower_idx on\niot_data_2 da_3 (cost=0.56..8.58 rows=1 width=38) (actual\ntime=0.009..0.009 rows=0 loops=1)\n Index Cond: ((metadata = 50) AND (lower(data) = '50'::text))\n Filter: (lower(data) ~~ '50'::text)\n Planning Time: 0.280 ms\n Execution Time: 0.094 ms\n(17 rows)\n\nAre you certain that you added an index on iot_data (metadata, lower(data)); ?\n\n\n", "msg_date": "Tue, 10 Mar 2020 10:43:46 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg12 partitions show bad performance vs pg96" } ]
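[Illustrative note, not part of the archived thread: a minimal SQL sketch of the two points discussed above, run-time partition pruning and an index on the hash-partition key. It assumes the iot_data/iot_device DDL that David Rowley reconstructed; the index name and settings shown are hypothetical.]

-- Run-time pruning only helps when the executor learns the partition key value
-- before scanning (for example a parameterized nested loop on device); pruned
-- partitions then appear as "(never executed)" or as "Subplans Removed" in
-- EXPLAIN ANALYZE output.
SET enable_partition_pruning = on;
EXPLAIN (ANALYZE)
SELECT da.*
FROM iot_device de
JOIN iot_data da ON da.device = de.id
WHERE de.name = '50a';

-- With hash partitioning, one partition still holds many distinct device values
-- (about 100 devices spread over 3 partitions in this thread), so an index on
-- the partition key lets that partition be probed instead of fully scanned:
CREATE INDEX iot_data_device_idx ON iot_data (device);
EXPLAIN (ANALYZE) SELECT * FROM iot_data WHERE device = 51;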
[ { "msg_contents": "Hello,\n\nOn PG 12.2, I am analyzing a performance problem when using a client\n(Elixir/postgrex) querying via the extended query protocol. I am comparing\nwith\npsql and a C program. Logs for all three follow this short explanation.\n\nThe query is trivial: `SELECT [cols] FROM t WHERE id = X` on a 65K row\ntable.\n\nThe Elixir client executes this as an extended query in >500-700ms, very\nslow.\nIf relevant, the client does not use libpq, it is a native implementation.\n\nA simple query via psql `PREPARE q AS ... ; EXECUTE q;` executes in ~130ms.\n(Aside, please let me know if psql can execute extended queries.)\n\nTo compare another extended query protocol client, I wrote a tiny C program\nusing libpq PQprepare()/PQexecPrepared() executes in ~100ms.\n\nI have tried to make the three tests as similar as possible; all are via\nlocalhost and use named statements.\n\nAll use an identical query plan. There is a btree index on the WHERE col,\nbut\nthe table is small enough it is not used.\n\nThe above durations are consistent across server restart, reboot, and\nrepetition\n(i.e. still >500ms if run multiple times), so it seems independent of\nfilesystem caching, buffers, etc.\n\nObviously the client's query execution is somehow different, but I do not\nknow\nwhat/why.\n\nI have enabled auto_explain min_duration (0), analyze, buffers, verbose and\nsettings. What more can I log or do to postgres to understand why the\nclient is\nbehaving poorly? Would wireshark on client messages reveal anything\npostgres\ncan't log?\n\nOther suggestions much appreciated as well.\n\n(I happen to be on OSX, but I can test elsewhere if necessary.)\n\nRegards,\nrichard\n\n\nElixir/postgrex extended query: (always >500ms)\n-------------------------------\n2020-03-11 13:46:20.090 EDT [3401] LOG: connection received:\nhost=127.0.0.1 port=50128\n2020-03-11 13:46:20.096 EDT [3401] LOG: connection authorized:\nuser=testuser database=biodata\n2020-03-11 13:46:20.141 EDT [3401] LOG: duration: 1.138 ms parse ecto_98:\nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\",\ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\"\nFROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:46:20.144 EDT [3401] LOG: duration: 2.292 ms bind ecto_98:\nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\",\ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\"\nFROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:46:20.658 EDT [3401] LOG: duration: 513.791 ms execute\necto_98: SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\",\ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:46:20.658 EDT [3401] LOG: duration: 513.792 ms plan:\nQuery Text: SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\",\ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)\nSeq Scan on public.genes g0 (cost=0.00..6591.62 rows=60632 width=224)\n(actual time=0.523..33.229 rows=60623 loops=1)\n Output: id, genome_id, seqid, source, feature, start, \"end\", score,\nstrand, phase, attribute\n Filter: (g0.genome_id = 1)\n Rows Removed by Filter: 3907\n Buffers: shared read=5785\n2020-03-11 13:46:20.887 EDT [3401] LOG: disconnection: session time:\n0:00:00.796 user=testuser 
database=biodata host=127.0.0.1 port=50128\n\n\n\npsql simple PREPARE/EXECUTE:\n----------------------------\n2020-03-11 13:46:40.021 EDT [3438] LOG: connection received: host=::1\nport=50129\n2020-03-11 13:46:40.044 EDT [3438] LOG: connection authorized:\nuser=testuser database=biodata application_name=psql\n2020-03-11 13:46:40.106 EDT [3438] LOG: duration: 58.071 ms plan:\nQuery Text: PREPARE q AS SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\",\ng0.\"source\", g0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\",\ng0.\"phase\", g0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1);\nEXECUTE q;\nSeq Scan on public.genes g0 (cost=0.00..6591.62 rows=60632 width=224)\n(actual time=0.060..25.542 rows=60623 loops=1)\n Output: id, genome_id, seqid, source, feature, start, \"end\", score,\nstrand, phase, attribute\n Filter: (g0.genome_id = 1)\n Rows Removed by Filter: 3907\n Buffers: shared hit=42 read=5743\n2020-03-11 13:46:40.182 EDT [3438] LOG: duration: 136.173 ms statement:\nPREPARE q AS SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\",\ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1); EXECUTE q;\n2020-03-11 13:46:40.182 EDT [3438] DETAIL: prepare: PREPARE q AS SELECT\ng0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", g0.\"start\",\ng0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\" FROM \"genes\"\nAS g0 WHERE (g0.\"genome_id\" = 1); EXECUTE q;\n2020-03-11 13:46:41.140 EDT [3438] LOG: disconnection: session time:\n0:00:01.119 user=testuser database=biodata host=::1 port=50129\n\n\n\nC libpq extended query:\n-----------------------\n2020-03-11 13:50:00.220 EDT [4299] LOG: connection received: host=::1\nport=50137\n2020-03-11 13:50:00.232 EDT [4299] LOG: connection authorized:\nuser=testuser database=biodata\n2020-03-11 13:50:00.234 EDT [4299] LOG: duration: 0.437 ms parse foo:\nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\",\ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\"\nFROM genes AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:50:00.235 EDT [4299] LOG: duration: 0.489 ms bind foo:\nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\",\ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\"\nFROM genes AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:50:00.342 EDT [4299] LOG: duration: 106.874 ms execute foo:\nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\",\ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", g0.\"attribute\"\nFROM genes AS g0 WHERE (g0.\"genome_id\" = 1)\n2020-03-11 13:50:00.342 EDT [4299] LOG: duration: 106.861 ms plan:\nQuery Text: SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\",\ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\ng0.\"attribute\" FROM genes AS g0 WHERE (g0.\"genome_id\" = 1)\nSeq Scan on public.genes g0 (cost=0.00..6591.62 rows=60632 width=224)\n(actual time=0.056..25.049 rows=60623 loops=1)\n Output: id, genome_id, seqid, source, feature, start, \"end\", score,\nstrand, phase, attribute\n Filter: (g0.genome_id = 1)\n Rows Removed by Filter: 3907\n Buffers: shared hit=74 read=5711\n2020-03-11 13:50:00.345 EDT [4299] LOG: disconnection: session time:\n0:00:00.125 user=testuser database=biodata host=::1 port=50137\n\nHello,On PG 
12.2, I am analyzing a performance problem when using a client(Elixir/postgrex) querying via the extended query protocol.  I am comparing withpsql and a C program.  Logs for all three follow this short explanation.The query is trivial: `SELECT [cols] FROM t WHERE id = X` on a 65K row table.The Elixir client executes this as an extended query in >500-700ms, very slow.If relevant, the client does not use libpq, it is a native implementation.A simple query via psql `PREPARE q AS ... ; EXECUTE q;` executes in ~130ms.(Aside, please let me know if psql can execute extended queries.)To compare another extended query protocol client, I wrote a tiny C programusing libpq PQprepare()/PQexecPrepared() executes in ~100ms.I have tried to make the three tests as similar as possible; all are vialocalhost and use named statements.All use an identical query plan. There is a btree index on the WHERE col, butthe table is small enough it is not used.The above durations are consistent across server restart, reboot, and repetition(i.e.  still >500ms if run multiple times), so it seems independent offilesystem caching, buffers, etc.Obviously the client's query execution is somehow different, but I do not knowwhat/why.I have enabled auto_explain min_duration (0), analyze, buffers, verbose andsettings.  What more can I log or do to postgres to understand why the client isbehaving poorly?  Would wireshark on client messages reveal anything postgrescan't log?Other suggestions much appreciated as well.(I happen to be on OSX, but I can test elsewhere if necessary.)Regards,richardElixir/postgrex extended query: (always >500ms)-------------------------------2020-03-11 13:46:20.090 EDT [3401] LOG:  connection received: host=127.0.0.1 port=501282020-03-11 13:46:20.096 EDT [3401] LOG:  connection authorized: user=testuser database=biodata2020-03-11\n 13:46:20.141 EDT [3401] LOG:  duration: 1.138 ms  parse ecto_98: SELECT\n g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11\n 13:46:20.144 EDT [3401] LOG:  duration: 2.292 ms  bind ecto_98: SELECT \ng0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11\n 13:46:20.658 EDT [3401] LOG:  duration: 513.791 ms  execute ecto_98: \nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11 13:46:20.658 EDT [3401] LOG:  duration: 513.792 ms  plan:\n\tQuery Text: SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", \ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\n g0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1)\tSeq Scan on public.genes g0  (cost=0.00..6591.62 rows=60632 width=224) (actual time=0.523..33.229 rows=60623 loops=1)\t  Output: id, genome_id, seqid, source, feature, start, \"end\", score, strand, phase, attribute\t  Filter: (g0.genome_id = 1)\t  Rows Removed by Filter: 3907\t  Buffers: shared read=57852020-03-11\n 13:46:20.887 EDT [3401] LOG:  disconnection: session time: 0:00:00.796 \nuser=testuser database=biodata host=127.0.0.1 port=50128psql simple 
PREPARE/EXECUTE:----------------------------2020-03-11 13:46:40.021 EDT [3438] LOG:  connection received: host=::1 port=501292020-03-11 13:46:40.044 EDT [3438] LOG:  connection authorized: user=testuser database=biodata application_name=psql2020-03-11 13:46:40.106 EDT [3438] LOG:  duration: 58.071 ms  plan:\n\tQuery Text: PREPARE q AS SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", \ng0.\"source\", g0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", \ng0.\"strand\", g0.\"phase\", g0.\"attribute\" FROM \"genes\" AS g0 WHERE \n(g0.\"genome_id\" = 1); EXECUTE q;\tSeq Scan on public.genes g0  (cost=0.00..6591.62 rows=60632 width=224) (actual time=0.060..25.542 rows=60623 loops=1)\t  Output: id, genome_id, seqid, source, feature, start, \"end\", score, strand, phase, attribute\t  Filter: (g0.genome_id = 1)\t  Rows Removed by Filter: 3907\t  Buffers: shared hit=42 read=57432020-03-11\n 13:46:40.182 EDT [3438] LOG:  duration: 136.173 ms  statement: PREPARE q\n AS SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", \ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\n g0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1); EXECUTE \nq;2020-03-11 13:46:40.182 EDT [3438] DETAIL:  prepare: PREPARE q AS \nSELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM \"genes\" AS g0 WHERE (g0.\"genome_id\" = 1); EXECUTE q;2020-03-11\n 13:46:41.140 EDT [3438] LOG:  disconnection: session time: 0:00:01.119 \nuser=testuser database=biodata host=::1 port=50129C libpq extended query:-----------------------2020-03-11 13:50:00.220 EDT [4299] LOG:  connection received: host=::1 port=501372020-03-11 13:50:00.232 EDT [4299] LOG:  connection authorized: user=testuser database=biodata2020-03-11\n 13:50:00.234 EDT [4299] LOG:  duration: 0.437 ms  parse foo: SELECT \ng0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM genes AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11\n 13:50:00.235 EDT [4299] LOG:  duration: 0.489 ms  bind foo: SELECT \ng0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM genes AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11\n 13:50:00.342 EDT [4299] LOG:  duration: 106.874 ms  execute foo: SELECT\n g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", g0.\"feature\", \ng0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\", \ng0.\"attribute\" FROM genes AS g0 WHERE (g0.\"genome_id\" = 1)2020-03-11 13:50:00.342 EDT [4299] LOG:  duration: 106.861 ms  plan:\n\tQuery Text: SELECT g0.\"id\", g0.\"genome_id\", g0.\"seqid\", g0.\"source\", \ng0.\"feature\", g0.\"start\", g0.\"end\", g0.\"score\", g0.\"strand\", g0.\"phase\",\n g0.\"attribute\" FROM genes AS g0 WHERE (g0.\"genome_id\" = 1)\tSeq Scan on public.genes g0  (cost=0.00..6591.62 rows=60632 width=224) (actual time=0.056..25.049 rows=60623 loops=1)\t  Output: id, genome_id, seqid, source, feature, start, \"end\", score, strand, phase, attribute\t  Filter: (g0.genome_id = 1)\t  Rows Removed by Filter: 3907\t  Buffers: shared hit=74 read=57112020-03-11\n 13:50:00.345 EDT [4299] LOG:  disconnection: session time: 0:00:00.125 \nuser=testuser database=biodata host=::1 port=50137", "msg_date": "Wed, 11 Mar 2020 15:31:48 -0400", 
"msg_from": "Richard Michael <[email protected]>", "msg_from_op": true, "msg_subject": "Slow ext'd query via client native implementation vs. libpq & simple\n psql" }, { "msg_contents": "On Wed, Mar 11, 2020 at 03:31:48PM -0400, Richard Michael wrote:\n> The query is trivial: `SELECT [cols] FROM t WHERE id = X` on a 65K row\n> table.\n\n> The Elixir client executes this as an extended query in >500-700ms, very\n> slow.\n> If relevant, the client does not use libpq, it is a native implementation.\n\n> A simple query via psql `PREPARE q AS ... ; EXECUTE q;` executes in ~130ms.\n> (Aside, please let me know if psql can execute extended queries.)\n> To compare another extended query protocol client, I wrote a tiny C program\n> using libpq PQprepare()/PQexecPrepared() executes in ~100ms.\n\npsql can't do it, but pygres can do it since last year.\n\n> Would wireshark on client messages reveal anything postgres can't log?\n\nI think maybe, like how many round trips there are. \nDoes the difference between the clients scale linearly with the number of rows\nreturned?\n\n> Elixir/postgrex extended query: (always >500ms)\n> -------------------------------\n\n> 2020-03-11 13:46:20.658 EDT [3401] LOG: duration: 513.792 ms plan:\n...\n> Seq Scan on public.genes g0 (cost=0.00..6591.62 rows=60632 width=224)\n> (actual time=0.523..33.229 rows=60623 loops=1)\n\nYou can see the \"actual time\" is low (that's postgres running the query), so\nthe rest seems to comes from something like sending data back to the client or\nthe client parsing the results.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Mar 2020 16:59:21 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow ext'd query via client native implementation vs. libpq &\n simple psql" }, { "msg_contents": "Richard Michael <[email protected]> writes:\n> On PG 12.2, I am analyzing a performance problem when using a client\n> (Elixir/postgrex) querying via the extended query protocol. I am comparing\n> with\n> psql and a C program. Logs for all three follow this short explanation.\n\nHmm, your auto-explain log entries show all three queries having\nserver-side execution times of a couple dozen msec. It seems like\nthe excess time must be involved in transmitting the data to the\nclient. So I guess I'd be looking at whether the client is really\nslow at absorbing data for some reason.\n\nIIRC, auto-explain's 'actual time' for the top-level query node does not\ncount the time to format data or transmit it to the client. Still, we\nhave an upper bound of ~80 msec for that to happen with the libpq client,\nand there's no obvious reason why it'd be different for the other client.\n\n[ thinks for a bit... ] You might double check that those two clients\nare using the same client_encoding setting. 400ms doing encoding\nconversion seems excessive, but there aren't that many other possibilities\nfor the I/O time to be different.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 18:54:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow ext'd query via client native implementation vs. libpq &\n simple psql" }, { "msg_contents": "On Thu, Mar 12, 2020 at 04:27:54PM -0400, Richard Michael wrote:\n> > psql can't do it, but pygres can do it since last year.\n> \n> Thank you for mentioning it. Do you mean this:\n> https://github.com/rogamba/pygres ?\n\nUgh, no. 
I'm referring to PyGreSQL, which was at one point a part of the\npostgres source tree.\nhttps://www.pygresql.org/\n\n$ python -c \"import pg; print(pg.DB('postgres').query('SELECT 1').getresult())\"\n[(1,)]\n\n$ python -c \"import pg; p=pg.DB('postgres'); p.prepare('q', 'SELECT 1'); print(p.query_prepared('q').getresult())\"\n[(1,)]\n\n-- \nJustin\n\n(I'm having trouble believe someone made a project called \"pygres\", which uses\npython, postgres, and psycopg... Will mail D'Arcy)\n\n\n", "msg_date": "Thu, 12 Mar 2020 15:43:36 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow ext'd query via client native implementation vs. libpq &\n simple psql" } ]
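[Illustrative note, not part of the archived thread: a small sketch of the checks suggested above, separating server-side execution time from transfer and client-side decoding, and comparing the encoding each client negotiates. The genes table comes from the thread's logs; the exact commands are an assumption, not taken from any message.]

-- Server-side cost only: rows are produced and discarded, nothing is sent to the client.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM genes WHERE genome_id = 1;

-- End-to-end cost including the wire transfer and client-side parsing
-- (run from psql with \timing on, or time the equivalent call in each client):
SELECT * FROM genes WHERE genome_id = 1;

-- Encoding conversion is one of the few per-client differences large enough to
-- explain the gap; check what each session actually negotiated:
SHOW client_encoding;
SHOW server_encoding;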
[ { "msg_contents": "Hi,\n\nWe have a multi-tenant app where each tenant has their own schemas, and\ninherits tables from tables defined in the public schema. Other shared data\nsuch as data types are also stored in the public schema. While running this\napp, every transaction is started with setting the search_path to\n<tenant_id>, public.\n\nWe haven't noticed any issues with this before now, until we started seeing\nreally slow planning time on some relatively simple queries:\n\nmm_prod=> explain analyze select cs.* from contacts_segments cs inner join\nsegments s on s.sid = cs.segment_id inner join contacts_lists cl on\ncl.email = cs.email and cl.lid = s.lid where cs.segment_id = 34983 and\ncl.lstatus = 'a';\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=452.96..1887.72 rows=1518 width=41) (actual\ntime=6.581..18.845 rows=2945 loops=1)\n Hash Cond: ((cs.email)::citext = (cl.email)::citext)\n -> Bitmap Heap Scan on contacts_segments cs (cost=127.17..1488.89\nrows=9258 width=41) (actual time=0.501..4.085 rows=9258 loops=1)\n Recheck Cond: (segment_id = 34983)\n Heap Blocks: exact=1246\n -> Bitmap Index Scan on contacts_segments_segment_id_idx\n (cost=0.00..124.86 rows=9258 width=0) (actual time=0.380..0.380 rows=9258\nloops=1)\n Index Cond: (segment_id = 34983)\n -> Hash (cost=298.45..298.45 rows=2187 width=25) (actual\ntime=6.061..6.061 rows=4645 loops=1)\n Buckets: 8192 (originally 4096) Batches: 1 (originally 1) Memory\nUsage: 324kB\n -> Nested Loop (cost=0.56..298.45 rows=2187 width=25) (actual\ntime=0.025..3.182 rows=4645 loops=1)\n -> Index Scan using segments_pkey on segments s\n (cost=0.27..2.49 rows=1 width=8) (actual time=0.010..0.010 rows=1 loops=1)\n Index Cond: (sid = 34983)\n -> Index Scan using contacts_lists_lid_idx on\ncontacts_lists cl (cost=0.29..288.53 rows=744 width=25) (actual\ntime=0.012..2.791 rows=4645 loops=1)\n Index Cond: (lid = s.lid)\n Filter: ((lstatus)::bpchar = 'a'::bpchar)\n Rows Removed by Filter: 6628\n Planning Time: 1930.901 ms\n Execution Time: 18.996 ms\n(18 rows)\n\nThe planning time is the same even if running the same query multiple times\nwithin the same session. 
When having only the tenant's schema in the\nsearch_path, planning time is much improved:\n\nmm_prod=> set search_path = eliksir;\nSET\nmm_prod=> explain analyze select cs.* from contacts_segments cs inner join\nsegments s on s.sid = cs.segment_id inner join contacts_lists cl on\ncl.email = cs.email and cl.lid = s.lid where cs.segment_id = 34983 and\ncl.lstatus = 'a';\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=452.96..1887.72 rows=1517 width=41) (actual\ntime=3.980..8.554 rows=2945 loops=1)\n Hash Cond: ((cs.email)::text = (cl.email)::text)\n -> Bitmap Heap Scan on contacts_segments cs (cost=127.17..1488.89\nrows=9258 width=41) (actual time=0.495..3.467 rows=9258 loops=1)\n Recheck Cond: (segment_id = 34983)\n Heap Blocks: exact=1246\n -> Bitmap Index Scan on contacts_segments_segment_id_idx\n (cost=0.00..124.86 rows=9258 width=0) (actual time=0.376..0.376 rows=9258\nloops=1)\n Index Cond: (segment_id = 34983)\n -> Hash (cost=298.45..298.45 rows=2187 width=25) (actual\ntime=3.476..3.476 rows=4645 loops=1)\n Buckets: 8192 (originally 4096) Batches: 1 (originally 1) Memory\nUsage: 324kB\n -> Nested Loop (cost=0.56..298.45 rows=2187 width=25) (actual\ntime=0.019..2.726 rows=4645 loops=1)\n -> Index Scan using segments_pkey on segments s\n (cost=0.27..2.49 rows=1 width=8) (actual time=0.005..0.006 rows=1 loops=1)\n Index Cond: (sid = 34983)\n -> Index Scan using contacts_lists_lid_idx on\ncontacts_lists cl (cost=0.29..288.53 rows=744 width=25) (actual\ntime=0.012..2.394 rows=4645 loops=1)\n Index Cond: (lid = s.lid)\n Filter: ((lstatus)::bpchar = 'a'::bpchar)\n Rows Removed by Filter: 6628\n Planning Time: 23.416 ms\n Execution Time: 8.668 ms\n(18 rows)\n\nTo give the schema:\n\nmm_prod=> \\d contacts_segments\n Table \"eliksir.contacts_segments\"\n Column | Type | Collation | Nullable | Default\n------------+-----------------------------+-----------+----------+---------\n email | email | | not null |\n segment_id | integer | | not null |\n entered_at | timestamp without time zone | | not null | now()\n exited_at | timestamp without time zone | | |\nIndexes:\n \"contacts_segments_pkey\" PRIMARY KEY, btree (email, segment_id)\n \"contacts_segments_segment_id_idx\" btree (segment_id)\nForeign-key constraints:\n \"contacts_segments_email_fkey\" FOREIGN KEY (email) REFERENCES\ncontacts(email) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE\n \"contacts_segments_segment_id_fkey\" FOREIGN KEY (segment_id) REFERENCES\nsegments(sid) ON DELETE CASCADE DEFERRABLE\nInherits: public.contacts_segments\n\nmm_prod=> \\d segments\n Table \"eliksir.segments\"\n Column | Type | Collation | Nullable |\n Default\n--------------+-----------------------------+-----------+----------+---------------------------------------\n sid | integer | | not null |\nnextval('segments_sid_seq'::regclass)\n lid | integer | | not null |\n segmentname | text | | not null |\n createdat | timestamp without time zone | | not null | now()\n label | text | | |\n num_contacts | integer | | |\n cid | integer | | |\n mid | integer | | |\n original_sid | integer | | |\n archived_at | timestamp without time zone | | |\nIndexes:\n \"segments_pkey\" PRIMARY KEY, btree (sid)\n \"segments_name\" UNIQUE, btree (lid, segmentname)\nForeign-key constraints:\n \"segments_cid_fkey\" FOREIGN KEY (cid) REFERENCES campaigns(cid) ON\nUPDATE RESTRICT ON DELETE CASCADE DEFERRABLE\n \"segments_lid_fkey\" 
FOREIGN KEY (lid) REFERENCES lists(lid) ON UPDATE\nRESTRICT ON DELETE CASCADE DEFERRABLE\n \"segments_mid_fkey\" FOREIGN KEY (mid) REFERENCES mails(mid) ON UPDATE\nRESTRICT ON DELETE CASCADE DEFERRABLE\n \"segments_original_sid_fkey\" FOREIGN KEY (original_sid) REFERENCES\nsegments(sid) ON UPDATE RESTRICT ON DELETE CASCADE DEFERRABLE\nReferenced by:\n TABLE \"contacts_segments\" CONSTRAINT\n\"contacts_segments_segment_id_fkey\" FOREIGN KEY (segment_id) REFERENCES\nsegments(sid) ON DELETE CASCADE DEFERRABLE\n TABLE \"mails_segments\" CONSTRAINT \"mails_segments_sid_fkey\" FOREIGN KEY\n(sid) REFERENCES segments(sid) ON UPDATE RESTRICT ON DELETE SET NULL\nDEFERRABLE\n TABLE \"segments\" CONSTRAINT \"segments_original_sid_fkey\" FOREIGN KEY\n(original_sid) REFERENCES segments(sid) ON UPDATE RESTRICT ON DELETE\nCASCADE DEFERRABLE\nInherits: public.segments\n\nmm_prod=> \\d contacts_lists\n Table \"eliksir.contacts_lists\"\n Column | Type | Collation |\nNullable | Default\n----------------------------+-----------------------------+-----------+----------+-------------\n email | email | | not\nnull |\n lid | integer | | not\nnull |\n lstatus | contact_status | | not\nnull | 'a'::bpchar\n ladded | timestamp without time zone | | not\nnull | now()\n lstatuschanged | timestamp without time zone | | not\nnull | now()\n skip_preexisting_campaigns | boolean | |\n |\n source_api_client_id | uuid | |\n |\n source_integration_id | text | |\n |\n source_form_id | integer | |\n |\n source_user_id | text | |\n |\n is_bulk_added | boolean | |\n | false\nIndexes:\n \"contacts_lists_pkey\" PRIMARY KEY, btree (email, lid)\n \"contacts_lists_lid_idx\" btree (lid) CLUSTER\n \"contacts_lists_lstatus_idx\" btree (lstatus)\n \"contacts_lists_lstatuschanged_idx\" btree (lstatuschanged)\nForeign-key constraints:\n \"contacts_lists_email_fkey\" FOREIGN KEY (email) REFERENCES\ncontacts(email) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE\n \"contacts_lists_lid_fkey\" FOREIGN KEY (lid) REFERENCES lists(lid) ON\nUPDATE RESTRICT ON DELETE CASCADE DEFERRABLE\n \"contacts_lists_source_form_id_fkey\" FOREIGN KEY (source_form_id)\nREFERENCES forms(id) ON UPDATE RESTRICT ON DELETE SET NULL DEFERRABLE\nTriggers:\n [multiple triggers]\nInherits: public.contacts_lists\n\nI tried investigated a bit PostgreSQL 12 vs 9.4 we were on a few weeks ago,\nand while this exact query can not be run on our 9.4 database (since some\ntables here are new), I found a similar query where the data is more or\nless unchanged in the old backup. 
In this query, planning time is much\nslower and constant in 12 compared to 9.4:\n\nPostgreSQL 12.2:\n\nmm_prod=> set search_path = eliksir, public;\nSET\nmm_prod=> explain analyze select * from segments_with_contacts swc inner\njoin segments s using (sid) inner join contacts_lists cl on cl.email =\nswc.email and s.lid = cl.lid where swc.sid = 34983 and lstatus = 'a';\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=326.21..931.38 rows=402 width=187) (actual\ntime=6.708..10.343 rows=2216 loops=1)\n Hash Cond: (swc.email = (cl.email)::citext)\n -> Index Only Scan using segments_with_contacts_sid_lid_email_idx on\nsegments_with_contacts swc (cost=0.42..587.71 rows=2218 width=29) (actual\ntime=0.013..0.546 rows=2216 loops=1)\n Index Cond: (sid = 34983)\n Heap Fetches: 2216\n -> Hash (cost=298.45..298.45 rows=2187 width=162) (actual\ntime=6.687..6.687 rows=4645 loops=1)\n Buckets: 8192 (originally 4096) Batches: 1 (originally 1) Memory\nUsage: 817kB\n -> Nested Loop (cost=0.56..298.45 rows=2187 width=162) (actual\ntime=0.019..3.315 rows=4645 loops=1)\n -> Index Scan using segments_pkey on segments s\n (cost=0.27..2.49 rows=1 width=74) (actual time=0.005..0.006 rows=1 loops=1)\n Index Cond: (sid = 34983)\n -> Index Scan using contacts_lists_lid_idx on\ncontacts_lists cl (cost=0.29..288.53 rows=744 width=88) (actual\ntime=0.011..2.572 rows=4645 loops=1)\n Index Cond: (lid = s.lid)\n Filter: ((lstatus)::bpchar = 'a'::bpchar)\n Rows Removed by Filter: 6628\n Planning Time: 1096.942 ms\n Execution Time: 10.473 ms\n(16 rows)\n\nPostgreSQL 9.4.12:\n\nmm_prod=> set search_path = eliksir, public;\nSET\nmm_prod=> explain analyze select * from segments_with_contacts swc inner\njoin segments s using (sid) inner join contacts_lists cl on cl.email =\nswc.email and s.lid = cl.lid where swc.sid = 34983 and lstatus = 'a';\n\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=700.92..1524.11 rows=444 width=187) (actual\ntime=13.800..20.009 rows=2295 loops=1)\n Hash Cond: (swc.email = (cl.email)::citext)\n -> Bitmap Heap Scan on segments_with_contacts swc (cost=110.44..888.50\nrows=2325 width=29) (actual time=0.318..0.737 rows=2295 loops=1)\n Recheck Cond: (sid = 34983)\n Heap Blocks: exact=19\n -> Bitmap Index Scan on segments_with_contacts_sid_lid_email_idx\n (cost=0.00..109.86 rows=2325 width=0) (actual time=0.299..0.299 rows=2335\nloops=1)\n Index Cond: (sid = 34983)\n -> Hash (cost=559.75..559.75 rows=2459 width=162) (actual\ntime=13.401..13.401 rows=4624 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 749kB\n -> Nested Loop (cost=51.55..559.75 rows=2459 width=162) (actual\ntime=1.375..7.515 rows=4624 loops=1)\n -> Index Scan using segments_pkey on segments s\n (cost=0.28..8.29 rows=1 width=74) (actual time=0.012..0.015 rows=1 loops=1)\n Index Cond: (sid = 34983)\n -> Bitmap Heap Scan on contacts_lists cl\n (cost=51.27..543.86 rows=760 width=88) (actual time=1.355..6.525 rows=4624\nloops=1)\n Recheck Cond: (lid = s.lid)\n Filter: ((lstatus)::bpchar = 'a'::bpchar)\n Rows Removed by Filter: 6428\n Heap Blocks: exact=455\n -> Bitmap Index Scan on contacts_lists_lid_idx\n (cost=0.00..51.08 rows=1439 width=0) (actual time=1.275..1.275 rows=11533\nloops=1)\n Index Cond: (lid = s.lid)\n Planning time: 
22.120 ms\n Execution time: 20.337 ms\n(21 rows)\n\nPostgreSQL 12.2 is run on EPYC 7302P 16-core, 64GB RAM, 2x NVMe disks in\nRAID1, Ubuntu 18.04.1 running 5.0.0-37 HWE kernel. Some settings:\n\nconstraint_exclusion = partition (I tried setting this to off or on, to see\nif that made any difference; nope)\ndefault_statistics_target = 1000\neffective_cache_size = 48GB\neffective_io_concurrency = 200\nmax_worker_processes = 16\nrandom_page_cost = 1.1\nshared_buffers = 16GB\nwork_mem = 16MB\n\nThanks for any guidance.\n\n-- a.\n
loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 749kB         ->  Nested Loop  (cost=51.55..559.75 rows=2459 width=162) (actual time=1.375..7.515 rows=4624 loops=1)               ->  Index Scan using segments_pkey on segments s  (cost=0.28..8.29 rows=1 width=74) (actual time=0.012..0.015 rows=1 loops=1)                     Index Cond: (sid = 34983)               ->  Bitmap Heap Scan on contacts_lists cl  (cost=51.27..543.86 rows=760 width=88) (actual time=1.355..6.525 rows=4624 loops=1)                     Recheck Cond: (lid = s.lid)                     Filter: ((lstatus)::bpchar = 'a'::bpchar)                     Rows Removed by Filter: 6428                     Heap Blocks: exact=455                     ->  Bitmap Index Scan on contacts_lists_lid_idx  (cost=0.00..51.08 rows=1439 width=0) (actual time=1.275..1.275 rows=11533 loops=1)                           Index Cond: (lid = s.lid) Planning time: 22.120 ms Execution time: 20.337 ms(21 rows)PostgreSQL 12.2 is run on EPYC 7302P 16-core, 64GB RAM, 2x NVMe disks in RAID1, Ubuntu 18.04.1 running 5.0.0-37 HWE kernel. Some settings:constraint_exclusion = partition (I tried setting this to off or on, to see if that made any difference; nope)default_statistics_target = 1000effective_cache_size = 48GBeffective_io_concurrency = 200max_worker_processes = 16random_page_cost = 1.1shared_buffers = 16GBwork_mem = 16MBThanks for any guidance.-- a.", "msg_date": "Sat, 21 Mar 2020 13:02:10 +0100", "msg_from": "Anders Steinlein <[email protected]>", "msg_from_op": true, "msg_subject": "Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "\n\nAm 21.03.20 um 13:02 schrieb Anders Steinlein:\n> default_statistics_target = 1000\n\nnot sure if this be the culprit here, but i think this is way too high. \nLeave it at the normal value of 100 and raise it only for particular \ntables and columns.\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n", "msg_date": "Sat, 21 Mar 2020 14:37:02 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "Anders Steinlein <[email protected]> writes:\n> We haven't noticed any issues with this before now, until we started seeing\n> really slow planning time on some relatively simple queries:\n> ...\n> The planning time is the same even if running the same query multiple times\n> within the same session. When having only the tenant's schema in the\n> search_path, planning time is much improved:\n\nI notice a difference in these plans:\n\n> Hash Join (cost=452.96..1887.72 rows=1518 width=41) (actual\n> time=6.581..18.845 rows=2945 loops=1)\n> Hash Cond: ((cs.email)::citext = (cl.email)::citext)\n ^^^^^^ ^^^^^^\n\n> Hash Join (cost=452.96..1887.72 rows=1517 width=41) (actual\n> time=3.980..8.554 rows=2945 loops=1)\n> Hash Cond: ((cs.email)::text = (cl.email)::text)\n ^^^^ ^^^^\n\nI think what is happening is that the \"cl.email = cs.email\" clause\nis resolving as a different operator depending on your search path;\nprobably there is a \"citext = citext\" operator in the public\nschema, and if available the parser will think it's a better match\nthan the \"text = text\" operator. However, \"citext = citext\" can\nbe orders of magnitude slower, depending on what locale settings\nyou're using. 
That's affecting your planning time (since the\nplanner will apply the operator to the values available from\npg_stats), and it's also visibly affecting the query runtime.\n\nNot sure why you'd not have seen the same effect in your 9.4\ninstallation, but maybe you had citext installed somewhere else?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:26:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "On Sat, Mar 21, 2020 at 2:37 PM Andreas Kretschmer <[email protected]>\nwrote:\n\n>\n>\n> Am 21.03.20 um 13:02 schrieb Anders Steinlein:\n> > default_statistics_target = 1000\n>\n> not sure if this be the culprit here, but i think this is way too high.\n> Leave it at the normal value of 100 and raise it only for particular\n> tables and columns.\n>\n\nIt may very well be too high, but the 9.4 instance also has\ndefault_statistics_target = 1000.\n\nBest,\n-- a.\n\nOn Sat, Mar 21, 2020 at 2:37 PM Andreas Kretschmer <[email protected]> wrote:\n\nAm 21.03.20 um 13:02 schrieb Anders Steinlein:\n> default_statistics_target = 1000\n\nnot sure if this be the culprit here, but i think this is way too high. \nLeave it at the normal value of 100 and raise it only for particular \ntables and columns.It may very well be too high, but the 9.4 instance also has default_statistics_target = 1000.Best,-- a.", "msg_date": "Sat, 21 Mar 2020 16:45:47 +0100", "msg_from": "Anders Steinlein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "On Sat, Mar 21, 2020 at 3:26 PM Tom Lane <[email protected]> wrote:\n\n> Anders Steinlein <[email protected]> writes:\n> > We haven't noticed any issues with this before now, until we started\n> seeing\n> > really slow planning time on some relatively simple queries:\n> > ...\n> > The planning time is the same even if running the same query multiple\n> times\n> > within the same session. When having only the tenant's schema in the\n> > search_path, planning time is much improved:\n>\n> I notice a difference in these plans:\n>\n> > Hash Join (cost=452.96..1887.72 rows=1518 width=41) (actual\n> > time=6.581..18.845 rows=2945 loops=1)\n> > Hash Cond: ((cs.email)::citext = (cl.email)::citext)\n> ^^^^^^ ^^^^^^\n>\n> > Hash Join (cost=452.96..1887.72 rows=1517 width=41) (actual\n> > time=3.980..8.554 rows=2945 loops=1)\n> > Hash Cond: ((cs.email)::text = (cl.email)::text)\n> ^^^^ ^^^^\n>\n> I think what is happening is that the \"cl.email = cs.email\" clause\n> is resolving as a different operator depending on your search path;\n> probably there is a \"citext = citext\" operator in the public\n> schema, and if available the parser will think it's a better match\n> than the \"text = text\" operator. However, \"citext = citext\" can\n> be orders of magnitude slower, depending on what locale settings\n> you're using. That's affecting your planning time (since the\n> planner will apply the operator to the values available from\n> pg_stats), and it's also visibly affecting the query runtime.\n>\n> Not sure why you'd not have seen the same effect in your 9.4\n> installation, but maybe you had citext installed somewhere else?\n>\n\n The citext extension is installed in the public schema in both instances.\nAlso, the second query example that I could run on both 12 and 9.4 runs\nwith the citext comparison in both cases. 
From 9.4:\n\nmm_prod=> explain analyze select * from segments_with_contacts swc inner\njoin segments s using (sid) inner join contacts_lists cl on cl.email =\nswc.email and s.lid = cl.lid where swc.sid = 34983 and lstatus = 'a';\n\nQUERY PLAN\n\n------------------------------------------------------------\n------------------------------------------------------------\n---------------------------------------\n Hash Join (cost=700.92..1524.11 rows=444 width=187) (actual\ntime=13.800..20.009 rows=2295 loops=1)\n Hash Cond: (swc.email = (cl.email)::citext)\n ^^^^^^^^\nmm2_prod=> \\d segments_with_contacts\nMaterialized view \"eliksir.segments_with_contacts\"\n Column | Type | Modifiers\n--------+---------+-----------\n lid | integer |\n sid | integer |\n email | citext |\n\nThe tables segments and contacts_lists are identical on the two instances,\ni.e. both are using citext (email domain using the citext type) on both 12\nand 9.4, with the citext extension in the public schema. Is it the\nlc_collate setting citext cares about? lc_collate=nb_NO.UTF-8 on both 9.4\nand 12.\n\nSo I don't understand this big difference? Because it does seem like citext\nis indeed the difference. I tried to modify the query to cast before\njoining, and it is indeed much improved:\n\nmm_prod=> set search_path = eliksir, public;\nSET\nmm_prod=> explain analyze select cs.* from contacts_segments cs inner join\nsegments s on s.sid = cs.segment_id inner join contacts_lists cl on\nlower(cl.email::text) = lower(cs.email::text) and cl.lid = s.lid where\ncs.segment_id = 34983 and cl.lstatus = 'a';\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=2518.61..4565.20 rows=101259 width=41) (actual\ntime=47.278..51.686 rows=2945 loops=1)\n Merge Cond: ((lower((cl.email)::text)) = (lower((cs.email)::text)))\n -> Sort (cost=419.78..425.24 rows=2187 width=25) (actual\ntime=18.283..18.516 rows=4646 loops=1)\n Sort Key: (lower((cl.email)::text))\n Sort Method: quicksort Memory: 665kB\n -> Nested Loop (cost=0.56..298.45 rows=2187 width=25) (actual\ntime=0.057..9.805 rows=4646 loops=1)\n -> Index Scan using segments_pkey on segments s\n (cost=0.27..2.49 rows=1 width=8) (actual time=0.021..0.022 rows=1 loops=1)\n Index Cond: (sid = 34983)\n -> Index Scan using contacts_lists_lid_idx on\ncontacts_lists cl (cost=0.29..288.53 rows=744 width=25) (actual\ntime=0.023..4.953 rows=4646 loops=1)\n Index Cond: (lid = s.lid)\n Filter: ((lstatus)::bpchar = 'a'::bpchar)\n Rows Removed by Filter: 6628\n -> Sort (cost=2098.83..2121.98 rows=9258 width=41) (actual\ntime=28.988..29.373 rows=9258 loops=1)\n Sort Key: (lower((cs.email)::text))\n Sort Method: quicksort Memory: 1598kB\n -> Bitmap Heap Scan on contacts_segments cs\n (cost=127.17..1488.89 rows=9258 width=41) (actual time=0.511..7.910\nrows=9258 loops=1)\n Recheck Cond: (segment_id = 34983)\n Heap Blocks: exact=1246\n -> Bitmap Index Scan on contacts_segments_segment_id_idx\n (cost=0.00..124.86 rows=9258 width=0) (actual time=0.390..0.391 rows=9258\nloops=1)\n Index Cond: (segment_id = 34983)\n Planning Time: 0.416 ms\n Execution Time: 51.924 ms\n(22 rows)\n\nBest,\n-- a.\n\nOn Sat, Mar 21, 2020 at 3:26 PM Tom Lane <[email protected]> wrote:Anders Steinlein <[email protected]> writes:\n> We haven't noticed any issues with this before now, until we started seeing\n> really slow planning time on some relatively simple queries:\n> ...\n> The planning 
time is the same even if running the same query multiple times\n> within the same session. When having only the tenant's schema in the\n> search_path, planning time is much improved:\n\nI notice a difference in these plans:\n\n>  Hash Join  (cost=452.96..1887.72 rows=1518 width=41) (actual\n> time=6.581..18.845 rows=2945 loops=1)\n>    Hash Cond: ((cs.email)::citext = (cl.email)::citext)\n                             ^^^^^^               ^^^^^^\n\n>  Hash Join  (cost=452.96..1887.72 rows=1517 width=41) (actual\n> time=3.980..8.554 rows=2945 loops=1)\n>    Hash Cond: ((cs.email)::text = (cl.email)::text)\n                             ^^^^               ^^^^\n\nI think what is happening is that the \"cl.email = cs.email\" clause\nis resolving as a different operator depending on your search path;\nprobably there is a \"citext = citext\" operator in the public\nschema, and if available the parser will think it's a better match\nthan the \"text = text\" operator.  However, \"citext = citext\" can\nbe orders of magnitude slower, depending on what locale settings\nyou're using.  That's affecting your planning time (since the\nplanner will apply the operator to the values available from\npg_stats), and it's also visibly affecting the query runtime.\n\nNot sure why you'd not have seen the same effect in your 9.4\ninstallation, but maybe you had citext installed somewhere else? The citext extension is installed in the public schema in both instances. Also, the second query example that I could run on both 12 and 9.4 runs with the citext comparison in both cases. From 9.4:mm_prod=> explain analyze select * from segments_with_contacts swc inner join segments s using (sid) inner join contacts_lists cl on cl.email = swc.email and s.lid = cl.lid where swc.sid = 34983 and lstatus = 'a';                                                                          QUERY PLAN                                                                          --------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=700.92..1524.11 rows=444 width=187) (actual time=13.800..20.009 rows=2295 loops=1)   Hash Cond: (swc.email = (cl.email)::citext)                                     ^^^^^^^^mm2_prod=> \\d segments_with_contactsMaterialized view \"eliksir.segments_with_contacts\" Column |  Type   | Modifiers --------+---------+----------- lid    | integer |  sid    | integer |  email  | citext  | The tables segments and contacts_lists are identical on the two instances, i.e. both are using citext (email domain using the citext type) on both 12 and 9.4, with the citext extension in the public schema. Is it the lc_collate setting citext cares about? lc_collate=nb_NO.UTF-8 on both 9.4 and 12.So I don't understand this big difference? Because it does seem like citext is indeed the difference. 
I tried to modify the query to cast before joining, and it is indeed much improved:mm_prod=> set search_path = eliksir, public;SETmm_prod=> explain analyze select cs.* from contacts_segments cs inner join segments s on s.sid = cs.segment_id inner join contacts_lists cl on lower(cl.email::text) = lower(cs.email::text) and cl.lid = s.lid where cs.segment_id = 34983 and cl.lstatus = 'a';                                                                             QUERY PLAN                                                                             -------------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=2518.61..4565.20 rows=101259 width=41) (actual time=47.278..51.686 rows=2945 loops=1)   Merge Cond: ((lower((cl.email)::text)) = (lower((cs.email)::text)))   ->  Sort  (cost=419.78..425.24 rows=2187 width=25) (actual time=18.283..18.516 rows=4646 loops=1)         Sort Key: (lower((cl.email)::text))         Sort Method: quicksort  Memory: 665kB         ->  Nested Loop  (cost=0.56..298.45 rows=2187 width=25) (actual time=0.057..9.805 rows=4646 loops=1)               ->  Index Scan using segments_pkey on segments s  (cost=0.27..2.49 rows=1 width=8) (actual time=0.021..0.022 rows=1 loops=1)                     Index Cond: (sid = 34983)               ->  Index Scan using contacts_lists_lid_idx on contacts_lists cl  (cost=0.29..288.53 rows=744 width=25) (actual time=0.023..4.953 rows=4646 loops=1)                     Index Cond: (lid = s.lid)                     Filter: ((lstatus)::bpchar = 'a'::bpchar)                     Rows Removed by Filter: 6628   ->  Sort  (cost=2098.83..2121.98 rows=9258 width=41) (actual time=28.988..29.373 rows=9258 loops=1)         Sort Key: (lower((cs.email)::text))         Sort Method: quicksort  Memory: 1598kB         ->  Bitmap Heap Scan on contacts_segments cs  (cost=127.17..1488.89 rows=9258 width=41) (actual time=0.511..7.910 rows=9258 loops=1)               Recheck Cond: (segment_id = 34983)               Heap Blocks: exact=1246               ->  Bitmap Index Scan on contacts_segments_segment_id_idx  (cost=0.00..124.86 rows=9258 width=0) (actual time=0.390..0.391 rows=9258 loops=1)                     Index Cond: (segment_id = 34983) Planning Time: 0.416 ms Execution Time: 51.924 ms(22 rows)Best,-- a.", "msg_date": "Sat, 21 Mar 2020 16:59:07 +0100", "msg_from": "Anders Steinlein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "Anders Steinlein <[email protected]> writes:\n> On Sat, Mar 21, 2020 at 3:26 PM Tom Lane <[email protected]> wrote:\n>> Not sure why you'd not have seen the same effect in your 9.4\n>> installation, but maybe you had citext installed somewhere else?\n\n> The tables segments and contacts_lists are identical on the two instances,\n> i.e. both are using citext (email domain using the citext type) on both 12\n> and 9.4, with the citext extension in the public schema. Is it the\n> lc_collate setting citext cares about? lc_collate=nb_NO.UTF-8 on both 9.4\n> and 12.\n\nI think it depends on both lc_collate and lc_ctype, since basically\nwhat it's doing is lower() on each string and then a strcoll()\ncomparison. The strcoll() part should be pretty much equivalent to\ntext comparisons, though ... or, hmm, maybe not. 
texteq() knows\nit can reduce that to just a memcmp bitwise-equality test, but\ncitext doesn't have that optimization.\n\n> So I don't understand this big difference? Because it does seem like citext\n> is indeed the difference.\n\nIt seems odd to me too. I'm not at all surprised that citext comparison\nis way slower than text, but I am surprised that you don't see that on 9.4\nas well. Is lc_ctype the same in both installs? For that matter, is the\nunderlying libc the same? We have seen large performance discrepancies\nbetween different libc versions in this area.\n\nIf you're interested in digging further, getting a \"perf\" profile while\nrunning the problem query over and over would likely yield some insight\nabout where the time is going.\n\nhttps://wiki.postgresql.org/wiki/Profiling_with_perf\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Mar 2020 15:35:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "On Sat, Mar 21, 2020 at 8:35 PM Tom Lane <[email protected]> wrote:\n\n> Anders Steinlein <[email protected]> writes:\n> > So I don't understand this big difference? Because it does seem like\n> citext\n> > is indeed the difference.\n>\n> It seems odd to me too. I'm not at all surprised that citext comparison\n> is way slower than text, but I am surprised that you don't see that on 9.4\n> as well.\n\n\nIndeed. But also, how come this is part of the planner time? I would think\nthat would be part of the execution time? (Just a detail I'm curious about.)\n\nIs lc_ctype the same in both installs?\n\n\nYes, lc_ctype is also nb_NO.UTF-8 on both installs.\n\n\n> For that matter, is the underlying libc the same? We have seen large\n> performance discrepancies\n> between different libc versions in this area.\n>\n\nThis they most definitely are not. 9.4 was running on an old box, Ubuntu\n12.04, while 12 is on an up-to-date Ubuntu 18.04 LTS. AFAICS, 2.15 on the\n9.4 box and 2.27 on the 12 box.\n\nIf you're interested in digging further, getting a \"perf\" profile while\n> running the problem query over and over would likely yield some insight\n> about where the time is going.\n>\n\nI collected a profile now, but I've never done this before so I'm unsure\nhow to read the report. I'll email you directly with a link to the\nperf.data file, if you would be so kind as to take a quick look. From what\nlittle I think I understand, towlower from libc seems to take up 32% of the\ntotal time, although that by itself doesn't seem to explain almost 2 second\nplanner time vs. 20ms... Should really citext/libc string comparison\n\"issues\" cause this order of magnitude slower planner time?\n\nBest,\n-- a.\n\nOn Sat, Mar 21, 2020 at 8:35 PM Tom Lane <[email protected]> wrote:Anders Steinlein <[email protected]> writes:\n> So I don't understand this big difference? Because it does seem like citext\n> is indeed the difference.\n\nIt seems odd to me too.  I'm not at all surprised that citext comparison\nis way slower than text, but I am surprised that you don't see that on 9.4\nas well. Indeed. But also, how come this is part of the planner time? I would think that would be part of the execution time? (Just a detail I'm curious about.)Is lc_ctype the same in both installs?Yes, lc_ctype is also nb_NO.UTF-8 on both installs. For that matter, is the underlying libc the same?  
We have seen large performance discrepancies\nbetween different libc versions in this area.This they most definitely are not. 9.4 was running on an old box, Ubuntu 12.04, while 12 is on an up-to-date Ubuntu 18.04 LTS. AFAICS, 2.15 on the 9.4 box and 2.27 on the 12 box.\nIf you're interested in digging further, getting a \"perf\" profile while\nrunning the problem query over and over would likely yield some insight\nabout where the time is going.I collected a profile now, but I've never done this before so I'm unsure how to read the report. I'll email you directly with a link to the perf.data file, if you would be so kind as to take a quick look. From what little I think I understand, towlower from libc seems to take up 32% of the total time, although that by itself doesn't seem to explain almost 2 second planner time vs. 20ms... Should really citext/libc string comparison \"issues\" cause this order of magnitude slower planner time?Best,-- a.", "msg_date": "Sat, 21 Mar 2020 22:13:37 +0100", "msg_from": "Anders Steinlein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "Anders Steinlein <[email protected]> writes:\n> On Sat, Mar 21, 2020 at 8:35 PM Tom Lane <[email protected]> wrote:\n>> It seems odd to me too. I'm not at all surprised that citext comparison\n>> is way slower than text, but I am surprised that you don't see that on 9.4\n>> as well.\n\n> Indeed. But also, how come this is part of the planner time? I would think\n> that would be part of the execution time? (Just a detail I'm curious about.)\n\nAs part of estimating the size of a join, the planner will run through all\nthe most-common-values available from pg_stats and see which values from\none table match to which values from the other. If you have a lot of MCVs\n(which'd involve a fairly flat, but not unique, data distribution and a\nlarge stats target setting) and a slow join operator, it's not hard for\nthat to take a lot of time. You might care to look into pg_stats and see\njust how big those arrays are for each of these columns.\n\nBut 9.4 did that too, so we're still at a loss as to why v12 is so much\nslower.\n\n> This they most definitely are not. 9.4 was running on an old box, Ubuntu\n> 12.04, while 12 is on an up-to-date Ubuntu 18.04 LTS. AFAICS, 2.15 on the\n> 9.4 box and 2.27 on the 12 box.\n\nI'm suspicious that the root issue has to do with libc differences,\nbut I haven't any hard data to back that up with.\n\nAnother possibility perhaps is that v12's ANALYZE is collecting a lot\nmore \"common\" values than 9.4 did. Whether it is or not, the advice\nyou already got to ratchet down the stats target would likely be\nhelpful to reduce the planning time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Mar 2020 18:55:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" }, { "msg_contents": "On Sat, Mar 21, 2020 at 11:55 PM Tom Lane <[email protected]> wrote:\n\n> Anders Steinlein <[email protected]> writes:\n> > This they most definitely are not. 9.4 was running on an old box, Ubuntu\n> > 12.04, while 12 is on an up-to-date Ubuntu 18.04 LTS. 
AFAICS, 2.15 on the\n> > 9.4 box and 2.27 on the 12 box.\n>\n> I'm suspicious that the root issue has to do with libc differences,\n> but I haven't any hard data to back that up with.\n>\n> Another possibility perhaps is that v12's ANALYZE is collecting a lot\n> more \"common\" values than 9.4 did. Whether it is or not, the advice\n> you already got to ratchet down the stats target would likely be\n> helpful to reduce the planning time.\n>\n\nYes, indeed, lowering the statistics target to the default 100 decreased\nthe planning time a lot -- to sub-10m! Thanks for the guidance, although\nthe excessive difference between the two boxes/libc versions are\ndisappointing, to say the least.\n\nDo you have any insight into how the Postgres 12 nondeterministic collation\nfeature (with ICU) compares performance-wise in general? Although having a\nmuch lower statistics target \"fixed\" this, I'm concerned joins and sorting\nis slower in general after having uncovered this (we haven't dug into that\nperformance numbers yet), since email (citext) are PKs in a lot of our\ntables. Would changing our email domain using citext to instead be a domain\nover text using a case-insensitive collation be a better choice?\n\nThanks again,\n-- a.\n\nOn Sat, Mar 21, 2020 at 11:55 PM Tom Lane <[email protected]> wrote:Anders Steinlein <[email protected]> writes:\n> This they most definitely are not. 9.4 was running on an old box, Ubuntu\n> 12.04, while 12 is on an up-to-date Ubuntu 18.04 LTS. AFAICS, 2.15 on the\n> 9.4 box and 2.27 on the 12 box.\n\nI'm suspicious that the root issue has to do with libc differences,\nbut I haven't any hard data to back that up with.\n\nAnother possibility perhaps is that v12's ANALYZE is collecting a lot\nmore \"common\" values than 9.4 did.  Whether it is or not, the advice\nyou already got to ratchet down the stats target would likely be\nhelpful to reduce the planning time.Yes, indeed, lowering the statistics target to the default 100 decreased the planning time a lot -- to sub-10m! Thanks for the guidance, although the excessive difference between the two boxes/libc versions are disappointing, to say the least.Do you have any insight into how the Postgres 12 nondeterministic collation feature (with ICU) compares performance-wise in general? Although having a much lower statistics target \"fixed\" this, I'm concerned joins and sorting is slower in general after having uncovered this (we haven't dug into that performance numbers yet), since email (citext) are PKs in a lot of our tables. Would changing our email domain using citext to instead be a domain over text using a case-insensitive collation be a better choice?Thanks again,-- a.", "msg_date": "Tue, 24 Mar 2020 23:55:29 +0100", "msg_from": "Anders Steinlein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow planning time when public schema included (12 vs. 9.4)" } ]
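A compact way to check how much most-common-values data ANALYZE has stored for the join keys, and to lower the target only where it hurts, is sketched below; the schema-qualified table names are taken from the thread and may need adjusting, and contacts_segments.email is assumed to be a citext column like contacts_lists.email.

-- Each MCV entry is compared with the resolved equality operator (citext =
-- citext when the extension's schema is on search_path) while the planner
-- estimates the join size, so large MCV arrays plus a slow operator translate
-- directly into long planning times.
SELECT schemaname, tablename, attname,
       array_length(most_common_freqs, 1) AS n_mcv
  FROM pg_stats
 WHERE tablename IN ('contacts_lists', 'contacts_segments')
   AND attname = 'email';

-- Lower the statistics target for the affected columns only, instead of
-- dropping default_statistics_target globally, then refresh the statistics.
ALTER TABLE eliksir.contacts_lists    ALTER COLUMN email SET STATISTICS 100;
ALTER TABLE eliksir.contacts_segments ALTER COLUMN email SET STATISTICS 100;
ANALYZE eliksir.contacts_lists;
ANALYZE eliksir.contacts_segments;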
[ { "msg_contents": "Hi folks,\n\nWe are using postgreSQL database and I am hitting some limits. I have\npartitions on company_sale_account table\nbased on company name\n\nWe generate a report on accounts matched between the two. Below is the\nquery:\n\nSELECT DISTINCT cpsa1.*\nFROM company_sale_account cpsa1\n JOIN company_sale_account cpsa2 ON cpsa1.sale_account_id =\ncpsa2.sale_account_id\n WHERE cpsa1.company_name = 'company_a'\n AND cpsa2.company_name = 'company_b'\n\n\nWe have setup BTREE indexes on sale_account_id column on both the tables.\nThis worked fine till recently. Now, we have 10 million rows in\ncompany_a partition and 7 million rows in company_b partition. This query\nis taking\nmore than 10 minutes.\n\nBelow is the explain plan output for it:\n\n Buffers: shared hit=20125996 read=47811 dirtied=75, temp read=1333427\nwritten=1333427\n I/O Timings: read=19619.322\n -> Sort (cost=167950986.43..168904299.23 rows=381325118 width=132)\n(actual time=517017.334..603691.048 rows=16854094 loops=1)\n Sort Key: cpsa1.crm_account_id, ((cpsa1.account_name)::text),\n((cpsa1.account_owner)::text), ((cpsa1.account_type)::text),\ncpsa1.is_customer, ((date_part('epoch'::text,\ncpsa1.created_date))::integer),\n((hstore_to_json(cpsa1.custom_crm_fields))::tex (...)\n Sort Method: external merge Disk: 2862656kB\n Buffers: shared hit=20125996 read=47811 dirtied=75, temp\nread=1333427 written=1333427\n I/O Timings: read=19619.322\n -> Nested Loop (cost=0.00..9331268.39 rows=381325118 width=132)\n(actual time=1.680..118698.570 rows=16854094 loops=1)\n Buffers: shared hit=20125977 read=47811 dirtied=75\n I/O Timings: read=19619.322\n -> Append (cost=0.00..100718.94 rows=2033676 width=33)\n(actual time=0.014..1783.243 rows=2033675 loops=1)\n Buffers: shared hit=75298 dirtied=75\n -> Seq Scan on company_sale_account cpsa2\n (cost=0.00..0.00 rows=1 width=516) (actual time=0.001..0.001 rows=0\nloops=1)\n Filter: ((company_name)::text = 'company_b'::text)\n -> Seq Scan on company_sale_account_concur cpsa2_1\n (cost=0.00..100718.94 rows=2033675 width=33) (actual time=0.013..938.145\nrows=2033675 loops=1)\n Filter: ((company_name)::text = 'company_b'::text)\n Buffers: shared hit=75298 dirtied=75\n -> Append (cost=0.00..1.97 rows=23 width=355) (actual\ntime=0.034..0.047 rows=8 loops=2033675)\n Buffers: shared hit=20050679 read=47811\n I/O Timings: read=19619.322\n -> Seq Scan on company_sale_account cpsa1\n (cost=0.00..0.00 rows=1 width=4525) (actual time=0.000..0.000 rows=0\nloops=2033675)\n Filter: (((company_name)::text =\n'company_a'::text) AND ((cpsa2.sale_account_id)::text =\n(sale_account_id)::text))\n -> Index Scan using ix_csa_adp_sale_account on\ncompany_sale_account_adp cpsa1_1 (cost=0.56..1.97 rows=22 width=165)\n(actual time=0.033..0.042 rows=8 loops=2033675)\n Index Cond: ((sale_account_id)::text =\n(cpsa2.sale_account_id)::text)\n Filter: ((company_name)::text = 'company_a'::text)\n Buffers: shared hit=20050679 read=47811\n I/O Timings: read=19619.322\nPlanning time: 30.853 ms\nExecution time: 618218.321 ms\n\n\nDo you have any suggestion on how to tune postgres.\nPlease share your thoughts. It would be a great help to me.\n\nHi folks,We are using postgreSQL database and I am hitting some limits. I have partitions on company_sale_account table based on company nameWe generate a report on accounts matched between the two. 
Below is the query:SELECT DISTINCT cpsa1.*FROM company_sale_account cpsa1   JOIN  company_sale_account cpsa2  ON cpsa1.sale_account_id = cpsa2.sale_account_id  WHERE  cpsa1.company_name = 'company_a'   AND cpsa2.company_name = 'company_b' We have setup BTREE indexes on sale_account_id column on both the tables.This worked fine till recently. Now, we have 10 million rows in company_a partition and 7 million rows in company_b partition. This query is takingmore than 10 minutes. Below is the explain plan output for it:  Buffers: shared hit=20125996 read=47811 dirtied=75, temp read=1333427 written=1333427  I/O Timings: read=19619.322  ->  Sort  (cost=167950986.43..168904299.23 rows=381325118 width=132) (actual time=517017.334..603691.048 rows=16854094 loops=1)        Sort Key: cpsa1.crm_account_id, ((cpsa1.account_name)::text), ((cpsa1.account_owner)::text), ((cpsa1.account_type)::text), cpsa1.is_customer, ((date_part('epoch'::text, cpsa1.created_date))::integer), ((hstore_to_json(cpsa1.custom_crm_fields))::tex (...)        Sort Method: external merge  Disk: 2862656kB        Buffers: shared hit=20125996 read=47811 dirtied=75, temp read=1333427 written=1333427        I/O Timings: read=19619.322        ->  Nested Loop  (cost=0.00..9331268.39 rows=381325118 width=132) (actual time=1.680..118698.570 rows=16854094 loops=1)              Buffers: shared hit=20125977 read=47811 dirtied=75              I/O Timings: read=19619.322              ->  Append  (cost=0.00..100718.94 rows=2033676 width=33) (actual time=0.014..1783.243 rows=2033675 loops=1)                    Buffers: shared hit=75298 dirtied=75                    ->  Seq Scan on company_sale_account cpsa2  (cost=0.00..0.00 rows=1 width=516) (actual time=0.001..0.001 rows=0 loops=1)                          Filter: ((company_name)::text = 'company_b'::text)                    ->  Seq Scan on company_sale_account_concur cpsa2_1  (cost=0.00..100718.94 rows=2033675 width=33) (actual time=0.013..938.145 rows=2033675 loops=1)                          Filter: ((company_name)::text = 'company_b'::text)                          Buffers: shared hit=75298 dirtied=75              ->  Append  (cost=0.00..1.97 rows=23 width=355) (actual time=0.034..0.047 rows=8 loops=2033675)                    Buffers: shared hit=20050679 read=47811                    I/O Timings: read=19619.322                    ->  Seq Scan on company_sale_account cpsa1  (cost=0.00..0.00 rows=1 width=4525) (actual time=0.000..0.000 rows=0 loops=2033675)                          Filter: (((company_name)::text = 'company_a'::text) AND ((cpsa2.sale_account_id)::text = (sale_account_id)::text))                    ->  Index Scan using ix_csa_adp_sale_account on company_sale_account_adp cpsa1_1  (cost=0.56..1.97 rows=22 width=165) (actual time=0.033..0.042 rows=8 loops=2033675)                          Index Cond: ((sale_account_id)::text = (cpsa2.sale_account_id)::text)                          Filter: ((company_name)::text = 'company_a'::text)                          Buffers: shared hit=20050679 read=47811                          I/O Timings: read=19619.322Planning time: 30.853 msExecution time: 618218.321 msDo you have any suggestion on how to tune postgres.Please share your thoughts. It would be a great help to me.", "msg_date": "Sun, 22 Mar 2020 22:52:48 +0530", "msg_from": "daya airody <[email protected]>", "msg_from_op": true, "msg_subject": "JOIN on partitions is very slow" }, { "msg_contents": "Are you able to tweak the query or is that generated by an ORM? 
What\nversion of Postgres? Which configs have you changed from default? How many\npartitions do you have? Is there an index on company name?\n\nAnytime I see distinct keyword, I expect it to be a performance bottleneck\nand wonder about rewriting the query. Even just using group by can be much\nfaster because of how it gets executed.\n\nAre you able to tweak the query or is that generated by an ORM? What version of Postgres? Which configs have you changed from default? How many partitions do you have? Is there an index on company name?Anytime I see distinct keyword, I expect it to be a performance bottleneck and wonder about rewriting the query. Even just using group by can be much faster because of how it gets executed.", "msg_date": "Sun, 22 Mar 2020 12:08:37 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JOIN on partitions is very slow" }, { "msg_contents": "Yes. I can tweak the query. Version of postgres is 9.5.15. I have about 20\npartitions for company_sale_account table.\nI do have an index on company name.\n\nI need to use DISTINCT as i need to remove the duplicates.\n\nThanks for your time.\n\n\n\nOn Sun, Mar 22, 2020 at 11:38 PM Michael Lewis <[email protected]> wrote:\n\n> Are you able to tweak the query or is that generated by an ORM? What\n> version of Postgres? Which configs have you changed from default? How many\n> partitions do you have? Is there an index on company name?\n>\n> Anytime I see distinct keyword, I expect it to be a performance bottleneck\n> and wonder about rewriting the query. Even just using group by can be much\n> faster because of how it gets executed.\n>\n\nYes. I can tweak the query. Version of postgres is 9.5.15. I have about 20 partitions for company_sale_account table. I do have an index on company name.I need to use DISTINCT as i need to remove the duplicates.Thanks for your time.On Sun, Mar 22, 2020 at 11:38 PM Michael Lewis <[email protected]> wrote:Are you able to tweak the query or is that generated by an ORM? What version of Postgres? Which configs have you changed from default? How many partitions do you have? Is there an index on company name?Anytime I see distinct keyword, I expect it to be a performance bottleneck and wonder about rewriting the query. Even just using group by can be much faster because of how it gets executed.", "msg_date": "Mon, 23 Mar 2020 13:10:19 +0530", "msg_from": "daya airody <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JOIN on partitions is very slow" }, { "msg_contents": "On Mon, Mar 23, 2020 at 1:40 AM daya airody <[email protected]> wrote:\n\n> Yes. I can tweak the query. Version of postgres is 9.5.15. I have about 20\n> partitions for company_sale_account table.\n> I do have an index on company name.\n>\n> I need to use DISTINCT as i need to remove the duplicates.\n>\n\nDISTINCT is a sign of improper joins most of the time in my experience.\nOften, just changing to group by is faster\n\nSELECT cpsa1.*\nFROM company_sale_account cpsa1\n JOIN company_sale_account cpsa2 ON cpsa1.sale_account_id =\ncpsa2.sale_account_id\n WHERE cpsa1.company_name = 'company_a'\n AND cpsa2.company_name = 'company_b'\nGROUP BY cpsa1.id; --assuming primary key exists, and I forget if the\nfeature that allows only naming primary key in group by might have been\nintroduced with 9.6\n\nIt should be noted that 9.5 is about 1 year from being EOL'd so it would be\nprudent to update to v11 or 12 when possible.\n\nHow does the below query perform? 
By the way, \"top posting\" (replying with\nall previous email thread below your reply) is discouraged on these forums.\nIt makes the reviewing archived posts more cumbersome. Instead, please\nreply with only your message and copying the relevant parts of prior\nconversation that you are responding to.\n\nSELECT cpsa1.*\nFROM company_sale_account cpsa1\nWHERE cpsa1.company_name = 'company_a' AND EXISTS(SELECT FROM\ncompany_sale_account cpsa2 WHER cpsa1.sale_account_id =\ncpsa2.sale_account_id AND cpsa2.company_name = 'company_b' );\n\nOn Mon, Mar 23, 2020 at 1:40 AM daya airody <[email protected]> wrote:Yes. I can tweak the query. Version of postgres is 9.5.15. I have about 20 partitions for company_sale_account table. I do have an index on company name.I need to use DISTINCT as i need to remove the duplicates.DISTINCT is a sign of improper joins most of the time in my experience. Often, just changing to group by is fasterSELECT cpsa1.*FROM company_sale_account cpsa1   JOIN  company_sale_account cpsa2  ON cpsa1.sale_account_id = cpsa2.sale_account_id WHERE  cpsa1.company_name = 'company_a'   AND cpsa2.company_name = 'company_b'GROUP BY cpsa1.id; --assuming primary key exists, and I forget if the feature that allows only naming primary key in group by might have been introduced with 9.6It should be noted that 9.5 is about 1 year from being EOL'd so it would be prudent to update to v11 or 12 when possible.How does the below query perform? By the way, \"top posting\" (replying with all previous email thread below your reply) is discouraged on these forums. It makes the reviewing archived posts more cumbersome. Instead, please reply with only your message and copying the relevant parts of prior conversation that you are responding to. SELECT cpsa1.*FROM company_sale_account cpsa1  WHERE cpsa1.company_name = 'company_a' AND EXISTS(SELECT FROM  company_sale_account cpsa2 WHER cpsa1.sale_account_id = cpsa2.sale_account_id AND cpsa2.company_name = 'company_b' );", "msg_date": "Mon, 23 Mar 2020 10:16:31 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JOIN on partitions is very slow" }, { "msg_contents": "Michael Lewis schrieb am 23.03.2020 um 17:16:\n> Yes. I can tweak the query. Version of postgres is 9.5.15. I have\n> about 20 partitions for company_sale_account table. I do have an\n> index on company name.\n>\n> I need to use DISTINCT as i need to remove the duplicates.\n>\n>\n> DISTINCT is a sign of improper joins most of the time in my\n> experience. Often, just changing to group by is faster\n\nAs none of the columns of the joined table are used, most probably\nthis should be re-written as an EXISTS condition.\nThen neither GROUP BY nor DISTINCT is needed.\n\n\n\n\n\n", "msg_date": "Mon, 23 Mar 2020 17:33:34 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JOIN on partitions is very slow" }, { "msg_contents": ">\n>\n>\n> As none of the columns of the joined table are used, most probably\n> this should be re-written as an EXISTS condition.\n> Then neither GROUP BY nor DISTINCT is needed.\n>\n>\nI need the columns from joined tables. To keep it simple, I didn't include\nthem in the query. EXISTS solution won't work for me.\n\n\nAs none of the columns of the joined table are used, most probably\nthis should be re-written as an EXISTS condition.\nThen neither GROUP BY nor DISTINCT is needed.\n I need the columns from joined tables. To keep it simple, I didn't include them in the query. 
EXISTS solution won't work for me.", "msg_date": "Fri, 27 Mar 2020 20:54:54 +0530", "msg_from": "daya airody <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JOIN on partitions is very slow" } ]
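Two rewrites of the query from this thread, as a sketch; only the columns shown in the thread are referenced, so the select lists may need adjusting. The first is equivalent to the posted SELECT DISTINCT cpsa1.* and avoids de-duplicating the wide result; the second covers the case raised at the end of the thread, where columns from the company_b side are needed as well, by de-duplicating that side (one arbitrary row per sale_account_id) before the join.

-- Semi-join: never multiplies cpsa1 rows, so no DISTINCT is needed.
SELECT cpsa1.*
  FROM company_sale_account cpsa1
 WHERE cpsa1.company_name = 'company_a'
   AND EXISTS (SELECT 1
                 FROM company_sale_account cpsa2
                WHERE cpsa2.company_name = 'company_b'
                  AND cpsa2.sale_account_id = cpsa1.sale_account_id);

-- When company_b columns are needed too, deduplicate that side first.
SELECT cpsa1.*, b.*
  FROM company_sale_account cpsa1
  JOIN (SELECT DISTINCT ON (sale_account_id) *
          FROM company_sale_account
         WHERE company_name = 'company_b') b
    ON b.sale_account_id = cpsa1.sale_account_id
 WHERE cpsa1.company_name = 'company_a';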
[ { "msg_contents": "I have noticed that my write/update performance starts to dramatically\nreduce after about 10 million rows on my hardware. The reason for the\nslowdown is the index updates on every write/update.\n\nThe solution would be partitioning? One of my tables will have more\nthan 1 billion rows of data, so I would have to create about 100\npartitions for that table. Is the a practical limit to the amount of\npartitions I can have with Postgresql 12?\n\n\n", "msg_date": "Sun, 22 Mar 2020 21:22:50 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Partitions to improve write/update speed for tables with indexes?" }, { "msg_contents": "On Sun, Mar 22, 2020 at 09:22:50PM -0400, Arya F wrote:\n> I have noticed that my write/update performance starts to dramatically\n> reduce after about 10 million rows on my hardware. The reason for the\n> slowdown is the index updates on every write/update.\n\nIt's commonly true that the indexes need to fit entirely in shared_buffers for\ngood write performance. I gave some suggestions here:\nhttps://www.postgresql.org/message-id/20200223101209.GU31889%40telsasoft.com\n\n> The solution would be partitioning? One of my tables will have more\n> than 1 billion rows of data, so I would have to create about 100\n> partitions for that table. Is the a practical limit to the amount of\n> partitions I can have with Postgresql 12?\n\nThe recommendation since pg12 is to use at most a \"few thousand\" partitions, so\nfor the moment you'd be well within the recommendation.\nhttps://www.postgresql.org/docs/12/ddl-partitioning.html\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Mar 2020 20:29:04 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitions to improve write/update speed for tables with indexes?" } ]
[ { "msg_contents": "Hello All,\n\nWhile doing some tests with hash partitioning behavior in PG11 and 12, I\nhave found that PG11 is not performing partition pruning with DELETEs\n(explain analyze returned >2000 lines). I then ran the same test in PG12\nand recreated the objects using the same DDL, and it worked\n\nHere are the tests:\n\n*1) PG11 Hash Partitioning, no partition pruning:*\npostgres=> \\timing\nTiming is on.\npostgres=> select version();\n version\n---------------------------------------------------------------------------------------------------------\nPostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-11), 64-bit\n(1 row)\n\nTime: 33.325 ms\n\npostgres=> create table hp ( foo text ) partition by hash (foo);\nCREATE TABLE\nTime: 40.810 ms\npostgres=> create table hp_0 partition of hp for values with (modulus 3,\nremainder 0);\nCREATE TABLE\nTime: 43.990 ms\npostgres=> create table hp_1 partition of hp for values with (modulus 3,\nremainder 1);\nCREATE TABLE\nTime: 43.314 ms\npostgres=> create table hp_2 partition of hp for values with (modulus 3,\nremainder 2);\nCREATE TABLE\nTime: 43.447 ms\npostgres=> insert into hp values ('shayon');\nINSERT 0 1\nTime: 42.975 ms\npostgres=> select * from hp;\n foo\n--------\nshayon\n(1 row)\n\nTime: 40.210 ms\npostgres=> select * from hp_0;\n foo\n--------\nshayon\n(1 row)\n\nTime: 38.898 ms\npostgres=> insert into hp values ('shayon1'), ('shayon2'), ('shayon3');\nINSERT 0 3\nTime: 40.359 ms\npostgres=> select * from hp_0;\n foo\n--------\nshayon\n(1 row)\n\nTime: 39.105 ms\npostgres=> select * from hp_1;\n foo\n---------\nshayon2\n(1 row)\n\nTime: 37.292 ms\npostgres=> select * from hp_2;\n foo\n---------\nshayon1\nshayon3\n(2 rows)\n\nTime: 38.604 ms\npostgres=> explain select * from hp where foo = 'shayon2';\n QUERY PLAN\n------------------------------------------------------------\nAppend (cost=0.00..27.04 rows=7 width=32)\n -> Seq Scan on hp_1 (cost=0.00..27.00 rows=7 width=32)\n Filter: (foo = 'shayon2'::text)\n(3 rows)\n\nTime: 39.581 ms\npostgres=> explain delete from hp where foo = 'shayon2';\n QUERY PLAN\n-----------------------------------------------------------\nDelete on hp (cost=0.00..81.00 rows=21 width=6)\n Delete on hp_0\n Delete on hp_1\n Delete on hp_2\n\n\n\n\n\n* -> Seq Scan on hp_0 (cost=0.00..27.00 rows=7 width=6) Filter:\n(foo = 'shayon2'::text) -> Seq Scan on hp_1 (cost=0.00..27.00 rows=7\nwidth=6) Filter: (foo = 'shayon2'::text) -> Seq Scan on hp_2\n (cost=0.00..27.00 rows=7 width=6) Filter: (foo = 'shayon2'::text)*\n(10 rows)\n\nTime: 38.749 ms\n\n2) *PG12 hash prune, pruning works: *\ndev=> \\timing\nTiming is on.\ndev=> select version();\n version\n--------------------------------------------------------------------------------------------------------\nPostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3\n20140911 (Red Hat 4.8.3-9), 64-bit\n(1 row)\n\nTime: 29.786 ms\ndev=> CREATE TABLE hp ( foo text ) PARTITION BY HASH (foo);\nCREATE TABLE\nTime: 30.680 ms\ndev=> CREATE TABLE hp_0 PARTITION OF hp FOR VALUES WITH (MODULUS 3,\nREMAINDER 0);\nCREATE TABLE\nTime: 122.791 ms\ndev=> CREATE TABLE hp_1 PARTITION OF hp FOR VALUES WITH (MODULUS 3,\nREMAINDER 1);\nCREATE TABLE\nTime: 32.053 ms\ndev=> CREATE TABLE hp_2 PARTITION OF hp FOR VALUES WITH (MODULUS 3,\nREMAINDER 2);\nCREATE TABLE\nTime: 31.839 ms\ndev=> insert into hp values ('shayon1'), ('shayon2'), ('shayon3'),\n('shayon');\nINSERT 0 4\nTime: 27.887 ms\ndev=> select * from hp_1;\n 
foo\n---------\nshayon2\n(1 row)\n\nTime: 27.697 ms\ndev=> select * from hp_2;\n foo\n---------\nshayon1\nshayon3\n(2 rows)\n\nTime: 27.845 ms\ndev=> select * from hp_0;\n foo\n--------\nshayon\n(1 row)\n\nTime: 27.679 ms\ndev=> explain delete from hp where foo = 'shayon2';\n QUERY PLAN\n-----------------------------------------------------------\nDelete on hp (cost=0.00..27.00 rows=7 width=6)\n Delete on hp_1\n ->\n*Seq Scan on hp_1 (cost=0.00..27.00 rows=7 width=6) Filter: (foo =\n'shayon2'::text)*\n(4 rows)\n\nTime: 30.490 ms\n\nIs this a bug, somewhat related to MergeAppend?\nhttps://github.com/postgres/postgres/commit/5220bb7533f9891b1e071da6461d5c387e8f7b09\n\nIf it is, anyone know if we have a workaround for DELETEs to use hash\npartitions in PG11?\n\nThanks,\nShayon\n\nHello All,While doing some tests with hash partitioning behavior in PG11 and 12, I have found that PG11 is not performing partition pruning with DELETEs (explain analyze returned >2000 lines). I then ran the same test in PG12 and recreated the objects using the same DDL, and it worked Here are the tests: 1) PG11 Hash Partitioning, no partition pruning:postgres=> \\timingTiming is on.postgres=> select version();                                                 version---------------------------------------------------------------------------------------------------------PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit(1 row) Time: 33.325 ms postgres=> create table hp ( foo text ) partition by hash (foo);CREATE TABLETime: 40.810 mspostgres=> create table hp_0 partition of hp for values with (modulus 3, remainder 0);CREATE TABLETime: 43.990 mspostgres=> create table hp_1 partition of hp for values with (modulus 3, remainder 1);CREATE TABLETime: 43.314 mspostgres=> create table hp_2 partition of hp for values with (modulus 3, remainder 2);CREATE TABLETime: 43.447 mspostgres=> insert into hp values ('shayon');INSERT 0 1Time: 42.975 mspostgres=> select * from hp;  foo--------shayon(1 row) Time: 40.210 mspostgres=> select * from hp_0;  foo--------shayon(1 row) Time: 38.898 mspostgres=> insert into hp values ('shayon1'), ('shayon2'), ('shayon3');INSERT 0 3Time: 40.359 mspostgres=> select * from hp_0;  foo--------shayon(1 row) Time: 39.105 mspostgres=> select * from hp_1;   foo---------shayon2(1 row) Time: 37.292 mspostgres=> select * from hp_2;   foo---------shayon1shayon3(2 rows) Time: 38.604 mspostgres=> explain select * from hp where foo = 'shayon2';                         QUERY PLAN------------------------------------------------------------Append  (cost=0.00..27.04 rows=7 width=32)   ->  Seq Scan on hp_1  (cost=0.00..27.00 rows=7 width=32)         Filter: (foo = 'shayon2'::text)(3 rows) Time: 39.581 mspostgres=> explain delete from hp where foo = 'shayon2';                        QUERY PLAN-----------------------------------------------------------Delete on hp  (cost=0.00..81.00 rows=21 width=6)   Delete on hp_0   Delete on hp_1   Delete on hp_2   ->  Seq Scan on hp_0  (cost=0.00..27.00 rows=7 width=6)         Filter: (foo = 'shayon2'::text)   ->  Seq Scan on hp_1  (cost=0.00..27.00 rows=7 width=6)         Filter: (foo = 'shayon2'::text)   ->  Seq Scan on hp_2  (cost=0.00..27.00 rows=7 width=6)         Filter: (foo = 'shayon2'::text)(10 rows) Time: 38.749 ms2) PG12 hash prune, pruning works:            dev=> \\timingTiming is on.dev=> select version();                                                
version--------------------------------------------------------------------------------------------------------PostgreSQL 12.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit(1 row) Time: 29.786 msdev=> CREATE TABLE hp ( foo text ) PARTITION BY HASH (foo);CREATE TABLETime: 30.680 msdev=> CREATE TABLE hp_0 PARTITION OF hp FOR VALUES WITH (MODULUS 3, REMAINDER 0);CREATE TABLETime: 122.791 msdev=> CREATE TABLE hp_1 PARTITION OF hp FOR VALUES WITH (MODULUS 3, REMAINDER 1);CREATE TABLETime: 32.053 msdev=> CREATE TABLE hp_2 PARTITION OF hp FOR VALUES WITH (MODULUS 3, REMAINDER 2);CREATE TABLETime: 31.839 msdev=> insert into hp values ('shayon1'), ('shayon2'), ('shayon3'), ('shayon');INSERT 0 4Time: 27.887 msdev=> select * from hp_1;   foo---------shayon2(1 row) Time: 27.697 msdev=> select * from hp_2;   foo---------shayon1shayon3(2 rows) Time: 27.845 msdev=> select * from hp_0;  foo--------shayon(1 row) Time: 27.679 msdev=> explain delete from hp where foo = 'shayon2';                        QUERY PLAN-----------------------------------------------------------Delete on hp  (cost=0.00..27.00 rows=7 width=6)   Delete on hp_1   ->  Seq Scan on hp_1  (cost=0.00..27.00 rows=7 width=6)         Filter: (foo = 'shayon2'::text)(4 rows) Time: 30.490 msIs this a bug, somewhat related to MergeAppend? https://github.com/postgres/postgres/commit/5220bb7533f9891b1e071da6461d5c387e8f7b09If it is, anyone know if we have a workaround for DELETEs to use hash partitions in PG11?Thanks,Shayon", "msg_date": "Sun, 22 Mar 2020 23:45:53 -0400", "msg_from": "Ronnie S <[email protected]>", "msg_from_op": true, "msg_subject": "Partition Pruning (Hash Partitions) Support for DELETEs in PostgreSQL\n 11 and 12" }, { "msg_contents": "On Sun, Mar 22, 2020 at 11:45:53PM -0400, Ronnie S wrote:\n> Hello All,\n> \n> While doing some tests with hash partitioning behavior in PG11 and 12, I\n> have found that PG11 is not performing partition pruning with DELETEs\n> (explain analyze returned >2000 lines). I then ran the same test in PG12\n> and recreated the objects using the same DDL, and it worked\n\n> Is this a bug, somewhat related to MergeAppend?\n> https://github.com/postgres/postgres/commit/5220bb7533f9891b1e071da6461d5c387e8f7b09\n\n> If it is, anyone know if we have a workaround for DELETEs to use hash\n> partitions in PG11?\n\nI think due to this commit to pg12:\nhttps://commitfest.postgresql.org/22/1778/\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=428b260f87e8861ba8e58807b69d433db491c4f4\n...\nhttps://www.postgresql.org/message-id/5c83dbca-12b5-1acf-0e85-58299e464a26%40lab.ntt.co.jp\nhttps://www.postgresql.org/message-id/4f049572-9440-3c99-afa1-f7ca7f38fe80%40lab.ntt.co.jp\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Mar 2020 23:10:41 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partition Pruning (Hash Partitions) Support for DELETEs in\n PostgreSQL 11 and 12" }, { "msg_contents": "Thanks!\n\nOn Mon, Mar 23, 2020 at 12:10 AM Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Mar 22, 2020 at 11:45:53PM -0400, Ronnie S wrote:\n> > Hello All,\n> >\n> > While doing some tests with hash partitioning behavior in PG11 and 12, I\n> > have found that PG11 is not performing partition pruning with DELETEs\n> > (explain analyze returned >2000 lines). 
I then ran the same test in PG12\n> > and recreated the objects using the same DDL, and it worked\n>\n> > Is this a bug, somewhat related to MergeAppend?\n> >\n> https://github.com/postgres/postgres/commit/5220bb7533f9891b1e071da6461d5c387e8f7b09\n>\n> > If it is, anyone know if we have a workaround for DELETEs to use hash\n> > partitions in PG11?\n>\n> I think due to this commit to pg12:\n> https://commitfest.postgresql.org/22/1778/\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=428b260f87e8861ba8e58807b69d433db491c4f4\n> ...\n>\n> https://www.postgresql.org/message-id/5c83dbca-12b5-1acf-0e85-58299e464a26%40lab.ntt.co.jp\n>\n> https://www.postgresql.org/message-id/4f049572-9440-3c99-afa1-f7ca7f38fe80%40lab.ntt.co.jp\n>\n> --\n> Justin\n>\n\nThanks! On Mon, Mar 23, 2020 at 12:10 AM Justin Pryzby <[email protected]> wrote:On Sun, Mar 22, 2020 at 11:45:53PM -0400, Ronnie S wrote:\n> Hello All,\n> \n> While doing some tests with hash partitioning behavior in PG11 and 12, I\n> have found that PG11 is not performing partition pruning with DELETEs\n> (explain analyze returned >2000 lines). I then ran the same test in PG12\n> and recreated the objects using the same DDL, and it worked\n\n> Is this a bug, somewhat related to MergeAppend?\n> https://github.com/postgres/postgres/commit/5220bb7533f9891b1e071da6461d5c387e8f7b09\n\n> If it is, anyone know if we have a workaround for DELETEs to use hash\n> partitions in PG11?\n\nI think due to this commit to pg12:\nhttps://commitfest.postgresql.org/22/1778/\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=428b260f87e8861ba8e58807b69d433db491c4f4\n...\nhttps://www.postgresql.org/message-id/5c83dbca-12b5-1acf-0e85-58299e464a26%40lab.ntt.co.jp\nhttps://www.postgresql.org/message-id/4f049572-9440-3c99-afa1-f7ca7f38fe80%40lab.ntt.co.jp\n\n-- \nJustin", "msg_date": "Mon, 23 Mar 2020 10:18:27 -0400", "msg_from": "Ronnie S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partition Pruning (Hash Partitions) Support for DELETEs in\n PostgreSQL 11 and 12" } ]
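On 11, where the DELETE plan above still scans every partition, one workaround is to let a SELECT (which 11 does prune for hash partitions, as the earlier EXPLAIN in this thread shows) identify the partition holding the key, and then aim the DELETE at that child table directly; a sketch using the table names from the thread.

-- Pruned to a single partition on PG11 as well:
SELECT DISTINCT tableoid::regclass AS child
  FROM hp
 WHERE foo = 'shayon2';   -- returns hp_1 in this example

-- Delete against the child table, so only that partition is scanned:
DELETE FROM hp_1 WHERE foo = 'shayon2';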
[ { "msg_contents": "Hi,\n\nI am trying to generate some random data using the random() function.\n\nHowever, I am getting the same result over mulitiple rows. This is a \nsample of the SQL I am using:\n\nselect (select string_agg(random()::text,';')\n           from pg_catalog.generate_series(1,3,1) )\n   from generate_series(1,10,1)\n\nAnd I am getting something like:\n\n|string_agg |\n+--------------------------------------------------------------+\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n\nIf this is the expected output, is there a way to always generate random \nnumbers?\n\n\n\n", "msg_date": "Tue, 24 Mar 2020 15:10:16 -0300", "msg_from": "Luis Roberto Weck <[email protected]>", "msg_from_op": true, "msg_subject": "Random function" }, { "msg_contents": "Luis Roberto Weck <[email protected]> writes:\n> I am trying to generate some random data using the random() function.\n\n> However, I am getting the same result over mulitiple rows. This is a \n> sample of the SQL I am using:\n\n> select (select string_agg(random()::text,';')\n> from pg_catalog.generate_series(1,3,1) )\n> from generate_series(1,10,1)\n\nThe sub-select is independent of the outer select so it's only computed\nonce, and then you get ten copies of that result. Restructuring the\nquery, or inserting an artificial dependency on the outer select's data,\nwould help.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Mar 2020 14:33:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random function" }, { "msg_contents": "How is this a performance related question?\n\nOn Tue, Mar 24, 2020 at 11:10 AM Luis Roberto Weck <\[email protected]> wrote:\n\n> However, I am getting the same result over mulitiple rows. 
This is a\n> sample of the SQL I am using:\n>\n> select (select string_agg(random()::text,';')\n> from pg_catalog.generate_series(1,3,1) )\n> from generate_series(1,10,1)\n>\n> And I am getting something like:\n>\n> |string_agg |\n> +--------------------------------------------------------------+\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n> |0.243969671428203583;0.692578794434666634;0.291524752043187618|\n>\n> If this is the expected output,\n\n\nYes, you've asked it to compute a value, assign it to a column, then\ngenerate 10 rows of that value.\n\nis there a way to always generate random\n> numbers?\n>\n\nDon't use a scalar subquery in the main target list.\n\nOne possible answer:\n\nselect format('%s;%s;%s', random(), random(), random()) from\ngenerate_series(1, 10)\n\nDavid J.\n\nHow is this a performance related question?On Tue, Mar 24, 2020 at 11:10 AM Luis Roberto Weck <[email protected]> wrote:However, I am getting the same result over mulitiple rows. This is a \nsample of the SQL I am using:\n\nselect (select string_agg(random()::text,';')\n           from pg_catalog.generate_series(1,3,1) )\n   from generate_series(1,10,1)\n\nAnd I am getting something like:\n\n|string_agg |\n+--------------------------------------------------------------+\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n|0.243969671428203583;0.692578794434666634;0.291524752043187618|\n\nIf this is the expected output,Yes, you've asked it to compute a value, assign it to a column, then generate 10 rows of that value. is there a way to always generate random \nnumbers?Don't use a scalar subquery in the main target list.One possible answer:select format('%s;%s;%s', random(), random(), random()) from generate_series(1, 10)David J.", "msg_date": "Tue, 24 Mar 2020 11:33:26 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random function" } ]
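A note on the thread above, to make the "artificial dependency" that Tom Lane mentions concrete: a scalar subquery is executed once per outer row as soon as it references the outer query, so a redundant predicate is enough to get fresh random() values on every row. A minimal sketch; the g.a = g.a condition exists only to correlate the subquery with the outer row and is not from the original posts:

    select (select string_agg(random()::text, ';')
              from generate_series(1, 3)
             where g.a = g.a)   -- redundant, but forces per-row re-evaluation
      from generate_series(1, 10) as g(a);

Each of the 10 output rows should then carry three freshly generated random values.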
[ { "msg_contents": "Hello list,\n\nI'm trying to clean up a database with millions of records of \nuseless-but-don't-remove-just-in-case data. This database has all tables \nin public schema so I've created a new schema \"old_data\" to move there \nall this data. I have several tables with 20million of records or so \nthat I've managed to clean up relatively fast without special effort \n(not having to drop indexes or constraints) What I've made with these \ntables is easy as these ones are going to be emptied (I have to keep \ntables) so I only have to insert data into old_data.new_table and \ntruncate cascade.\n\nBut also I'm cleaning tables with 150million records where I'm going to \nremove 60% of existing data and after a few tests I'm not sure what's \nthe best approach as all seem to take similar time to run. These tables \nare grouped in 4 tables group with master, detail, master_history, \ndetail_history structure. None of the tables have primary key nor \nforeign key or any constraint but the sequence used for what should be \nthe PK column, though this column is not defined as PK.\n\nI've decided to delete from the last one in chunks (10 days of data per \nchunk but it coud be any other quantity) so I've created a function.  \nI've tested it with indexes (in master_hist for filtering data and in \ndetail_hist for the fk and pk), without indexes, after analyzing table, \nand no matter what I always end up with more or less the same execution \ntime. I can afford the time it's getting to run but I'd like to know if \nit's there a better way to do this. I'm testing on version 9.2 BUT \nproduction server is 8.4 (legacy application, supposed to be in at least \n9.2 but recently discovered it was 8.4, planning upgrade but not now). \nConfig parameters are default ones.\n\nTable definition:\n\nCREATE TABLE master (\n\n   id integer serial NOT NULL,\n   device_id int4 NOT NULL,\n   col1 int4 NULL DEFAULT 0,\n   data_date bpchar(17) NULL, -- field to filter data\n   data_file_date bpchar(14) NULL\n); -- 9 of 20 records to be removed\n\nCREATE TABLE detail (\n   id integer serial NOT NULL,\n   parent_id int4 NOT NULL,\n   col1 float8 NULL,\n   col2 int4 NOT NULL\n); -- 2304 of 5120 records to be removed\n\nCREATE TABLE master_history (\n   id integer serial NOT NULL,\n   device_id int4 NOT NULL,\n   col1 int4 NULL DEFAULT 0,\n   data_date bpchar(17) NULL, -- field to filter data\n   data_file_date bpchar(14) NULL\n);  --355687 of 586999 records to be removed\n\nCREATE TABLE detail_history (\n   id integer serial NOT NULL,\n   parent_id int4 NOT NULL,\n   col1 float8 NULL,\n   col2 int4 NOT NULL\n); -- 91055872 of  150.271.744 records to be removed\n\n\nAnd the function:\n\nCREATE or replace FUNCTION delete_test() RETURNS integer AS $$\nDECLARE\n     _begin_date date;\n     _end_date date := '2019-08-01';\n     _begin_exec timestamp := clock_timestamp();\n     _end_exec timestamp ;\n     _begin_exec_partial timestamp;\n     _end_exec_partial timestamp;\n     _time double precision;\n     _num_regs integer;\nBEGIN\n     for _begin_date in (select '2018-05-01'::date + s.a * '10 \ndays'::interval from (select generate_series(0,1000) as a) as s)\n     loop\n         if (_begin_date > _end_date) then\n             raise log 'STOP!!!!!';\n             exit;\n         end if;\n         raise log 'Date %', _begin_date;\n         _begin_exec_partial := clock_timestamp();\n         delete from public.detail_history t1\n           where exists\n             (select 1 from public.master_history t2\n  
             where t2.id = t1.parent_id\n                 and t2.data_date >= rpad(to_char(_begin_date, \n'YYYYMMDD'), 17, '0')\n                 and t2.data_date < rpad(to_char((_begin_date + interval \n'10 days'), 'YYYYMMDD'), 17, '0'));\n         GET DIAGNOSTICS _num_regs = ROW_COUNT;\n         _end_exec_partial := clock_timestamp();\n         _time := 1000 * ( extract(epoch from _end_exec_partial) - \nextract(epoch from _begin_exec_partial) );\n         raise log 'Records removed % in % ms', _num_regs, _time;\n\n     end loop;\n\n     _end_exec := clock_timestamp();\n     _time := 1000 * ( extract(epoch from _end_exec) - extract(epoch \nfrom _begin_exec) );\n     raise log 'Total time: %', _time;\n     return 0;\nEND;\n$$ LANGUAGE plpgsql;\n\n\nDelete execution plan in 8.4 is:\n\ntest_eka=# explain delete from public.detail_hist t1\ntest_eka-#   where exists\ntest_eka-#     (select 1 from public.master_hist t2\ntest_eka(#       where t2.id = t1.parent_id\ntest_eka(#         and t2.data_date >= '20180501000000000000000'\ntest_eka(#         and t2.data_date < '20190101000000000000000');\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Hash Join  (cost=33431.46..5890182.88 rows=156649104 width=6)\n    Hash Cond: (t1.parent_id = t2.id)\n    ->  Seq Scan on detail_hist t1  (cost=0.00..2564256.04 \nrows=156649104 width=10)\n    ->  Hash  (cost=30922.13..30922.13 rows=152906 width=4)\n          ->  Unique  (cost=30157.60..30922.13 rows=152906 width=4)\n                ->  Sort  (cost=30157.60..30539.87 rows=152906 width=4)\n                      Sort Key: t2.id\n                      ->  Seq Scan on master_hist t2 \n(cost=0.00..14897.65 rows=152906 width=4)\n                            Filter: ((data_date >= \n'20180501000000000000000'::bpchar) AND (data_date < \n'20190101000000000000000'::bpchar))\n\n\nAfter PK-FK creation (with IX over FK)\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Hash Join  (cost=26678.41..5883424.77 rows=156648960 width=6)\n    Hash Cond: (t1.id_param_espec_este = t2.id_param_espec_este_historico)\n    ->  Seq Scan on param_espec_este_datos_historico_tbl t1 \n(cost=0.00..2564254.60 rows=156648960 width=10)\n    ->  Hash  (cost=24169.09..24169.09 rows=152906 width=4)\n          ->  Unique  (cost=23404.56..24169.09 rows=152906 width=4)\n                ->  Sort  (cost=23404.56..23786.82 rows=152906 width=4)\n                      Sort Key: t2.id_param_espec_este_historico\n                      ->  Index Scan using fecha_gps_pe_este_hist_idx on \nparam_espec_este_historico_tbl t2 (cost=0.00..8144.60 rows=152906 width=4)\n                            Index Cond: \n((fecha_gps_parametros_espectrales >= '20180501000000000000000'::bpchar) \nAND (fecha_gps_parametros_espectrales < '20190101000000000000000'::bpchar))\n\n\nAny ideas are welcome.\n\nKind regards,\n\nEkaterina.\n\n\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:13:49 +0100", "msg_from": "Ekaterina Amez <[email protected]>", "msg_from_op": true, "msg_subject": "Best way to delete big amount of records from big table" }, { "msg_contents": "If you can afford the time, I am not sure the reason for the question. 
Just\nrun it and be done with it, yes?\n\nA couple of thoughts-\n1) That is a big big transaction if you are doing all the cleanup in a\nsingle function call. Will this be a production system that is still online\nfor this archiving? Having a plpgsql function that encapsulates the work\nseems fine, but I would limit the work to a month at a time or something\nand call the function repeatedly. Get the min month where records exist\nstill, delete everything matching that, return. Rinse, repeat.\n2) If you are deleting/moving most of the table (91 of 150 million),\nconsider moving only the records you are keeping to a new table, renaming\nold table, and renaming new table back to original name. Then you can do\nwhat you want to shift the data in the old table and delete it.\n\nIf you can afford the time, I am not sure the reason for the question. Just run it and be done with it, yes?A couple of thoughts-1) That is a big big transaction if you are doing all the cleanup in a single function call. Will this be a production system that is still online for this archiving? Having a plpgsql function that encapsulates the work seems fine, but I would limit the work to a month at a time or something and call the function repeatedly. Get the min month where records exist still, delete everything matching that, return. Rinse, repeat.2) If you are deleting/moving most of the table (91 of 150 million), consider moving only the records you are keeping to a new table, renaming old table, and renaming new table back to original name. Then you can do what you want to shift the data in the old table and delete it.", "msg_date": "Fri, 27 Mar 2020 08:41:04 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "On Fri, 2020-03-27 at 15:13 +0100, Ekaterina Amez wrote:\n> I'm trying to clean up a database with millions of records of \n> useless-but-don't-remove-just-in-case data. [...]\n> \n> But also I'm cleaning tables with 150million records where I'm going to \n> remove 60% of existing data and after a few tests I'm not sure what's \n> the best approach as all seem to take similar time to run. These tables \n> are grouped in 4 tables group with master, detail, master_history, \n> detail_history structure. None of the tables have primary key nor \n> foreign key or any constraint but the sequence used for what should be \n> the PK column, though this column is not defined as PK.\n\nYou should define primary and foreign keys if you can, but I guess\nI don't have to tell you that.\n\n> I've decided to delete from the last one in chunks (10 days of data per \n> chunk but it coud be any other quantity) so I've created a function. \n> I've tested it with indexes (in master_hist for filtering data and in \n> detail_hist for the fk and pk), without indexes, after analyzing table, \n> and no matter what I always end up with more or less the same execution \n> time. 
I can afford the time it's getting to run but I'd like to know if \n> it's there a better way to do this.\n\nThere is no need to delete in batches unless you have a need to keep\ntransactions short (danger of deadlock because the data are still\nmodified, or you cannot afford to block autovacuum that long).\n\nIf you can drop the indexes while you do it (downtime), go for it.\nPerhaps there is a way to use partial indexes that exclude all the\ndata that you have to delete, then work could go on as normal.\n\n> I'm testing on version 9.2 BUT \n> production server is 8.4 (legacy application, supposed to be in at least \n> 9.2 but recently discovered it was 8.4, planning upgrade but not now). \n> Config parameters are default ones.\n\nNow that is a seriously bad idea. You should test on the same version\nas you have running in production. And you should insist in an upgrade.\nPeople who insist in running ancient software often insist in ancient\nhardware as well, and both is a good way to get data corruption.\nIf the system blows up, they are going to blame you.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:46:57 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "Sorry, I sent my response only to you, I'm sending it again to the group in\na minute...\n\nEl vie., 27 mar. 2020 a las 15:41, Michael Lewis (<[email protected]>)\nescribió:\n\n> If you can afford the time, I am not sure the reason for the question.\n> Just run it and be done with it, yes?\n>\n> A couple of thoughts-\n> 1) That is a big big transaction if you are doing all the cleanup in a\n> single function call. Will this be a production system that is still online\n> for this archiving? Having a plpgsql function that encapsulates the work\n> seems fine, but I would limit the work to a month at a time or something\n> and call the function repeatedly. Get the min month where records exist\n> still, delete everything matching that, return. Rinse, repeat.\n> 2) If you are deleting/moving most of the table (91 of 150 million),\n> consider moving only the records you are keeping to a new table, renaming\n> old table, and renaming new table back to original name. Then you can do\n> what you want to shift the data in the old table and delete it.\n>\n\nSorry, I sent my response only to you, I'm sending it again to the group in a minute...El vie., 27 mar. 2020 a las 15:41, Michael Lewis (<[email protected]>) escribió:If you can afford the time, I am not sure the reason for the question. Just run it and be done with it, yes?A couple of thoughts-1) That is a big big transaction if you are doing all the cleanup in a single function call. Will this be a production system that is still online for this archiving? Having a plpgsql function that encapsulates the work seems fine, but I would limit the work to a month at a time or something and call the function repeatedly. Get the min month where records exist still, delete everything matching that, return. Rinse, repeat.2) If you are deleting/moving most of the table (91 of 150 million), consider moving only the records you are keeping to a new table, renaming old table, and renaming new table back to original name. 
Then you can do what you want to shift the data in the old table and delete it.", "msg_date": "Fri, 27 Mar 2020 15:55:27 +0100", "msg_from": "Ekaterina Amez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "Hi Michael,\n\nEl vie., 27 mar. 2020 a las 15:41, Michael Lewis (<[email protected]>)\nescribió:\n\n> If you can afford the time, I am not sure the reason for the question.\n> Just run it and be done with it, yes?\n>\n\nI've been working with other RDBMS all of my life and I'm quite new to PG\nworld, and I'm learning to do things when I need to do them so I'm trying\nto learn them in the right way :D\nAlso, for what I'm seeing in other projects, this is going to be a problem\nin most of them (if it's not yet a problem), and it's going to be me the\none that solves it so again I'm in the path of learning to do this kind of\nthings in the right way.\n\n\n>\n> A couple of thoughts-\n> 1) That is a big big transaction if you are doing all the cleanup in a\n> single function call. Will this be a production system that is still online\n> for this archiving? Having a plpgsql function that encapsulates the work\n> seems fine, but I would limit the work to a month at a time or something\n> and call the function repeatedly. Get the min month where records exist\n> still, delete everything matching that, return. Rinse, repeat.\n>\n\nOk, the function provided it's just a first approach. I was planning to add\nparameters to make dates more flexible.\n\n2) If you are deleting/moving most of the table (91 of 150 million),\n> consider moving only the records you are keeping to a new table, renaming\n> old table, and renaming new table back to original name. Then you can do\n> what you want to shift the data in the old table and delete it.\n>\n\nI was aware of this solution but I've read it's not side effect free. As my\ntables don't have any kind of FK-PK only the sequences for the serial\ncolumns, would this be a safe way to do what I want?\n\nHi Michael,El vie., 27 mar. 2020 a las 15:41, Michael Lewis (<[email protected]>) escribió:If you can afford the time, I am not sure the reason for the question. Just run it and be done with it, yes?I've been working with other RDBMS all of my life and I'm quite new\n to PG world,  and I'm learning to do things when I need to do them so \nI'm trying to learn them in the right way :DAlso, for \nwhat I'm seeing in other projects, this is going to be a problem in most\n of them (if it's not yet a problem), and it's going to be me the one \nthat solves it so again I'm in the path of learning to do this kind of \nthings in the right way. A couple of thoughts-1) That is a big big transaction if you are doing all the cleanup in a single function call. Will this be a production system that is still online for this archiving? Having a plpgsql function that encapsulates the work seems fine, but I would limit the work to a month at a time or something and call the function repeatedly. Get the min month where records exist still, delete everything matching that, return. Rinse, repeat.Ok, the function provided it's just a first approach. I was planning to add parameters to make dates more flexible. 2) If you are deleting/moving most of the table (91 of 150 million), consider moving only the records you are keeping to a new table, renaming old table, and renaming new table back to original name. 
Then you can do what you want to shift the data in the old table and delete it.I was aware of this solution but I've read it's not side effect free. As\n my tables don't have any kind of FK-PK only the sequences for the \nserial columns,  would this be a safe way to do what I want?", "msg_date": "Fri, 27 Mar 2020 15:56:47 +0100", "msg_from": "Ekaterina Amez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "On Fri, Mar 27, 2020 at 10:14 AM Ekaterina Amez <[email protected]>\nwrote:\n\n>\n> it's there a better way to do this. I'm testing on version 9.2 BUT\n> production server is 8.4 (legacy application, supposed to be in at least\n> 9.2 but recently discovered it was 8.4, planning upgrade but not now).\n> Config parameters are default ones.\n>\n\nPostgreSQL 8.4 came out in 2009 and hit EOL in 2014. PostgreSQL 9.2 hit\nEOL in 2017.\nhttps://en.wikipedia.org/wiki/PostgreSQL#Release_history\n\nOn Fri, Mar 27, 2020 at 10:14 AM Ekaterina Amez <[email protected]> wrote:\nit's there a better way to do this. I'm testing on version 9.2 BUT \nproduction server is 8.4 (legacy application, supposed to be in at least \n9.2 but recently discovered it was 8.4, planning upgrade but not now). \nConfig parameters are default ones.PostgreSQL 8.4 came out in 2009 and hit EOL in 2014.   PostgreSQL 9.2 hit EOL in 2017.https://en.wikipedia.org/wiki/PostgreSQL#Release_history", "msg_date": "Fri, 27 Mar 2020 11:03:10 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "On Fri, Mar 27, 2020 at 08:41:04AM -0600, Michael Lewis wrote:\n> 2) If you are deleting/moving most of the table (91 of 150 million),\n> consider moving only the records you are keeping to a new table, renaming\n> old table, and renaming new table back to original name. Then you can do\n> what you want to shift the data in the old table and delete it.\n\nYou could also make the old table a child of (inherit from) the new table.\nThat allows you to remove rows separately from removing them.\nPartitioning (with legacy inheritence or the new, integrated way available in\npostgres 10) allows DROPing oldest tables rather than DELETEing from one\ngigantic table.\n\nYou should consider somehow cleaning up the old table after you DELETE from it,\nmaybe using vacuum full (which requires a long exclusive lock) or pg_repack\n(which briefly acquires an exclusive lock).\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 27 Mar 2020 10:09:54 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "Hi Laurenz,\n\nEl vie., 27 mar. 2020 a las 15:46, Laurenz Albe (<[email protected]>)\nescribió:\n\n> On Fri, 2020-03-27 at 15:13 +0100, Ekaterina Amez wrote:\n> > I'm trying to clean up a database with millions of records of\n> > useless-but-don't-remove-just-in-case data. [...]\n> >\n> > But also I'm cleaning tables with 150million records where I'm going to\n> > remove 60% of existing data and after a few tests I'm not sure what's\n> > the best approach as all seem to take similar time to run. These tables\n> > are grouped in 4 tables group with master, detail, master_history,\n> > detail_history structure. 
None of the tables have primary key nor\n> > foreign key or any constraint but the sequence used for what should be\n> > the PK column, though this column is not defined as PK.\n>\n> You should define primary and foreign keys if you can, but I guess\n> I don't have to tell you that.\n>\n\nI know about DB design ;)\nThis structure of master-detail-master_hist-detail_hist is repeated all\nover the DB and other groups of tables are perfectly created with theri\nPK-FK-UQ-IX... I don't know why these ones haven't been created in the same\nway.\nExcuse me if this is a silly question but I've read (or understood) that\nit's better to remove constraints to improve delete performance... this is\nrelated to indexes only? or also to PK-FK?\n\n\n> > I've decided to delete from the last one in chunks (10 days of data per\n> > chunk but it coud be any other quantity) so I've created a function.\n> > I've tested it with indexes (in master_hist for filtering data and in\n> > detail_hist for the fk and pk), without indexes, after analyzing table,\n> > and no matter what I always end up with more or less the same execution\n> > time. I can afford the time it's getting to run but I'd like to know if\n> > it's there a better way to do this.\n>\n> There is no need to delete in batches unless you have a need to keep\n> transactions short (danger of deadlock because the data are still\n> modified, or you cannot afford to block autovacuum that long).\n>\n\nI prefer doing it in batches because I know there are other processes\naccessing this table and I can't assure they won't change any data.\n\n\n> If you can drop the indexes while you do it (downtime), go for it.\n> Perhaps there is a way to use partial indexes that exclude all the\n> data that you have to delete, then work could go on as normal.\n>\n\nAs I said, these particular tables doesn't have any indexes at all. I'll\ngive a try to the partial index suggestion, thanks.\n\n\n> > I'm testing on version 9.2 BUT\n> > production server is 8.4 (legacy application, supposed to be in at least\n> > 9.2 but recently discovered it was 8.4, planning upgrade but not now).\n> > Config parameters are default ones.\n>\n> Now that is a seriously bad idea. You should test on the same version\n> as you have running in production. And you should insist in an upgrade.\n> People who insist in running ancient software often insist in ancient\n> hardware as well, and both is a good way to get data corruption.\n> If the system blows up, they are going to blame you.\n>\n\nBelieve me, I'm totally aware of all of this. Upgrade is planned to happen\nafter I clean up the database. I'm the one that has discover that\nproduction server is so old, it looked like no one knew it before. In the\ntime I've been working here I've upgraded 2 servers.\n\n\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\nRegards,\nEkaterina\n\nHi Laurenz,El vie., 27 mar. 2020 a las 15:46, Laurenz Albe (<[email protected]>) escribió:On Fri, 2020-03-27 at 15:13 +0100, Ekaterina Amez wrote:\n> I'm trying to clean up a database with millions of records of \n> useless-but-don't-remove-just-in-case data. [...]\n> \n> But also I'm cleaning tables with 150million records where I'm going to \n> remove 60% of existing data and after a few tests I'm not sure what's \n> the best approach as all seem to take similar time to run. These tables \n> are grouped in 4 tables group with master, detail, master_history, \n> detail_history structure. 
None of the tables have primary key nor \n> foreign key or any constraint but the sequence used for what should be \n> the PK column, though this column is not defined as PK.\n\nYou should define primary and foreign keys if you can, but I guess\nI don't have to tell you that.I know about DB design ;) This structure of master-detail-master_hist-detail_hist is repeated all over the DB and other groups of tables are perfectly created with theri PK-FK-UQ-IX... I don't know why these ones haven't been created in the same way.Excuse me if this is a silly question but I've read (or understood) that it's better to remove constraints to improve delete performance... this is related to indexes only? or also to PK-FK?\n\n> I've decided to delete from the last one in chunks (10 days of data per \n> chunk but it coud be any other quantity) so I've created a function.  \n> I've tested it with indexes (in master_hist for filtering data and in \n> detail_hist for the fk and pk), without indexes, after analyzing table, \n> and no matter what I always end up with more or less the same execution \n> time. I can afford the time it's getting to run but I'd like to know if \n> it's there a better way to do this.\n\nThere is no need to delete in batches unless you have a need to keep\ntransactions short (danger of deadlock because the data are still\nmodified, or you cannot afford to block autovacuum that long).I prefer doing it in batches because I know there are other processes accessing this table and I can't assure they won't change any data. \n\nIf you can drop the indexes while you do it (downtime), go for it.\nPerhaps there is a way to use partial indexes that exclude all the\ndata that you have to delete, then work could go on as normal.As I said, these particular tables doesn't have any indexes at all. I'll give a try to the partial index suggestion, thanks.\n\n> I'm testing on version 9.2 BUT \n> production server is 8.4 (legacy application, supposed to be in at least \n> 9.2 but recently discovered it was 8.4, planning upgrade but not now). \n> Config parameters are default ones.\n\nNow that is a seriously bad idea.  You should test on the same version\nas you have running in production.  And you should insist in an upgrade.\nPeople who insist in running ancient software often insist in ancient\nhardware as well, and both is a good way to get data corruption.\nIf the system blows up, they are going to blame you.Believe me, I'm totally aware of all of this. Upgrade is planned to happen after I clean up the database. I'm the one that has discover that production server is so old, it looked like no one knew it before. In the time I've been working here I've upgraded 2 servers. \n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\nRegards,Ekaterina", "msg_date": "Fri, 27 Mar 2020 16:15:04 +0100", "msg_from": "Ekaterina Amez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to delete big amount of records from big table" }, { "msg_contents": "On Fri, 2020-03-27 at 16:15 +0100, Ekaterina Amez wrote:\n> > You should define primary and foreign keys if you can, but I guess\n> > I don't have to tell you that.\n> \n> Excuse me if this is a silly question but I've read (or understood) that it's better\n> to remove constraints to improve delete performance... this is related to indexes only? 
or also to PK-FK?\n\nI meant, add constraints *after* you are done cleaning up.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Sat, 28 Mar 2020 05:32:57 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to delete big amount of records from big table" } ]
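A note on the thread above: a sketch of the "copy the survivors and swap" approach suggested by Michael Lewis, written against the simplified master_history/detail_history schema posted earlier in the thread. The cutoff literal is illustrative, indexes, constraints and sequence ownership still have to be recreated on the new table, rows whose parent is missing or has a NULL data_date are not copied by the inner join, and rows written to the old table after the copy starts would be lost, so this needs a quiet window or a lock on the table:

    BEGIN;
    CREATE TABLE detail_history_new (LIKE detail_history INCLUDING DEFAULTS);

    INSERT INTO detail_history_new
    SELECT d.*
      FROM detail_history d
      JOIN master_history m ON m.id = d.parent_id
     WHERE m.data_date >= '20190801000000000';  -- condition selecting the rows to keep (illustrative)

    ALTER TABLE detail_history RENAME TO detail_history_old;
    ALTER TABLE detail_history_new RENAME TO detail_history;
    COMMIT;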
[ { "msg_contents": "Dear list,\n\nhere is a pretty contrived case where increasing work_mem produces a worse plan, with much worse overall query time. I wonder why that is the case.\n\n\nProblem: INSERTing a thousand new rows in a table which can easily have one million rows. PK is \"id\", which comes from a table, and we have two columns (called \"name\" and \"version\") which do not admit duplicates.\n\nSchema here: https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/common/tables/rhnPackageCapability.sql\nIndices here: https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/postgres/tables/rhnPackageCapability_index.sql\n\n\nWe want one command that returns IDs given (name, version) couples. If they are already in the table, they should be SELECTed, if they are not, they should be INSERTed.\n\nVersion is NULLable and NULL should be treated as a value.\n\nWe use:\n\nWITH wanted_capability(ordering, name, version) AS (\n VALUES (1, 'first_name', '1.0.0'), (2, 'first_name', '1.0.1'), (1, 'second_name', '1.0.0'), ...998 more...\n)\nmissing_capability AS (\n SELECT wanted_capability.*\n FROM wanted_capability LEFT JOIN rhnPackageCapability\n ON wanted_capability.name = rhnPackageCapability.name\n AND wanted_capability.version IS NOT DISTINCT FROM rhnPackageCapability.version\n WHERE rhnPackageCapability.id IS NULL\n),\ninserted_capability AS (\n INSERT INTO rhnPackageCapability(id, name, version)\n SELECT nextval('rhn_pkg_capability_id_seq'), name, version FROM missing_capability ON CONFLICT DO NOTHING\n RETURNING id, name, version\n)\nSELECT wanted_capability.ordering, inserted_capability.id\n FROM wanted_capability JOIN inserted_capability\n ON wanted_capability.name = inserted_capability.name\n AND wanted_capability.version IS NOT DISTINCT FROM inserted_capability.version\n UNION (\n SELECT wanted_capability.ordering, rhnPackageCapability.id\n FROM wanted_capability JOIN rhnPackageCapability\n ON wanted_capability.name = rhnPackageCapability.name\n AND wanted_capability.version IS NOT DISTINCT FROM rhnPackageCapability.version\n )\n ORDER BY ordering\n;\n\n\nBehavior at work_mem = 5 MB is pretty good, query finishes in 200ms. Plan: https://explain.dalibo.com/plan/4u\n\nBehavior at work_mem = 80 MB seems not equally good, query takes more than 13s. Two expensive SORTs and MERGE JOINs are done instead of HASH JOINs. Plan: thttps://explain.dalibo.com/plan/ORd\n\nAdding one more INDEX on rhnCapability.name fixes the issue.\n\nMy question is: why are SORTs chosen if more work_mem is available, and why can't the planner predict query will be slower that way?\n\nAll of the above is reproducible on openSUSE Leap and PostgreSQL 10.12.\n\nIdeas welcome, and thanks in advance!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team\n\n\n", "msg_date": "Mon, 30 Mar 2020 08:47:05 +0200", "msg_from": "Silvio Moioli <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing work_mem slows down query, why?" }, { "msg_contents": "po 30. 3. 2020 v 8:47 odesílatel Silvio Moioli <[email protected]> napsal:\n\n> Dear list,\n>\n> here is a pretty contrived case where increasing work_mem produces a worse\n> plan, with much worse overall query time. I wonder why that is the case.\n>\n>\n> Problem: INSERTing a thousand new rows in a table which can easily have\n> one million rows. 
PK is \"id\", which comes from a table, and we have two\n> columns (called \"name\" and \"version\") which do not admit duplicates.\n>\n> Schema here:\n> https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/common/tables/rhnPackageCapability.sql\n> Indices here:\n> https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/postgres/tables/rhnPackageCapability_index.sql\n>\n>\n> We want one command that returns IDs given (name, version) couples. If\n> they are already in the table, they should be SELECTed, if they are not,\n> they should be INSERTed.\n>\n> Version is NULLable and NULL should be treated as a value.\n>\n> We use:\n>\n> WITH wanted_capability(ordering, name, version) AS (\n> VALUES (1, 'first_name', '1.0.0'), (2, 'first_name', '1.0.1'), (1,\n> 'second_name', '1.0.0'), ...998 more...\n> )\n> missing_capability AS (\n> SELECT wanted_capability.*\n> FROM wanted_capability LEFT JOIN rhnPackageCapability\n> ON wanted_capability.name = rhnPackageCapability.name\n> AND wanted_capability.version IS NOT DISTINCT FROM\n> rhnPackageCapability.version\n> WHERE rhnPackageCapability.id IS NULL\n> ),\n> inserted_capability AS (\n> INSERT INTO rhnPackageCapability(id, name, version)\n> SELECT nextval('rhn_pkg_capability_id_seq'), name, version FROM\n> missing_capability ON CONFLICT DO NOTHING\n> RETURNING id, name, version\n> )\n> SELECT wanted_capability.ordering, inserted_capability.id\n> FROM wanted_capability JOIN inserted_capability\n> ON wanted_capability.name = inserted_capability.name\n> AND wanted_capability.version IS NOT DISTINCT FROM\n> inserted_capability.version\n> UNION (\n> SELECT wanted_capability.ordering, rhnPackageCapability.id\n> FROM wanted_capability JOIN rhnPackageCapability\n> ON wanted_capability.name = rhnPackageCapability.name\n> AND wanted_capability.version IS NOT DISTINCT FROM\n> rhnPackageCapability.version\n> )\n> ORDER BY ordering\n> ;\n>\n>\n> Behavior at work_mem = 5 MB is pretty good, query finishes in 200ms. Plan:\n> https://explain.dalibo.com/plan/4u\n>\n> Behavior at work_mem = 80 MB seems not equally good, query takes more than\n> 13s. Two expensive SORTs and MERGE JOINs are done instead of HASH JOINs.\n> Plan: thttps://explain.dalibo.com/plan/ORd\n\n\nplease, can you send explain in text form?\n\nProbably, there is a problem in wrong estimation. What can be expected\nbecause CTE is optimization fence in this version\n\n\nRegards\n\nPavel\n\n\n\n>\n> Adding one more INDEX on rhnCapability.name fixes the issue.\n>\n> My question is: why are SORTs chosen if more work_mem is available, and\n> why can't the planner predict query will be slower that way?\n>\n> All of the above is reproducible on openSUSE Leap and PostgreSQL 10.12.\n>\n> Ideas welcome, and thanks in advance!\n>\n> Regards,\n> --\n> Silvio Moioli\n> SUSE Manager Development Team\n>\n>\n>\n\npo 30. 3. 2020 v 8:47 odesílatel Silvio Moioli <[email protected]> napsal:Dear list,\n\nhere is a pretty contrived case where increasing work_mem produces a worse plan, with much worse overall query time. I wonder why that is the case.\n\n\nProblem: INSERTing a thousand new rows in a table which can easily have one million rows. 
PK is \"id\", which comes from a table, and we have two columns (called \"name\" and \"version\") which do not admit duplicates.\n\nSchema here: https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/common/tables/rhnPackageCapability.sql\nIndices here: https://github.com/uyuni-project/uyuni/blob/Uyuni-2020.03/schema/spacewalk/postgres/tables/rhnPackageCapability_index.sql\n\n\nWe want one command that returns IDs given (name, version) couples. If they are already in the table, they should be SELECTed, if they are not, they should be INSERTed.\n\nVersion is NULLable and NULL should be treated as a value.\n\nWe use:\n\nWITH wanted_capability(ordering, name, version) AS (\n  VALUES (1, 'first_name', '1.0.0'), (2, 'first_name', '1.0.1'), (1, 'second_name', '1.0.0'), ...998 more...\n)\nmissing_capability AS (\n  SELECT wanted_capability.*\n    FROM wanted_capability LEFT JOIN rhnPackageCapability\n      ON wanted_capability.name = rhnPackageCapability.name\n        AND wanted_capability.version IS NOT DISTINCT FROM rhnPackageCapability.version\n    WHERE rhnPackageCapability.id IS NULL\n),\ninserted_capability AS (\n  INSERT INTO rhnPackageCapability(id, name, version)\n    SELECT nextval('rhn_pkg_capability_id_seq'), name, version FROM missing_capability ON CONFLICT DO NOTHING\n    RETURNING id, name, version\n)\nSELECT wanted_capability.ordering, inserted_capability.id\n  FROM wanted_capability JOIN inserted_capability\n    ON wanted_capability.name = inserted_capability.name\n      AND wanted_capability.version IS NOT DISTINCT FROM inserted_capability.version\n    UNION (\n      SELECT wanted_capability.ordering, rhnPackageCapability.id\n        FROM wanted_capability JOIN rhnPackageCapability\n          ON wanted_capability.name = rhnPackageCapability.name\n            AND wanted_capability.version IS NOT DISTINCT FROM rhnPackageCapability.version\n    )\n  ORDER BY ordering\n;\n\n\nBehavior at work_mem = 5 MB is pretty good, query finishes in 200ms. Plan: https://explain.dalibo.com/plan/4u\n\nBehavior at work_mem = 80 MB seems not equally good, query takes more than 13s. Two expensive SORTs and MERGE JOINs are done instead of HASH JOINs. Plan: thttps://explain.dalibo.com/plan/ORdplease, can you send explain in text form?Probably, there is a problem in wrong estimation. What can be expected because CTE is optimization fence in this versionRegardsPavel\n\nAdding one more INDEX on rhnCapability.name fixes the issue.\n\nMy question is: why are SORTs chosen if more work_mem is available, and why can't the planner predict query will be slower that way?\n\nAll of the above is reproducible on openSUSE Leap and PostgreSQL 10.12.\n\nIdeas welcome, and thanks in advance!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team", "msg_date": "Mon, 30 Mar 2020 08:56:59 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "On 3/30/20 8:56 AM, Pavel Stehule wrote:\n> please, can you send explain in text form?\n\nSure. 
With work_mem = 80MB:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=608228.26..608228.27 rows=2 width=36) (actual time=13360.241..13360.454 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Buffers: shared hit=14448\n CTE wanted_capability\n -> Values Scan on \"*VALUES*\" (cost=0.00..552.75 rows=1100 width=68) (actual time=0.001..0.246 rows=1100 loops=1)\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n CTE missing_capability\n -> Merge Left Join (cost=300263.57..303282.17 rows=1 width=68) (actual time=6686.320..6686.320 rows=0 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n Merge Cond: (wanted_capability_2.name = (rhnpackagecapability_1.name)::text)\n Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n Filter: (rhnpackagecapability_1.id IS NULL)\n Rows Removed by Filter: 1100\n Buffers: shared hit=7222\n -> Sort (cost=1155.57..1158.32 rows=1100 width=68) (actual time=10.011..10.053 rows=1100 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n Sort Key: wanted_capability_2.name\n Sort Method: quicksort Memory: 203kB\n Buffers: shared hit=5\n -> CTE Scan on wanted_capability wanted_capability_2 (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.516 rows=1100 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n -> Sort (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6475.147..6494.111 rows=462600 loops=1)\n Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n Sort Key: rhnpackagecapability_1.name\n Sort Method: quicksort Memory: 79862kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..59.976 rows=490964 loops=1)\n Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n Buffers: shared hit=7217\n CTE inserted_capability\n -> Insert on public.rhnpackagecapability rhnpackagecapability_2 (cost=0.00..1.51 rows=1 width=1080) (actual time=6686.322..6686.322 rows=0 loops=1)\n Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n Conflict Resolution: NOTHING\n Tuples Inserted: 0\n Conflicting Tuples: 0\n Buffers: shared hit=7222\n -> Subquery Scan on \"*SELECT*\" (cost=0.00..1.51 rows=1 width=1080) (actual time=6686.321..6686.321 rows=0 loops=1)\n Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n Buffers: shared hit=7222\n -> CTE Scan on missing_capability (cost=0.00..1.00 rows=1 width=72) (actual time=6686.320..6686.320 rows=0 loops=1)\n Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n Buffers: shared hit=7222\n -> Sort (cost=304391.82..304391.83 rows=2 width=36) (actual time=13360.240..13360.283 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Sort Key: wanted_capability.ordering, inserted_capability.id\n Sort Method: quicksort Memory: 100kB\n Buffers: shared hit=14448\n -> Append (cost=1.50..304391.81 rows=2 width=36) (actual time=13357.167..13360.051 
rows=1100 loops=1)\n Buffers: shared hit=14442\n -> Hash Join (cost=1.50..1108.64 rows=1 width=36) (actual time=6686.340..6686.340 rows=0 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n Buffers: shared hit=7225\n -> CTE Scan on wanted_capability (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n -> Hash (cost=1.00..1.00 rows=1 width=1064) (actual time=6686.323..6686.323 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n Buffers: shared hit=7222\n -> CTE Scan on inserted_capability (cost=0.00..1.00 rows=1 width=1064) (actual time=6686.322..6686.322 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buffers: shared hit=7222\n -> Merge Join (cost=300263.57..303282.17 rows=1 width=10) (actual time=6670.825..6673.642 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, rhnpackagecapability.id\n Merge Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> Sort (cost=1155.57..1158.32 rows=1100 width=68) (actual time=9.430..9.474 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n Sort Key: wanted_capability_1.name\n Sort Method: quicksort Memory: 203kB\n -> CTE Scan on wanted_capability wanted_capability_1 (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.001..0.066 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n -> Sort (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n Sort Key: rhnpackagecapability.name\n Sort Method: quicksort Memory: 79862kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)\n Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n Buffers: shared hit=7217\n Planning time: 2.110 ms\n Execution time: 13362.965 ms\n\n\n\nWith work_mem = 5MB:\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=648953.89..648953.91 rows=2 width=36) (actual time=221.127..221.337 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Buffers: shared hit=7226 read=7217\n CTE wanted_capability\n -> Values Scan on \"*VALUES*\" (cost=0.00..552.75 rows=1100 width=68) (actual time=0.001..0.266 rows=1100 loops=1)\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n CTE missing_capability\n -> Hash Right Join (cost=1652.75..323644.99 rows=1 width=68) (actual time=137.544..137.544 rows=0 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n Hash Cond: ((rhnpackagecapability_1.name)::text = 
wanted_capability_2.name)\n Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n Filter: (rhnpackagecapability_1.id IS NULL)\n Rows Removed by Filter: 1100\n Buffers: shared read=7217\n -> Seq Scan on public.rhnpackagecapability rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..77.305 rows=490964 loops=1)\n Output: rhnpackagecapability_1.id, rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.created, rhnpackagecapability_1.modified\n Buffers: shared read=7217\n -> Hash (cost=1100.00..1100.00 rows=1100 width=68) (actual time=0.812..0.812 rows=1100 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n Buckets: 2048 Batches: 1 Memory Usage: 134kB\n -> CTE Scan on wanted_capability wanted_capability_2 (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.574 rows=1100 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n CTE inserted_capability\n -> Insert on public.rhnpackagecapability rhnpackagecapability_2 (cost=0.00..1.51 rows=1 width=1080) (actual time=137.546..137.546 rows=0 loops=1)\n Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n Conflict Resolution: NOTHING\n Tuples Inserted: 0\n Conflicting Tuples: 0\n Buffers: shared read=7217\n -> Subquery Scan on \"*SELECT*\" (cost=0.00..1.51 rows=1 width=1080) (actual time=137.545..137.545 rows=0 loops=1)\n Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n Buffers: shared read=7217\n -> CTE Scan on missing_capability (cost=0.00..1.00 rows=1 width=72) (actual time=137.544..137.545 rows=0 loops=1)\n Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n Buffers: shared read=7217\n -> Sort (cost=324754.64..324754.65 rows=2 width=36) (actual time=221.126..221.165 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Sort Key: wanted_capability.ordering, inserted_capability.id\n Sort Method: quicksort Memory: 100kB\n Buffers: shared hit=7226 read=7217\n -> Append (cost=1.50..324754.63 rows=2 width=36) (actual time=169.421..220.870 rows=1100 loops=1)\n Buffers: shared hit=7220 read=7217\n -> Hash Join (cost=1.50..1108.64 rows=1 width=36) (actual time=137.573..137.573 rows=0 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n Buffers: shared hit=3 read=7217\n -> CTE Scan on wanted_capability (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n -> Hash (cost=1.00..1.00 rows=1 width=1064) (actual time=137.547..137.547 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n Buffers: shared read=7217\n -> CTE Scan on inserted_capability (cost=0.00..1.00 rows=1 width=1064) (actual time=137.547..137.547 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buffers: shared read=7217\n -> Hash Join (cost=1652.75..323644.99 rows=1 width=10) (actual time=31.846..83.234 rows=1100 loops=1)\n 
Output: wanted_capability_1.ordering, rhnpackagecapability.id\n Hash Cond: ((rhnpackagecapability.name)::text = wanted_capability_1.name)\n Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.007..29.702 rows=490964 loops=1)\n Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version, rhnpackagecapability.created, rhnpackagecapability.modified\n Buffers: shared hit=7217\n -> Hash (cost=1100.00..1100.00 rows=1100 width=68) (actual time=0.257..0.257 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n Buckets: 2048 Batches: 1 Memory Usage: 134kB\n -> CTE Scan on wanted_capability wanted_capability_1 (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.001..0.067 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n Planning time: 3.232 ms\n Execution time: 221.668 ms\n\n> Probably, there is a problem in wrong estimation.\n\nYes, that's what I would also assume.\n\n> What can be expected because CTE is optimization fence in this version\nI am aware of that, but would not expect it to really be a problem in this specific case. Fact that CTE is an optimization fence is true regardless of work_mem, so ATM I cannot see why it would lead to slow down the query in high work_mem case.\n\nI am sure I am still missing something...\n\nThanks!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:12:42 +0200", "msg_from": "Silvio Moioli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "po 30. 3. 2020 v 10:12 odesílatel Silvio Moioli <[email protected]> napsal:\n\n> On 3/30/20 8:56 AM, Pavel Stehule wrote:\n> > please, can you send explain in text form?\n>\n> Sure. 
With work_mem = 80MB:\n>\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=608228.26..608228.27 rows=2 width=36) (actual\n> time=13360.241..13360.454 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Buffers: shared hit=14448\n> CTE wanted_capability\n> -> Values Scan on \"*VALUES*\" (cost=0.00..552.75 rows=1100 width=68)\n> (actual time=0.001..0.246 rows=1100 loops=1)\n> Output: \"*VALUES*\".column1, \"*VALUES*\".column2,\n> \"*VALUES*\".column3\n> CTE missing_capability\n> -> Merge Left Join (cost=300263.57..303282.17 rows=1 width=68)\n> (actual time=6686.320..6686.320 rows=0 loops=1)\n> Output: wanted_capability_2.ordering, wanted_capability_2.name,\n> wanted_capability_2.version\n> Merge Cond: (wanted_capability_2.name = (\n> rhnpackagecapability_1.name)::text)\n> Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM\n> (rhnpackagecapability_1.version)::text))\n> Filter: (rhnpackagecapability_1.id IS NULL)\n> Rows Removed by Filter: 1100\n> Buffers: shared hit=7222\n> -> Sort (cost=1155.57..1158.32 rows=1100 width=68) (actual\n> time=10.011..10.053 rows=1100 loops=1)\n> Output: wanted_capability_2.ordering,\n> wanted_capability_2.name, wanted_capability_2.version\n> Sort Key: wanted_capability_2.name\n> Sort Method: quicksort Memory: 203kB\n> Buffers: shared hit=5\n> -> CTE Scan on wanted_capability wanted_capability_2\n> (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.516 rows=1100\n> loops=1)\n> Output: wanted_capability_2.ordering,\n> wanted_capability_2.name, wanted_capability_2.version\n> -> Sort (cost=299108.00..300335.41 rows=490964 width=79)\n> (actual time=6475.147..6494.111 rows=462600 loops=1)\n> Output: rhnpackagecapability_1.name,\n> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n> Sort Key: rhnpackagecapability_1.name\n> Sort Method: quicksort Memory: 79862kB\n> Buffers: shared hit=7217\n> -> Seq Scan on public.rhnpackagecapability\n> rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual\n> time=0.016..59.976 rows=490964 loops=1)\n> Output: rhnpackagecapability_1.name,\n> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n> Buffers: shared hit=7217\n> CTE inserted_capability\n> -> Insert on public.rhnpackagecapability rhnpackagecapability_2\n> (cost=0.00..1.51 rows=1 width=1080) (actual time=6686.322..6686.322 rows=0\n> loops=1)\n> Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name,\n> rhnpackagecapability_2.version\n> Conflict Resolution: NOTHING\n> Tuples Inserted: 0\n> Conflicting Tuples: 0\n> Buffers: shared hit=7222\n> -> Subquery Scan on \"*SELECT*\" (cost=0.00..1.51 rows=1\n> width=1080) (actual time=6686.321..6686.321 rows=0 loops=1)\n> Output: \"*SELECT*\".nextval, \"*SELECT*\".name,\n> \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n> Buffers: shared hit=7222\n> -> CTE Scan on missing_capability (cost=0.00..1.00\n> rows=1 width=72) (actual time=6686.320..6686.320 rows=0 loops=1)\n> Output:\n> nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name,\n> missing_capability.version\n> Buffers: shared hit=7222\n> -> Sort (cost=304391.82..304391.83 rows=2 width=36) (actual\n> time=13360.240..13360.283 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Sort Key: wanted_capability.ordering, inserted_capability.id\n> Sort Method: 
quicksort Memory: 100kB\n> Buffers: shared hit=14448\n> -> Append (cost=1.50..304391.81 rows=2 width=36) (actual\n> time=13357.167..13360.051 rows=1100 loops=1)\n> Buffers: shared hit=14442\n> -> Hash Join (cost=1.50..1108.64 rows=1 width=36) (actual\n> time=6686.340..6686.340 rows=0 loops=1)\n> Output: wanted_capability.ordering,\n> inserted_capability.id\n> Hash Cond: (wanted_capability.name = (\n> inserted_capability.name)::text)\n> Join Filter: (NOT (wanted_capability.version IS\n> DISTINCT FROM (inserted_capability.version)::text))\n> Buffers: shared hit=7225\n> -> CTE Scan on wanted_capability\n> (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1\n> loops=1)\n> Output: wanted_capability.ordering,\n> wanted_capability.name, wanted_capability.version\n> -> Hash (cost=1.00..1.00 rows=1 width=1064) (actual\n> time=6686.323..6686.323 rows=0 loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, inserted_capability.version\n> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n> Buffers: shared hit=7222\n> -> CTE Scan on inserted_capability\n> (cost=0.00..1.00 rows=1 width=1064) (actual time=6686.322..6686.322 rows=0\n> loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, inserted_capability.version\n> Buffers: shared hit=7222\n> -> Merge Join (cost=300263.57..303282.17 rows=1 width=10)\n> (actual time=6670.825..6673.642 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> rhnpackagecapability.id\n> Merge Cond: (wanted_capability_1.name = (\n> rhnpackagecapability.name)::text)\n> Join Filter: (NOT (wanted_capability_1.version IS\n> DISTINCT FROM (rhnpackagecapability.version)::text))\n> Buffers: shared hit=7217\n> -> Sort (cost=1155.57..1158.32 rows=1100 width=68)\n> (actual time=9.430..9.474 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> wanted_capability_1.name, wanted_capability_1.version\n> Sort Key: wanted_capability_1.name\n> Sort Method: quicksort Memory: 203kB\n> -> CTE Scan on wanted_capability\n> wanted_capability_1 (cost=0.00..1100.00 rows=1100 width=68) (actual\n> time=0.001..0.066 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> wanted_capability_1.name, wanted_capability_1.version\n> -> Sort (cost=299108.00..300335.41 rows=490964\n> width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n> Output: rhnpackagecapability.id,\n> rhnpackagecapability.name, rhnpackagecapability.version\n> Sort Key: rhnpackagecapability.name\n> Sort Method: quicksort Memory: 79862kB\n> Buffers: shared hit=7217\n> -> Seq Scan on public.rhnpackagecapability\n> (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467\n> rows=490964 loops=1)\n> Output: rhnpackagecapability.id,\n> rhnpackagecapability.name, rhnpackagecapability.version\n> Buffers: shared hit=7217\n> Planning time: 2.110 ms\n> Execution time: 13362.965 ms\n>\n>\n>\n> With work_mem = 5MB:\n>\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=648953.89..648953.91 rows=2 width=36) (actual\n> time=221.127..221.337 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Buffers: shared hit=7226 read=7217\n> CTE wanted_capability\n> -> Values Scan on \"*VALUES*\" (cost=0.00..552.75 rows=1100 width=68)\n> (actual time=0.001..0.266 rows=1100 loops=1)\n> Output: \"*VALUES*\".column1, \"*VALUES*\".column2,\n> \"*VALUES*\".column3\n> CTE 
missing_capability\n> -> Hash Right Join (cost=1652.75..323644.99 rows=1 width=68)\n> (actual time=137.544..137.544 rows=0 loops=1)\n> Output: wanted_capability_2.ordering, wanted_capability_2.name,\n> wanted_capability_2.version\n> Hash Cond: ((rhnpackagecapability_1.name)::text =\n> wanted_capability_2.name)\n> Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM\n> (rhnpackagecapability_1.version)::text))\n> Filter: (rhnpackagecapability_1.id IS NULL)\n> Rows Removed by Filter: 1100\n> Buffers: shared read=7217\n> -> Seq Scan on public.rhnpackagecapability\n> rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual\n> time=0.016..77.305 rows=490964 loops=1)\n> Output: rhnpackagecapability_1.id,\n> rhnpackagecapability_1.name, rhnpackagecapability_1.version,\n> rhnpackagecapability_1.created, rhnpackagecapability_1.modified\n> Buffers: shared read=7217\n> -> Hash (cost=1100.00..1100.00 rows=1100 width=68) (actual\n> time=0.812..0.812 rows=1100 loops=1)\n> Output: wanted_capability_2.ordering,\n> wanted_capability_2.name, wanted_capability_2.version\n> Buckets: 2048 Batches: 1 Memory Usage: 134kB\n> -> CTE Scan on wanted_capability wanted_capability_2\n> (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.574 rows=1100\n> loops=1)\n> Output: wanted_capability_2.ordering,\n> wanted_capability_2.name, wanted_capability_2.version\n> CTE inserted_capability\n> -> Insert on public.rhnpackagecapability rhnpackagecapability_2\n> (cost=0.00..1.51 rows=1 width=1080) (actual time=137.546..137.546 rows=0\n> loops=1)\n> Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name,\n> rhnpackagecapability_2.version\n> Conflict Resolution: NOTHING\n> Tuples Inserted: 0\n> Conflicting Tuples: 0\n> Buffers: shared read=7217\n> -> Subquery Scan on \"*SELECT*\" (cost=0.00..1.51 rows=1\n> width=1080) (actual time=137.545..137.545 rows=0 loops=1)\n> Output: \"*SELECT*\".nextval, \"*SELECT*\".name,\n> \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n> Buffers: shared read=7217\n> -> CTE Scan on missing_capability (cost=0.00..1.00\n> rows=1 width=72) (actual time=137.544..137.545 rows=0 loops=1)\n> Output:\n> nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name,\n> missing_capability.version\n> Buffers: shared read=7217\n> -> Sort (cost=324754.64..324754.65 rows=2 width=36) (actual\n> time=221.126..221.165 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Sort Key: wanted_capability.ordering, inserted_capability.id\n> Sort Method: quicksort Memory: 100kB\n> Buffers: shared hit=7226 read=7217\n> -> Append (cost=1.50..324754.63 rows=2 width=36) (actual\n> time=169.421..220.870 rows=1100 loops=1)\n> Buffers: shared hit=7220 read=7217\n> -> Hash Join (cost=1.50..1108.64 rows=1 width=36) (actual\n> time=137.573..137.573 rows=0 loops=1)\n> Output: wanted_capability.ordering,\n> inserted_capability.id\n> Hash Cond: (wanted_capability.name = (\n> inserted_capability.name)::text)\n> Join Filter: (NOT (wanted_capability.version IS\n> DISTINCT FROM (inserted_capability.version)::text))\n> Buffers: shared hit=3 read=7217\n> -> CTE Scan on wanted_capability\n> (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1\n> loops=1)\n> Output: wanted_capability.ordering,\n> wanted_capability.name, wanted_capability.version\n> -> Hash (cost=1.00..1.00 rows=1 width=1064) (actual\n> time=137.547..137.547 rows=0 loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, 
inserted_capability.version\n> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n> Buffers: shared read=7217\n> -> CTE Scan on inserted_capability\n> (cost=0.00..1.00 rows=1 width=1064) (actual time=137.547..137.547 rows=0\n> loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, inserted_capability.version\n> Buffers: shared read=7217\n> -> Hash Join (cost=1652.75..323644.99 rows=1 width=10)\n> (actual time=31.846..83.234 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> rhnpackagecapability.id\n> Hash Cond: ((rhnpackagecapability.name)::text =\n> wanted_capability_1.name)\n> Join Filter: (NOT (wanted_capability_1.version IS\n> DISTINCT FROM (rhnpackagecapability.version)::text))\n> Buffers: shared hit=7217\n> -> Seq Scan on public.rhnpackagecapability\n> (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.007..29.702\n> rows=490964 loops=1)\n> Output: rhnpackagecapability.id,\n> rhnpackagecapability.name, rhnpackagecapability.version,\n> rhnpackagecapability.created, rhnpackagecapability.modified\n> Buffers: shared hit=7217\n> -> Hash (cost=1100.00..1100.00 rows=1100 width=68)\n> (actual time=0.257..0.257 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> wanted_capability_1.name, wanted_capability_1.version\n> Buckets: 2048 Batches: 1 Memory Usage: 134kB\n> -> CTE Scan on wanted_capability\n> wanted_capability_1 (cost=0.00..1100.00 rows=1100 width=68) (actual\n> time=0.001..0.067 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> wanted_capability_1.name, wanted_capability_1.version\n> Planning time: 3.232 ms\n> Execution time: 221.668 ms\n>\n> > Probably, there is a problem in wrong estimation.\n>\n> Yes, that's what I would also assume.\n>\n> > What can be expected because CTE is optimization fence in this version\n> I am aware of that, but would not expect it to really be a problem in this\n> specific case. 
Fact that CTE is an optimization fence is true regardless of\n> work_mem, so ATM I cannot see why it would lead to slow down the query in\n> high work_mem case.\n>\n> I am sure I am still missing something...\n>\n\nThis parts looks strange\n\n -> Sort (cost=299108.00..300335.41 rows=490964\nwidth=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Sort Key: rhnpackagecapability.name\n Sort Method: quicksort Memory: 79862kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability\n (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467\nrows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Buffers: shared hit=7217\n\nI did some test case\n\n\npostgres=# explain (analyze, buffers) select * from foo2 join foo3 on\nfoo2.name = foo3.name;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=188.62..33869.93 rows=866330 width=78) (actual\ntime=6.247..369.081 rows=934000 loops=1)\n Hash Cond: (foo2.name = foo3.name)\n Buffers: shared hit=2224 read=4092\n -> Seq Scan on foo2 (cost=0.00..12518.00 rows=625000 width=48) (actual\ntime=0.095..70.174 rows=625000 loops=1)\n Buffers: shared hit=2176 read=4092\n -> Hash (cost=110.50..110.50 rows=6250 width=30) (actual\ntime=6.116..6.116 rows=6250 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 447kB\n Buffers: shared hit=48\n -> Seq Scan on foo3 (cost=0.00..110.50 rows=6250 width=30)\n(actual time=0.014..1.801 rows=6250 loops=1)\n Buffers: shared hit=48\n Planning Time: 1.190 ms\n Execution Time: 414.264 ms\n(12 rows)\n\npostgres=# explain (analyze, buffers) select * from foo2 join foo3 on\nfoo2.name = foo3.name;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=73189.73..86215.92 rows=866330 width=78) (actual\ntime=1499.805..1835.262 rows=934000 loops=1)\n Merge Cond: (foo3.name = foo2.name)\n Buffers: shared hit=2261 read=4060, temp read=13104 written=10023\n -> Sort (cost=504.55..520.18 rows=6250 width=30) (actual\ntime=21.313..21.895 rows=6250 loops=1)\n Sort Key: foo3.name\n Sort Method: quicksort Memory: 763kB\n Buffers: shared hit=53\n -> Seq Scan on foo3 (cost=0.00..110.50 rows=6250 width=30)\n(actual time=0.017..1.802 rows=6250 loops=1)\n Buffers: shared hit=48\n -> Sort (cost=72685.18..74247.68 rows=625000 width=48) (actual\ntime=1478.480..1602.358 rows=933999 loops=1)\n Sort Key: foo2.name\n Sort Method: external sort Disk: 40088kB\n Buffers: shared hit=2208 read=4060, temp read=12196 written=10023\n -> Seq Scan on foo2 (cost=0.00..12518.00 rows=625000 width=48)\n(actual time=0.039..63.340 rows=625000 loops=1)\n Buffers: shared hit=2208 read=4060\n Planning Time: 1.116 ms\n Execution Time: 1884.985 ms\n(17 rows)\n\nAnd looks little bit strange the cost on seq scan on foo2 12K against cost\nof your public.rhnpackagecapability - 252K.\n\nDo you have some planner variables changed - like seq_page_cost?\n\nI did some tests and it looks so a penalization for sort long keys is not\ntoo high. In your case it is reason why sort is very slow (probably due\nslow locales). Then the cost of hash join and sort is similar, although in\nreality it is not true.\n\nOn your plan is strange the cost of seq scan. 
It is surprisingly high.\n\n\n\n\n\n\nRegards\n\nPavel\n\n\n\n\n\n> Thanks!\n>\n> Regards,\n> --\n> Silvio Moioli\n> SUSE Manager Development Team\n>\n>\n>\n\npo 30. 3. 2020 v 10:12 odesílatel Silvio Moioli <[email protected]> napsal:On 3/30/20 8:56 AM, Pavel Stehule wrote:\n> please, can you send explain in text form?\n\nSure. With work_mem = 80MB:\n\n                                                                                   QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=608228.26..608228.27 rows=2 width=36) (actual time=13360.241..13360.454 rows=1100 loops=1)\n   Output: wanted_capability.ordering, inserted_capability.id\n   Buffers: shared hit=14448\n   CTE wanted_capability\n     ->  Values Scan on \"*VALUES*\"  (cost=0.00..552.75 rows=1100 width=68) (actual time=0.001..0.246 rows=1100 loops=1)\n           Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n   CTE missing_capability\n     ->  Merge Left Join  (cost=300263.57..303282.17 rows=1 width=68) (actual time=6686.320..6686.320 rows=0 loops=1)\n           Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n           Merge Cond: (wanted_capability_2.name = (rhnpackagecapability_1.name)::text)\n           Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n           Filter: (rhnpackagecapability_1.id IS NULL)\n           Rows Removed by Filter: 1100\n           Buffers: shared hit=7222\n           ->  Sort  (cost=1155.57..1158.32 rows=1100 width=68) (actual time=10.011..10.053 rows=1100 loops=1)\n                 Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n                 Sort Key: wanted_capability_2.name\n                 Sort Method: quicksort  Memory: 203kB\n                 Buffers: shared hit=5\n                 ->  CTE Scan on wanted_capability wanted_capability_2  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.516 rows=1100 loops=1)\n                       Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n           ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6475.147..6494.111 rows=462600 loops=1)\n                 Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n                 Sort Key: rhnpackagecapability_1.name\n                 Sort Method: quicksort  Memory: 79862kB\n                 Buffers: shared hit=7217\n                 ->  Seq Scan on public.rhnpackagecapability rhnpackagecapability_1  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..59.976 rows=490964 loops=1)\n                       Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n                       Buffers: shared hit=7217\n   CTE inserted_capability\n     ->  Insert on public.rhnpackagecapability rhnpackagecapability_2  (cost=0.00..1.51 rows=1 width=1080) (actual time=6686.322..6686.322 rows=0 loops=1)\n           Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n           Conflict Resolution: NOTHING\n           Tuples Inserted: 0\n           Conflicting Tuples: 0\n           Buffers: shared hit=7222\n           ->  Subquery Scan on \"*SELECT*\"  (cost=0.00..1.51 rows=1 width=1080) 
(actual time=6686.321..6686.321 rows=0 loops=1)\n                 Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n                 Buffers: shared hit=7222\n                 ->  CTE Scan on missing_capability  (cost=0.00..1.00 rows=1 width=72) (actual time=6686.320..6686.320 rows=0 loops=1)\n                       Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n                       Buffers: shared hit=7222\n   ->  Sort  (cost=304391.82..304391.83 rows=2 width=36) (actual time=13360.240..13360.283 rows=1100 loops=1)\n         Output: wanted_capability.ordering, inserted_capability.id\n         Sort Key: wanted_capability.ordering, inserted_capability.id\n         Sort Method: quicksort  Memory: 100kB\n         Buffers: shared hit=14448\n         ->  Append  (cost=1.50..304391.81 rows=2 width=36) (actual time=13357.167..13360.051 rows=1100 loops=1)\n               Buffers: shared hit=14442\n               ->  Hash Join  (cost=1.50..1108.64 rows=1 width=36) (actual time=6686.340..6686.340 rows=0 loops=1)\n                     Output: wanted_capability.ordering, inserted_capability.id\n                     Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n                     Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n                     Buffers: shared hit=7225\n                     ->  CTE Scan on wanted_capability  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n                           Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n                     ->  Hash  (cost=1.00..1.00 rows=1 width=1064) (actual time=6686.323..6686.323 rows=0 loops=1)\n                           Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                           Buckets: 1024  Batches: 1  Memory Usage: 8kB\n                           Buffers: shared hit=7222\n                           ->  CTE Scan on inserted_capability  (cost=0.00..1.00 rows=1 width=1064) (actual time=6686.322..6686.322 rows=0 loops=1)\n                                 Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                                 Buffers: shared hit=7222\n               ->  Merge Join  (cost=300263.57..303282.17 rows=1 width=10) (actual time=6670.825..6673.642 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Merge Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  Sort  (cost=1155.57..1158.32 rows=1100 width=68) (actual time=9.430..9.474 rows=1100 loops=1)\n                           Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                           Sort Key: wanted_capability_1.name\n                           Sort Method: quicksort  Memory: 203kB\n                           ->  CTE Scan on wanted_capability wanted_capability_1  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.001..0.066 rows=1100 loops=1)\n                                 Output: wanted_capability_1.ordering, wanted_capability_1.name, 
wanted_capability_1.version\n                     ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                           Sort Key: rhnpackagecapability.name\n                           Sort Method: quicksort  Memory: 79862kB\n                           Buffers: shared hit=7217\n                           ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)\n                                 Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                                 Buffers: shared hit=7217\n Planning time: 2.110 ms\n Execution time: 13362.965 ms\n\n\n\nWith work_mem = 5MB:\n\n                                                                                   QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=648953.89..648953.91 rows=2 width=36) (actual time=221.127..221.337 rows=1100 loops=1)\n   Output: wanted_capability.ordering, inserted_capability.id\n   Buffers: shared hit=7226 read=7217\n   CTE wanted_capability\n     ->  Values Scan on \"*VALUES*\"  (cost=0.00..552.75 rows=1100 width=68) (actual time=0.001..0.266 rows=1100 loops=1)\n           Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n   CTE missing_capability\n     ->  Hash Right Join  (cost=1652.75..323644.99 rows=1 width=68) (actual time=137.544..137.544 rows=0 loops=1)\n           Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n           Hash Cond: ((rhnpackagecapability_1.name)::text = wanted_capability_2.name)\n           Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n           Filter: (rhnpackagecapability_1.id IS NULL)\n           Rows Removed by Filter: 1100\n           Buffers: shared read=7217\n           ->  Seq Scan on public.rhnpackagecapability rhnpackagecapability_1  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..77.305 rows=490964 loops=1)\n                 Output: rhnpackagecapability_1.id, rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.created, rhnpackagecapability_1.modified\n                 Buffers: shared read=7217\n           ->  Hash  (cost=1100.00..1100.00 rows=1100 width=68) (actual time=0.812..0.812 rows=1100 loops=1)\n                 Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n                 Buckets: 2048  Batches: 1  Memory Usage: 134kB\n                 ->  CTE Scan on wanted_capability wanted_capability_2  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.000..0.574 rows=1100 loops=1)\n                       Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n   CTE inserted_capability\n     ->  Insert on public.rhnpackagecapability rhnpackagecapability_2  (cost=0.00..1.51 rows=1 width=1080) (actual time=137.546..137.546 rows=0 loops=1)\n           Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n           Conflict Resolution: NOTHING\n           Tuples Inserted: 0\n           Conflicting Tuples: 0\n         
  Buffers: shared read=7217\n           ->  Subquery Scan on \"*SELECT*\"  (cost=0.00..1.51 rows=1 width=1080) (actual time=137.545..137.545 rows=0 loops=1)\n                 Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n                 Buffers: shared read=7217\n                 ->  CTE Scan on missing_capability  (cost=0.00..1.00 rows=1 width=72) (actual time=137.544..137.545 rows=0 loops=1)\n                       Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n                       Buffers: shared read=7217\n   ->  Sort  (cost=324754.64..324754.65 rows=2 width=36) (actual time=221.126..221.165 rows=1100 loops=1)\n         Output: wanted_capability.ordering, inserted_capability.id\n         Sort Key: wanted_capability.ordering, inserted_capability.id\n         Sort Method: quicksort  Memory: 100kB\n         Buffers: shared hit=7226 read=7217\n         ->  Append  (cost=1.50..324754.63 rows=2 width=36) (actual time=169.421..220.870 rows=1100 loops=1)\n               Buffers: shared hit=7220 read=7217\n               ->  Hash Join  (cost=1.50..1108.64 rows=1 width=36) (actual time=137.573..137.573 rows=0 loops=1)\n                     Output: wanted_capability.ordering, inserted_capability.id\n                     Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n                     Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n                     Buffers: shared hit=3 read=7217\n                     ->  CTE Scan on wanted_capability  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n                           Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n                     ->  Hash  (cost=1.00..1.00 rows=1 width=1064) (actual time=137.547..137.547 rows=0 loops=1)\n                           Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                           Buckets: 1024  Batches: 1  Memory Usage: 8kB\n                           Buffers: shared read=7217\n                           ->  CTE Scan on inserted_capability  (cost=0.00..1.00 rows=1 width=1064) (actual time=137.547..137.547 rows=0 loops=1)\n                                 Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                                 Buffers: shared read=7217\n               ->  Hash Join  (cost=1652.75..323644.99 rows=1 width=10) (actual time=31.846..83.234 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Hash Cond: ((rhnpackagecapability.name)::text = wanted_capability_1.name)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.007..29.702 rows=490964 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version, rhnpackagecapability.created, rhnpackagecapability.modified\n                           Buffers: shared hit=7217\n                     ->  Hash  (cost=1100.00..1100.00 rows=1100 width=68) (actual time=0.257..0.257 rows=1100 loops=1)\n                           
Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                           Buckets: 2048  Batches: 1  Memory Usage: 134kB\n                           ->  CTE Scan on wanted_capability wanted_capability_1  (cost=0.00..1100.00 rows=1100 width=68) (actual time=0.001..0.067 rows=1100 loops=1)\n                                 Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n Planning time: 3.232 ms\n Execution time: 221.668 ms\n\n> Probably, there is a problem in wrong estimation.\n\nYes, that's what I would also assume.\n\n> What can be expected because CTE is optimization fence in this version\nI am aware of that, but would not expect it to really be a problem in this specific case. Fact that CTE is an optimization fence is true regardless of work_mem, so ATM I cannot see why it would lead to slow down the query in high work_mem case.\n\nI am sure I am still missing something...This parts looks strange                     ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version                           Sort Key: rhnpackagecapability.name                           Sort Method: quicksort  Memory: 79862kB                           Buffers: shared hit=7217                           ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)                                 Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version                                 Buffers: shared hit=7217I did some test casepostgres=# explain (analyze, buffers) select * from foo2 join foo3 on foo2.name = foo3.name;                                                     QUERY PLAN                                                     -------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=188.62..33869.93 rows=866330 width=78) (actual time=6.247..369.081 rows=934000 loops=1)   Hash Cond: (foo2.name = foo3.name)   Buffers: shared hit=2224 read=4092   ->  Seq Scan on foo2  (cost=0.00..12518.00 rows=625000 width=48) (actual time=0.095..70.174 rows=625000 loops=1)         Buffers: shared hit=2176 read=4092   ->  Hash  (cost=110.50..110.50 rows=6250 width=30) (actual time=6.116..6.116 rows=6250 loops=1)         Buckets: 8192  Batches: 1  Memory Usage: 447kB         Buffers: shared hit=48         ->  Seq Scan on foo3  (cost=0.00..110.50 rows=6250 width=30) (actual time=0.014..1.801 rows=6250 loops=1)               Buffers: shared hit=48 Planning Time: 1.190 ms Execution Time: 414.264 ms(12 rows)postgres=# explain (analyze, buffers) select * from foo2 join foo3 on foo2.name = foo3.name;                                                        QUERY PLAN                                                        -------------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=73189.73..86215.92 rows=866330 width=78) (actual time=1499.805..1835.262 rows=934000 loops=1)   Merge Cond: (foo3.name = foo2.name)   Buffers: shared hit=2261 read=4060, temp read=13104 written=10023   ->  Sort  (cost=504.55..520.18 rows=6250 width=30) (actual time=21.313..21.895 rows=6250 loops=1)         Sort Key: foo3.name         Sort Method: 
quicksort  Memory: 763kB         Buffers: shared hit=53         ->  Seq Scan on foo3  (cost=0.00..110.50 rows=6250 width=30) (actual time=0.017..1.802 rows=6250 loops=1)               Buffers: shared hit=48   ->  Sort  (cost=72685.18..74247.68 rows=625000 width=48) (actual time=1478.480..1602.358 rows=933999 loops=1)         Sort Key: foo2.name         Sort Method: external sort  Disk: 40088kB         Buffers: shared hit=2208 read=4060, temp read=12196 written=10023         ->  Seq Scan on foo2  (cost=0.00..12518.00 rows=625000 width=48) (actual time=0.039..63.340 rows=625000 loops=1)               Buffers: shared hit=2208 read=4060 Planning Time: 1.116 ms Execution Time: 1884.985 ms(17 rows)And looks little bit strange the cost on seq scan on foo2 12K against cost of your public.rhnpackagecapability - 252K.Do you have some planner variables changed - like seq_page_cost?I did some tests and it looks so a penalization for sort long keys is not too high. In your case it is reason why sort is very slow (probably due slow locales). Then the cost of hash join and sort is similar, although in reality it is not true.On your plan is strange the cost of seq scan. It is surprisingly high.RegardsPavel \n\nThanks!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team", "msg_date": "Mon, 30 Mar 2020 12:12:33 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "On 3/30/20 12:12 PM, Pavel Stehule wrote:\n> Do you have some planner variables changed - like seq_page_cost?\n\nThat one was not changed but another one is - cpu_tuple_cost (to 0.5). Indeed bringing it back to its default does improve the query time significantly:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=36735.61..36735.63 rows=2 width=36) (actual time=357.825..358.036 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Buffers: shared hit=14443\n CTE wanted_capability\n -> Values Scan on \"*VALUES*\" (cost=0.00..13.75 rows=1100 width=68) (actual time=0.001..0.355 rows=1100 loops=1)\n Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n CTE missing_capability\n -> Hash Left Join (cost=18263.69..18347.78 rows=1 width=68) (actual time=183.826..183.826 rows=0 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n Hash Cond: (wanted_capability_2.name = (rhnpackagecapability_1.name)::text)\n Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n Filter: (rhnpackagecapability_1.id IS NULL)\n Rows Removed by Filter: 1100\n Buffers: shared hit=7217\n -> CTE Scan on wanted_capability wanted_capability_2 (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.729 rows=1100 loops=1)\n Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n -> Hash (cost=12126.64..12126.64 rows=490964 width=79) (actual time=181.477..181.477 rows=490964 loops=1)\n Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n Buckets: 524288 Batches: 1 Memory Usage: 53907kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability rhnpackagecapability_1 (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.009..57.663 rows=490964 loops=1)\n Output: 
rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n Buffers: shared hit=7217\n CTE inserted_capability\n -> Insert on public.rhnpackagecapability rhnpackagecapability_2 (cost=0.00..0.04 rows=1 width=1080) (actual time=183.828..183.828 rows=0 loops=1)\n Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n Conflict Resolution: NOTHING\n Tuples Inserted: 0\n Conflicting Tuples: 0\n Buffers: shared hit=7217\n -> Subquery Scan on \"*SELECT*\" (cost=0.00..0.04 rows=1 width=1080) (actual time=183.827..183.827 rows=0 loops=1)\n Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n Buffers: shared hit=7217\n -> CTE Scan on missing_capability (cost=0.00..0.02 rows=1 width=72) (actual time=183.827..183.827 rows=0 loops=1)\n Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n Buffers: shared hit=7217\n -> Sort (cost=18374.04..18374.04 rows=2 width=36) (actual time=357.825..357.862 rows=1100 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Sort Key: wanted_capability.ordering, inserted_capability.id\n Sort Method: quicksort Memory: 100kB\n Buffers: shared hit=14443\n -> Append (cost=0.03..18374.03 rows=2 width=36) (actual time=357.071..357.660 rows=1100 loops=1)\n Buffers: shared hit=14437\n -> Hash Join (cost=0.03..26.23 rows=1 width=36) (actual time=183.847..183.847 rows=0 loops=1)\n Output: wanted_capability.ordering, inserted_capability.id\n Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n Buffers: shared hit=7220\n -> CTE Scan on wanted_capability (cost=0.00..22.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n -> Hash (cost=0.02..0.02 rows=1 width=1064) (actual time=183.829..183.829 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buckets: 1024 Batches: 1 Memory Usage: 8kB\n Buffers: shared hit=7217\n -> CTE Scan on inserted_capability (cost=0.00..0.02 rows=1 width=1064) (actual time=183.828..183.828 rows=0 loops=1)\n Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n Buffers: shared hit=7217\n -> Hash Join (cost=18263.69..18347.78 rows=1 width=10) (actual time=173.223..173.750 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, rhnpackagecapability.id\n Hash Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> CTE Scan on wanted_capability wanted_capability_1 (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.070 rows=1100 loops=1)\n Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n -> Hash (cost=12126.64..12126.64 rows=490964 width=79) (actual time=172.220..172.220 rows=490964 loops=1)\n Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n Buckets: 524288 Batches: 1 Memory Usage: 53922kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573 rows=490964 loops=1)\n Output: rhnpackagecapability.id, rhnpackagecapability.name, 
rhnpackagecapability.version\n Buffers: shared hit=7217\n Planning time: 2.145 ms\n Execution time: 358.773 ms\n\n\nIs that an unreasonable value? For the sake of this discussison, I am targeting fairly average bare-metal SSD-backed servers with recent CPUs (let's say 3 year old maximum), with ample available RAM.\n\nThanks!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team\n\n\n", "msg_date": "Mon, 30 Mar 2020 15:09:12 +0200", "msg_from": "Silvio Moioli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "po 30. 3. 2020 v 15:09 odesílatel Silvio Moioli <[email protected]> napsal:\n\n> On 3/30/20 12:12 PM, Pavel Stehule wrote:\n> > Do you have some planner variables changed - like seq_page_cost?\n>\n> That one was not changed but another one is - cpu_tuple_cost (to 0.5).\n> Indeed bringing it back to its default does improve the query time\n> significantly:\n>\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=36735.61..36735.63 rows=2 width=36) (actual\n> time=357.825..358.036 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Buffers: shared hit=14443\n> CTE wanted_capability\n> -> Values Scan on \"*VALUES*\" (cost=0.00..13.75 rows=1100 width=68)\n> (actual time=0.001..0.355 rows=1100 loops=1)\n> Output: \"*VALUES*\".column1, \"*VALUES*\".column2,\n> \"*VALUES*\".column3\n> CTE missing_capability\n> -> Hash Left Join (cost=18263.69..18347.78 rows=1 width=68) (actual\n> time=183.826..183.826 rows=0 loops=1)\n> Output: wanted_capability_2.ordering, wanted_capability_2.name,\n> wanted_capability_2.version\n> Hash Cond: (wanted_capability_2.name = (\n> rhnpackagecapability_1.name)::text)\n> Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM\n> (rhnpackagecapability_1.version)::text))\n> Filter: (rhnpackagecapability_1.id IS NULL)\n> Rows Removed by Filter: 1100\n> Buffers: shared hit=7217\n> -> CTE Scan on wanted_capability wanted_capability_2\n> (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.729 rows=1100\n> loops=1)\n> Output: wanted_capability_2.ordering,\n> wanted_capability_2.name, wanted_capability_2.version\n> -> Hash (cost=12126.64..12126.64 rows=490964 width=79)\n> (actual time=181.477..181.477 rows=490964 loops=1)\n> Output: rhnpackagecapability_1.name,\n> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n> Buckets: 524288 Batches: 1 Memory Usage: 53907kB\n> Buffers: shared hit=7217\n> -> Seq Scan on public.rhnpackagecapability\n> rhnpackagecapability_1 (cost=0.00..12126.64 rows=490964 width=79) (actual\n> time=0.009..57.663 rows=490964 loops=1)\n> Output: rhnpackagecapability_1.name,\n> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n> Buffers: shared hit=7217\n> CTE inserted_capability\n> -> Insert on public.rhnpackagecapability rhnpackagecapability_2\n> (cost=0.00..0.04 rows=1 width=1080) (actual time=183.828..183.828 rows=0\n> loops=1)\n> Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name,\n> rhnpackagecapability_2.version\n> Conflict Resolution: NOTHING\n> Tuples Inserted: 0\n> Conflicting Tuples: 0\n> Buffers: shared hit=7217\n> -> Subquery Scan on \"*SELECT*\" (cost=0.00..0.04 rows=1\n> width=1080) (actual time=183.827..183.827 rows=0 loops=1)\n> Output: \"*SELECT*\".nextval, \"*SELECT*\".name,\n> \"*SELECT*\".version, CURRENT_TIMESTAMP, 
CURRENT_TIMESTAMP\n> Buffers: shared hit=7217\n> -> CTE Scan on missing_capability (cost=0.00..0.02\n> rows=1 width=72) (actual time=183.827..183.827 rows=0 loops=1)\n> Output:\n> nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name,\n> missing_capability.version\n> Buffers: shared hit=7217\n> -> Sort (cost=18374.04..18374.04 rows=2 width=36) (actual\n> time=357.825..357.862 rows=1100 loops=1)\n> Output: wanted_capability.ordering, inserted_capability.id\n> Sort Key: wanted_capability.ordering, inserted_capability.id\n> Sort Method: quicksort Memory: 100kB\n> Buffers: shared hit=14443\n> -> Append (cost=0.03..18374.03 rows=2 width=36) (actual\n> time=357.071..357.660 rows=1100 loops=1)\n> Buffers: shared hit=14437\n> -> Hash Join (cost=0.03..26.23 rows=1 width=36) (actual\n> time=183.847..183.847 rows=0 loops=1)\n> Output: wanted_capability.ordering,\n> inserted_capability.id\n> Hash Cond: (wanted_capability.name = (\n> inserted_capability.name)::text)\n> Join Filter: (NOT (wanted_capability.version IS\n> DISTINCT FROM (inserted_capability.version)::text))\n> Buffers: shared hit=7220\n> -> CTE Scan on wanted_capability (cost=0.00..22.00\n> rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n> Output: wanted_capability.ordering,\n> wanted_capability.name, wanted_capability.version\n> -> Hash (cost=0.02..0.02 rows=1 width=1064) (actual\n> time=183.829..183.829 rows=0 loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, inserted_capability.version\n> Buckets: 1024 Batches: 1 Memory Usage: 8kB\n> Buffers: shared hit=7217\n> -> CTE Scan on inserted_capability\n> (cost=0.00..0.02 rows=1 width=1064) (actual time=183.828..183.828 rows=0\n> loops=1)\n> Output: inserted_capability.id,\n> inserted_capability.name, inserted_capability.version\n> Buffers: shared hit=7217\n> -> Hash Join (cost=18263.69..18347.78 rows=1 width=10)\n> (actual time=173.223..173.750 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> rhnpackagecapability.id\n> Hash Cond: (wanted_capability_1.name = (\n> rhnpackagecapability.name)::text)\n> Join Filter: (NOT (wanted_capability_1.version IS\n> DISTINCT FROM (rhnpackagecapability.version)::text))\n> Buffers: shared hit=7217\n> -> CTE Scan on wanted_capability\n> wanted_capability_1 (cost=0.00..22.00 rows=1100 width=68) (actual\n> time=0.000..0.070 rows=1100 loops=1)\n> Output: wanted_capability_1.ordering,\n> wanted_capability_1.name, wanted_capability_1.version\n> -> Hash (cost=12126.64..12126.64 rows=490964\n> width=79) (actual time=172.220..172.220 rows=490964 loops=1)\n> Output: rhnpackagecapability.id,\n> rhnpackagecapability.name, rhnpackagecapability.version\n> Buckets: 524288 Batches: 1 Memory Usage:\n> 53922kB\n> Buffers: shared hit=7217\n> -> Seq Scan on public.rhnpackagecapability\n> (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573\n> rows=490964 loops=1)\n> Output: rhnpackagecapability.id,\n> rhnpackagecapability.name, rhnpackagecapability.version\n> Buffers: shared hit=7217\n> Planning time: 2.145 ms\n> Execution time: 358.773 ms\n>\n>\n> Is that an unreasonable value? For the sake of this discussison, I am\n> targeting fairly average bare-metal SSD-backed servers with recent CPUs\n> (let's say 3 year old maximum), with ample available RAM.\n>\n\nthese numbers are artificial - important is stable behave, and it's hard to\nsay, what is correct value. 
But when these values are out of good range,\nthen some calculations can be unstable and generated plans can be strange.\n\nThere is interesting another fact, new plan uses hash from bigger table,\nand then hash join is slower. This is strange.\n\n\n -> Hash Join (cost=18263.69..18347.78 rows=1 width=10)\n(actual time=173.223..173.750 rows=1100 loops=1)\n Output: wanted_capability_1.ordering,\nrhnpackagecapability.id\n Hash Cond: (wanted_capability_1.name = (\nrhnpackagecapability.name)::text)\n Join Filter: (NOT (wanted_capability_1.version IS\nDISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> CTE Scan on wanted_capability wanted_capability_1\n(cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.070 rows=1100\nloops=1)\n Output: wanted_capability_1.ordering,\nwanted_capability_1.name, wanted_capability_1.version\n -> Hash (cost=12126.64..12126.64 rows=490964\nwidth=79) (actual time=172.220..172.220 rows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Buckets: 524288 Batches: 1 Memory Usage:\n53922kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability\n(cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573\nrows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Buffers: shared hit=7217\n\nversus\n\n -> Hash Join (cost=1652.75..323644.99 rows=1 width=10)\n(actual time=31.846..83.234 rows=1100 loops=1)\n Output: wanted_capability_1.ordering,\nrhnpackagecapability.id\n Hash Cond: ((rhnpackagecapability.name)::text =\nwanted_capability_1.name)\n Join Filter: (NOT (wanted_capability_1.version IS\nDISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability\n(cost=0.00..252699.00 rows=490964 width=79) (actual time=0.007..29.702\nrows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version,\nrhnpackagecapability.created, rhnpackagecapability.modified\n Buffers: shared hit=7217\n -> Hash (cost=1100.00..1100.00 rows=1100 width=68)\n(actual time=0.257..0.257 rows=1100 loops=1)\n Output: wanted_capability_1.ordering,\nwanted_capability_1.name, wanted_capability_1.version\n Buckets: 2048 Batches: 1 Memory Usage: 134kB\n -> CTE Scan on wanted_capability\nwanted_capability_1 (cost=0.00..1100.00 rows=1100 width=68) (actual\ntime=0.001..0.067 rows=1100 loops=1)\n Output: wanted_capability_1.ordering,\nwanted_capability_1.name, wanted_capability_1.version\n\n\n> Thanks!\n>\n> Regards,\n> --\n> Silvio Moioli\n> SUSE Manager Development Team\n>\n>\n>\n\npo 30. 3. 2020 v 15:09 odesílatel Silvio Moioli <[email protected]> napsal:On 3/30/20 12:12 PM, Pavel Stehule wrote:\n> Do you have some planner variables changed - like seq_page_cost?\n\nThat one was not changed but another one is - cpu_tuple_cost (to 0.5). 
Indeed bringing it back to its default does improve the query time significantly:\n\n                                                                                   QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=36735.61..36735.63 rows=2 width=36) (actual time=357.825..358.036 rows=1100 loops=1)\n   Output: wanted_capability.ordering, inserted_capability.id\n   Buffers: shared hit=14443\n   CTE wanted_capability\n     ->  Values Scan on \"*VALUES*\"  (cost=0.00..13.75 rows=1100 width=68) (actual time=0.001..0.355 rows=1100 loops=1)\n           Output: \"*VALUES*\".column1, \"*VALUES*\".column2, \"*VALUES*\".column3\n   CTE missing_capability\n     ->  Hash Left Join  (cost=18263.69..18347.78 rows=1 width=68) (actual time=183.826..183.826 rows=0 loops=1)\n           Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n           Hash Cond: (wanted_capability_2.name = (rhnpackagecapability_1.name)::text)\n           Join Filter: (NOT (wanted_capability_2.version IS DISTINCT FROM (rhnpackagecapability_1.version)::text))\n           Filter: (rhnpackagecapability_1.id IS NULL)\n           Rows Removed by Filter: 1100\n           Buffers: shared hit=7217\n           ->  CTE Scan on wanted_capability wanted_capability_2  (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.729 rows=1100 loops=1)\n                 Output: wanted_capability_2.ordering, wanted_capability_2.name, wanted_capability_2.version\n           ->  Hash  (cost=12126.64..12126.64 rows=490964 width=79) (actual time=181.477..181.477 rows=490964 loops=1)\n                 Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n                 Buckets: 524288  Batches: 1  Memory Usage: 53907kB\n                 Buffers: shared hit=7217\n                 ->  Seq Scan on public.rhnpackagecapability rhnpackagecapability_1  (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.009..57.663 rows=490964 loops=1)\n                       Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id\n                       Buffers: shared hit=7217\n   CTE inserted_capability\n     ->  Insert on public.rhnpackagecapability rhnpackagecapability_2  (cost=0.00..0.04 rows=1 width=1080) (actual time=183.828..183.828 rows=0 loops=1)\n           Output: rhnpackagecapability_2.id, rhnpackagecapability_2.name, rhnpackagecapability_2.version\n           Conflict Resolution: NOTHING\n           Tuples Inserted: 0\n           Conflicting Tuples: 0\n           Buffers: shared hit=7217\n           ->  Subquery Scan on \"*SELECT*\"  (cost=0.00..0.04 rows=1 width=1080) (actual time=183.827..183.827 rows=0 loops=1)\n                 Output: \"*SELECT*\".nextval, \"*SELECT*\".name, \"*SELECT*\".version, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP\n                 Buffers: shared hit=7217\n                 ->  CTE Scan on missing_capability  (cost=0.00..0.02 rows=1 width=72) (actual time=183.827..183.827 rows=0 loops=1)\n                       Output: nextval('rhn_pkg_capability_id_seq'::regclass), missing_capability.name, missing_capability.version\n                       Buffers: shared hit=7217\n   ->  Sort  (cost=18374.04..18374.04 rows=2 width=36) (actual time=357.825..357.862 rows=1100 loops=1)\n         Output: wanted_capability.ordering, inserted_capability.id\n         Sort 
Key: wanted_capability.ordering, inserted_capability.id\n         Sort Method: quicksort  Memory: 100kB\n         Buffers: shared hit=14443\n         ->  Append  (cost=0.03..18374.03 rows=2 width=36) (actual time=357.071..357.660 rows=1100 loops=1)\n               Buffers: shared hit=14437\n               ->  Hash Join  (cost=0.03..26.23 rows=1 width=36) (actual time=183.847..183.847 rows=0 loops=1)\n                     Output: wanted_capability.ordering, inserted_capability.id\n                     Hash Cond: (wanted_capability.name = (inserted_capability.name)::text)\n                     Join Filter: (NOT (wanted_capability.version IS DISTINCT FROM (inserted_capability.version)::text))\n                     Buffers: shared hit=7220\n                     ->  CTE Scan on wanted_capability  (cost=0.00..22.00 rows=1100 width=68) (actual time=0.002..0.002 rows=1 loops=1)\n                           Output: wanted_capability.ordering, wanted_capability.name, wanted_capability.version\n                     ->  Hash  (cost=0.02..0.02 rows=1 width=1064) (actual time=183.829..183.829 rows=0 loops=1)\n                           Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                           Buckets: 1024  Batches: 1  Memory Usage: 8kB\n                           Buffers: shared hit=7217\n                           ->  CTE Scan on inserted_capability  (cost=0.00..0.02 rows=1 width=1064) (actual time=183.828..183.828 rows=0 loops=1)\n                                 Output: inserted_capability.id, inserted_capability.name, inserted_capability.version\n                                 Buffers: shared hit=7217\n               ->  Hash Join  (cost=18263.69..18347.78 rows=1 width=10) (actual time=173.223..173.750 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Hash Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  CTE Scan on wanted_capability wanted_capability_1  (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.070 rows=1100 loops=1)\n                           Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                     ->  Hash  (cost=12126.64..12126.64 rows=490964 width=79) (actual time=172.220..172.220 rows=490964 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                           Buckets: 524288  Batches: 1  Memory Usage: 53922kB\n                           Buffers: shared hit=7217\n                           ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573 rows=490964 loops=1)\n                                 Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                                 Buffers: shared hit=7217\n Planning time: 2.145 ms\n Execution time: 358.773 ms\n\n\nIs that an unreasonable value? For the sake of this discussison, I am targeting fairly average bare-metal SSD-backed servers with recent CPUs (let's say 3 year old maximum), with ample available RAM.these numbers are artificial - important is stable behave, and it's hard to say, what is correct value. 
But when these values are out of good range, then some calculations can be unstable and generated plans can be strange.There is interesting another fact, new plan uses hash from bigger table, and then hash join is slower. This is strange.               ->  Hash Join  (cost=18263.69..18347.78 rows=1 width=10) (actual time=173.223..173.750 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Hash Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  CTE Scan on wanted_capability wanted_capability_1  (cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.070 rows=1100 loops=1)\n                           Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                     ->  Hash  (cost=12126.64..12126.64 rows=490964 width=79) (actual time=172.220..172.220 rows=490964 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                           Buckets: 524288  Batches: 1  Memory Usage: 53922kB\n                           Buffers: shared hit=7217\n                           ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573 rows=490964 loops=1)\n                                 Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                                 Buffers: shared hit=7217versus               ->  Hash Join  (cost=1652.75..323644.99 rows=1 width=10) (actual time=31.846..83.234 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Hash Cond: ((rhnpackagecapability.name)::text = wanted_capability_1.name)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  Seq Scan on public.rhnpackagecapability  \n(cost=0.00..252699.00 rows=490964 width=79) (actual time=0.007..29.702 \nrows=490964 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version, rhnpackagecapability.created, rhnpackagecapability.modified\n                           Buffers: shared hit=7217\n                     ->  Hash  (cost=1100.00..1100.00 rows=1100 width=68) (actual time=0.257..0.257 rows=1100 loops=1)\n                           Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                           Buckets: 2048  Batches: 1  Memory Usage: 134kB\n                           ->  CTE Scan on wanted_capability \nwanted_capability_1  (cost=0.00..1100.00 rows=1100 width=68) (actual \ntime=0.001..0.067 rows=1100 loops=1)\n                                 Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n\nThanks!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team", "msg_date": "Mon, 30 Mar 2020 15:33:32 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": ">\n> Is that an unreasonable value? 
For the sake of this discussison, I am\n> targeting fairly average bare-metal SSD-backed servers with recent CPUs\n> (let's say 3 year old maximum), with ample available RAM.\n>\n\nif you have SSD, then you can decrease RANDOM_PAGE_COST to 2 maybe 1.5. But\nprobably there will not be impact to this query.\n\n\n> Thanks!\n>\n> Regards,\n> --\n> Silvio Moioli\n> SUSE Manager Development Team\n>\n>\n>\n\n\n\nIs that an unreasonable value? For the sake of this discussison, I am targeting fairly average bare-metal SSD-backed servers with recent CPUs (let's say 3 year old maximum), with ample available RAM.if you have SSD, then you can decrease RANDOM_PAGE_COST to 2 maybe 1.5. But probably there will not be impact to this query. \n\nThanks!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team", "msg_date": "Mon, 30 Mar 2020 15:40:11 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> po 30. 3. 2020 v 10:12 odesílatel Silvio Moioli <[email protected]> napsal:\n>> -> Sort (cost=299108.00..300335.41 rows=490964 width=79)\n>> (actual time=6475.147..6494.111 rows=462600 loops=1)\n>> Output: rhnpackagecapability_1.name,\n>> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n>> Sort Key: rhnpackagecapability_1.name\n>> Sort Method: quicksort Memory: 79862kB\n>> Buffers: shared hit=7217\n>> -> Seq Scan on public.rhnpackagecapability rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..59.976 rows=490964 loops=1)\n\n>> -> Sort (cost=299108.00..300335.41 rows=490964\n>> width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n>> Output: rhnpackagecapability.id,\n>> rhnpackagecapability.name, rhnpackagecapability.version\n>> Sort Key: rhnpackagecapability.name\n>> Sort Method: quicksort Memory: 79862kB\n>> Buffers: shared hit=7217\n>> -> Seq Scan on public.rhnpackagecapability (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)\n\n> I did some tests and it looks so a penalization for sort long keys is not\n> too high. In your case it is reason why sort is very slow (probably due\n> slow locales). Then the cost of hash join and sort is similar, although in\n> reality it is not true.\n\nYeah, the run time of the slow query seems to be almost entirely expended\nin these two sort steps, while the planner doesn't think that they'll be\nvery expensive. Tweaking unrelated cost settings to work around that is\nnot going to be helpful. What you'd be better off trying to do is fix\nthe slow sorting. Is rhnpackagecapability.name some peculiar datatype?\nIf it's just relatively short text strings, as one would guess from the\ncolumn name, then what you must be looking at is really slow locale-based\nsorting. What's the database's LC_COLLATE setting? Can you get away\nwith switching it to C?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 12:02:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "po 30. 3. 2020 v 18:02 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > po 30. 3. 
2020 v 10:12 odesílatel Silvio Moioli <[email protected]> napsal:\n> >> -> Sort (cost=299108.00..300335.41 rows=490964 width=79)\n> >> (actual time=6475.147..6494.111 rows=462600 loops=1)\n> >> Output: rhnpackagecapability_1.name,\n> >> rhnpackagecapability_1.version, rhnpackagecapability_1.id\n> >> Sort Key: rhnpackagecapability_1.name\n> >> Sort Method: quicksort Memory: 79862kB\n> >> Buffers: shared hit=7217\n> >> -> Seq Scan on public.rhnpackagecapability\n> rhnpackagecapability_1 (cost=0.00..252699.00 rows=490964 width=79) (actual\n> time=0.016..59.976 rows=490964 loops=1)\n>\n> >> -> Sort (cost=299108.00..300335.41 rows=490964\n> >> width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n> >> Output: rhnpackagecapability.id,\n> >> rhnpackagecapability.name, rhnpackagecapability.version\n> >> Sort Key: rhnpackagecapability.name\n> >> Sort Method: quicksort Memory: 79862kB\n> >> Buffers: shared hit=7217\n> >> -> Seq Scan on public.rhnpackagecapability\n> (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467\n> rows=490964 loops=1)\n>\n> > I did some tests and it looks so a penalization for sort long keys is not\n> > too high. In your case it is reason why sort is very slow (probably due\n> > slow locales). Then the cost of hash join and sort is similar, although\n> in\n> > reality it is not true.\n>\n> Yeah, the run time of the slow query seems to be almost entirely expended\n> in these two sort steps, while the planner doesn't think that they'll be\n> very expensive. Tweaking unrelated cost settings to work around that is\n> not going to be helpful. What you'd be better off trying to do is fix\n> the slow sorting. Is rhnpackagecapability.name some peculiar datatype?\n> If it's just relatively short text strings, as one would guess from the\n> column name, then what you must be looking at is really slow locale-based\n> sorting. What's the database's LC_COLLATE setting? Can you get away\n> with switching it to C?\n>\n\nThere is another interesting thing\n\n -> Hash Join (cost=18263.69..18347.78 rows=1 width=10)\n(actual time=173.223..173.750 rows=1100 loops=1)\n Output: wanted_capability_1.ordering,\nrhnpackagecapability.id\n Hash Cond: (wanted_capability_1.name = (\nrhnpackagecapability.name)::text)\n Join Filter: (NOT (wanted_capability_1.version IS\nDISTINCT FROM (rhnpackagecapability.version)::text))\n Buffers: shared hit=7217\n -> CTE Scan on wanted_capability wanted_capability_1\n(cost=0.00..22.00 rows=1100 width=68) (actual time=0.000..0.070 rows=1100\nloops=1)\n Output: wanted_capability_1.ordering,\nwanted_capability_1.name, wanted_capability_1.version\n -> Hash (cost=12126.64..12126.64 rows=490964\nwidth=79) (actual time=172.220..172.220 rows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Buckets: 524288 Batches: 1 Memory Usage:\n53922kB\n Buffers: shared hit=7217\n -> Seq Scan on public.rhnpackagecapability\n(cost=0.00..12126.64 rows=490964 width=79) (actual time=0.008..52.573\nrows=490964 loops=1)\n Output: rhnpackagecapability.id,\nrhnpackagecapability.name, rhnpackagecapability.version\n Buffers: shared hit=7217\n\nCTE scan has only 1100 rows, public.rhnpackagecapability has 490964 rows.\nBut planner does hash from public.rhnpackagecapability table. It cannot be\nvery effective.\n\nPavel\n\n\n\n> regards, tom lane\n>\n\npo 30. 3. 2020 v 18:02 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> po 30. 3. 
2020 v 10:12 odesílatel Silvio Moioli <[email protected]> napsal:\n>> ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79)\n>>         (actual time=6475.147..6494.111 rows=462600 loops=1)\n>>         Output: rhnpackagecapability_1.name,\n>>         rhnpackagecapability_1.version, rhnpackagecapability_1.id\n>>         Sort Key: rhnpackagecapability_1.name\n>>         Sort Method: quicksort  Memory: 79862kB\n>>         Buffers: shared hit=7217\n>>         ->  Seq Scan on public.rhnpackagecapability rhnpackagecapability_1  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..59.976 rows=490964 loops=1)\n\n>> ->  Sort  (cost=299108.00..300335.41 rows=490964\n>>         width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)\n>>         Output: rhnpackagecapability.id,\n>>         rhnpackagecapability.name, rhnpackagecapability.version\n>>         Sort Key: rhnpackagecapability.name\n>>         Sort Method: quicksort  Memory: 79862kB\n>>         Buffers: shared hit=7217\n>>         ->  Seq Scan on public.rhnpackagecapability (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)\n\n> I did some tests and it looks so a penalization for sort long keys is not\n> too high. In your case it is reason why sort is very slow (probably due\n> slow locales). Then the cost of hash join and sort is similar, although in\n> reality it is not true.\n\nYeah, the run time of the slow query seems to be almost entirely expended\nin these two sort steps, while the planner doesn't think that they'll be\nvery expensive.  Tweaking unrelated cost settings to work around that is\nnot going to be helpful.  What you'd be better off trying to do is fix\nthe slow sorting.  Is rhnpackagecapability.name some peculiar datatype?\nIf it's just relatively short text strings, as one would guess from the\ncolumn name, then what you must be looking at is really slow locale-based\nsorting.  What's the database's LC_COLLATE setting?  
Can you get away\nwith switching it to C?There is another interesting thing                ->  Hash Join  (cost=18263.69..18347.78 rows=1 width=10) (actual time=173.223..173.750 rows=1100 loops=1)\n                     Output: wanted_capability_1.ordering, rhnpackagecapability.id\n                     Hash Cond: (wanted_capability_1.name = (rhnpackagecapability.name)::text)\n                     Join Filter: (NOT (wanted_capability_1.version IS DISTINCT FROM (rhnpackagecapability.version)::text))\n                     Buffers: shared hit=7217\n                     ->  CTE Scan on wanted_capability \nwanted_capability_1  (cost=0.00..22.00 rows=1100 width=68) (actual \ntime=0.000..0.070 rows=1100 loops=1)\n                           Output: wanted_capability_1.ordering, wanted_capability_1.name, wanted_capability_1.version\n                     ->  Hash  (cost=12126.64..12126.64 rows=490964 \nwidth=79) (actual time=172.220..172.220 rows=490964 loops=1)\n                           Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                           Buckets: 524288  Batches: 1  Memory Usage: 53922kB\n                           Buffers: shared hit=7217\n                           ->  Seq Scan on \npublic.rhnpackagecapability  (cost=0.00..12126.64 rows=490964 width=79) \n(actual time=0.008..52.573 rows=490964 loops=1)\n                                 Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version\n                                 Buffers: shared hit=7217CTE scan has only 1100 rows, public.rhnpackagecapability  has 490964 rows. But planner does hash from public.rhnpackagecapability table. It cannot be very effective.Pavel\n\n                        regards, tom lane", "msg_date": "Mon, 30 Mar 2020 18:18:17 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> CTE scan has only 1100 rows, public.rhnpackagecapability has 490964 rows.\n> But planner does hash from public.rhnpackagecapability table. It cannot be\n> very effective.\n\n[ shrug... ] Without stats on the CTE output, the planner is very\nleery of putting it on the inside of a hash join. The CTE might\nproduce output that ends up in just a few hash buckets, degrading\nthe join to something not much better than a nested loop. As long\nas there's enough memory to hash the known-well-distributed table,\nputting it on the inside is safer and no costlier.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 12:36:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "po 30. 3. 2020 v 18:36 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > CTE scan has only 1100 rows, public.rhnpackagecapability has 490964\n> rows.\n> > But planner does hash from public.rhnpackagecapability table. It cannot\n> be\n> > very effective.\n>\n> [ shrug... ] Without stats on the CTE output, the planner is very\n> leery of putting it on the inside of a hash join. The CTE might\n> produce output that ends up in just a few hash buckets, degrading\n> the join to something not much better than a nested loop. 
As long\n> as there's enough memory to hash the known-well-distributed table,\n> putting it on the inside is safer and no costlier.\n>\n\nok\n\nRegards\n\nPavel\n\n\n\n> regards, tom lane\n>\n\npo 30. 3. 2020 v 18:36 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> CTE scan has only 1100 rows, public.rhnpackagecapability  has 490964 rows.\n> But planner does hash from public.rhnpackagecapability table. It cannot be\n> very effective.\n\n[ shrug... ]  Without stats on the CTE output, the planner is very\nleery of putting it on the inside of a hash join.  The CTE might\nproduce output that ends up in just a few hash buckets, degrading\nthe join to something not much better than a nested loop.  As long\nas there's enough memory to hash the known-well-distributed table,\nputting it on the inside is safer and no costlier.okRegardsPavel\n\n                        regards, tom lane", "msg_date": "Mon, 30 Mar 2020 18:49:22 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing work_mem slows down query, why?" }, { "msg_contents": "On 3/30/20 6:02 PM, Tom Lane wrote:\n> Yeah, the run time of the slow query seems to be almost entirely expended\n> in these two sort steps, while the planner doesn't think that they'll be\n> very expensive. Tweaking unrelated cost settings to work around that is\n> not going to be helpful. What you'd be better off trying to do is fix\n> the slow sorting. Is rhnpackagecapability.name some peculiar datatype?\n> If it's just relatively short text strings, as one would guess from the\n> column name, then what you must be looking at is really slow locale-based\n> sorting. What's the database's LC_COLLATE setting? Can you get away\n> with switching it to C?\n\nLC_COLLATE is en_US.UTF-8, and I cannot really change that for the whole database. I could, in principle, use the \"C\" collation for this particular column, I tried that and it helps (time goes down from ~13s to ~500ms).\n\nNevertheless, adding an explicit new index on the column (CREATE INDEX rhn_pkg_cap_name ON rhnPackageCapability (name)) helps even more, with the query time going down to ~60ms, no matter work_mem.\n\nSo ultimately I think I am going to remove the custom cpu_tuple_cost parameter and add the index, unless you have different suggestions.\n\nThank you very much so far!\n\nRegards,\n--\nSilvio Moioli\nSUSE Manager Development Team\n\n\n", "msg_date": "Fri, 3 Apr 2020 23:46:53 +0200", "msg_from": "Silvio Moioli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Increasing work_mem slows down query, why?" } ]
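A minimal SQL sketch of the two remedies discussed in this thread: the index Silvio actually added, and the per-column "C" collation he also tested. The varchar(1024) type used below is a placeholder assumption, not the column's real definition in the SUSE Manager schema.

    -- Remedy applied in the thread: an index on the sort/join key, which lets the
    -- planner avoid the two large locale-aware quicksorts on rhnpackagecapability.name.
    CREATE INDEX rhn_pkg_cap_name ON rhnPackageCapability (name);

    -- Alternative tested in the thread: collate just this column with the fast "C" locale.
    -- (varchar(1024) is an assumed placeholder; keep whatever type the column already has.)
    ALTER TABLE rhnPackageCapability
        ALTER COLUMN name TYPE varchar(1024) COLLATE "C";

    -- Refresh statistics so new plans are costed against the changed index/collation.
    ANALYZE rhnPackageCapability;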
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16334\nLogged by: Tejaswini GC\nEmail address: [email protected]\nPostgreSQL version: 10.10\nOperating system: Centos 7\nDescription: \n\nHello Team,\r\n\r\nWe have upgraded our database into version 10.10.\r\nAfter upgrading we could see that the system performance is bad , and one of\nthe applications linked to it via web service is not working.\r\n\r\nDuring this upgrade we have not done any code changes either on the\napplication side or on our ERP side.\r\n\r\nWe are trying to debug everything from application perse, but till now we do\nnot have any lead.\r\n\r\nCan you tell us are there any measures that we need to take after upgrade.\n\r\nWhat is the ideal setup or paramters that we should have when we upgrade to\nv 10.10\r\n\r\nThanks", "msg_date": "Thu, 02 Apr 2020 03:52:25 +0000", "msg_from": "PG Bug reporting form <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and\n system performance is not so good" }, { "msg_contents": "Hi,\n\nOn Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference: 16334\n> Logged by: Tejaswini GC\n> Email address: [email protected]\n> PostgreSQL version: 10.10\n> Operating system: Centos 7\n> Description: \n\n\nFirst of all, this is not a bug. You should have instead started a discussion\non pgsql-general or pgsql-performance. I'm redirecting the discussion on\n-performance.\n\n\n> We have upgraded our database into version 10.10.\n\n\nHow did you upgrade?\n\n\n> After upgrading we could see that the system performance is bad , and one of\n> the applications linked to it via web service is not working.\n\n\nDo you have any errors in the postgres logs?\n\n\n> During this upgrade we have not done any code changes either on the\n> application side or on our ERP side.\n> \n> We are trying to debug everything from application perse, but till now we do\n> not have any lead.\n> \n> Can you tell us are there any measures that we need to take after upgrade.\n\n\nIt depends on how you did the upgrade. 
If you used pg_upgrade, did you run the\ngenerated script as documented in step 13 at\nhttps://www.postgresql.org/docs/current/pgupgrade.html?\n\nOtherwise, at least a database-wide VACUUM ANALYZE on every database is the\nbare minimum to run after an upgrade.\n\n\n", "msg_date": "Thu, 2 Apr 2020 10:46:58 +0200", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10\n and system performance is not so good" }, { "msg_contents": "Hello Julien,\n\nThanks for your response.\n\nI'm in touch with our hosting team to get more information for your queries.\nAs of now I can share these details.\nWe did analyze on the DB, but not the vacuum,\nWe are using AWS RDS.\n\nPlease help me with these queries as well.\n\n1) Will the process change if we use AWS RDS.\n2) What kind of vacuum should be done on the DB, as there are many types of\nvacuum.\n\nAwaiting your reply!\nThanks!\n\n*Regards*\n*Tejaswini G C*\n*IT Retail Team*\n\n\nOn Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]> wrote:\n\n> Hi,\n>\n> On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> > The following bug has been logged on the website:\n> >\n> > Bug reference: 16334\n> > Logged by: Tejaswini GC\n> > Email address: [email protected]\n> > PostgreSQL version: 10.10\n> > Operating system: Centos 7\n> > Description:\n>\n>\n> First of all, this is not a bug. You should have instead started a\n> discussion\n> on pgsql-general or pgsql-performance. I'm redirecting the discussion on\n> -performance.\n>\n>\n> > We have upgraded our database into version 10.10.\n>\n>\n> How did you upgrade?\n>\n>\n> > After upgrading we could see that the system performance is bad , and\n> one of\n> > the applications linked to it via web service is not working.\n>\n>\n> Do you have any errors in the postgres logs?\n>\n>\n> > During this upgrade we have not done any code changes either on the\n> > application side or on our ERP side.\n> >\n> > We are trying to debug everything from application perse, but till now\n> we do\n> > not have any lead.\n> >\n> > Can you tell us are there any measures that we need to take after\n> upgrade.\n>\n>\n> It depends on how you did the upgrade. If you used pg_upgrade, did you\n> run the\n> generated script as documented in step 13 at\n> https://www.postgresql.org/docs/current/pgupgrade.html?\n>\n> Otherwise, at least a database-wide VACUUM ANALYZE on every database is the\n> bare minimum to run after an upgrade.\n>\n\nHello Julien,Thanks for your response.I'm in touch with our hosting team to get more information for your queries.As of now I can share these details.We did analyze on the DB, but not the vacuum,We are using AWS RDS.Please help me with these queries as well.1) Will the process change if we use AWS RDS.2) What kind of vacuum should be done on the DB, as there are many types of vacuum.Awaiting your reply!Thanks!RegardsTejaswini G CIT Retail TeamOn Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]> wrote:Hi,\n\nOn Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> The following bug has been logged on the website:\n> \n> Bug reference:      16334\n> Logged by:          Tejaswini GC\n> Email address:      [email protected]\n> PostgreSQL version: 10.10\n> Operating system:   Centos 7\n> Description:        \n\n\nFirst of all, this is not a bug.  You should have instead started a discussion\non pgsql-general or pgsql-performance.  
I'm redirecting the discussion on\n-performance.\n\n\n> We have upgraded our database into version 10.10.\n\n\nHow did you upgrade?\n\n\n> After upgrading we could see that the system performance is bad , and one of\n> the applications linked to it via web service is not working.\n\n\nDo you have any errors in the postgres logs?\n\n\n> During this upgrade we have not done any code changes either on the\n> application side or on our ERP side.\n> \n> We are trying to debug everything from application perse, but till now we do\n> not have any lead.\n> \n> Can you tell us are there any measures that we need to take after upgrade.\n\n\nIt depends on how you did the upgrade.  If you used pg_upgrade, did you run the\ngenerated script as documented in step 13 at\nhttps://www.postgresql.org/docs/current/pgupgrade.html?\n\nOtherwise, at least a database-wide VACUUM ANALYZE on every database is the\nbare minimum to run after an upgrade.", "msg_date": "Thu, 2 Apr 2020 15:52:56 +0530", "msg_from": "Tejaswini GC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and\n system performance is not so good" }, { "msg_contents": "Please don't top-post, it makes it hard to follow the discussion.\n\nOn Thu, Apr 02, 2020 at 03:52:56PM +0530, Tejaswini GC wrote:\n> \n> I'm in touch with our hosting team to get more information for your queries.\n> As of now I can share these details.\n> We did analyze on the DB, but not the vacuum,\n> We are using AWS RDS.\n> \n> Please help me with these queries as well.\n> \n> 1) Will the process change if we use AWS RDS.\n\n\nNo idea, that's a question you should ask them.\n\n\n> 2) What kind of vacuum should be done on the DB, as there are many types of\n> vacuum.\n\n\nA regular vacuum, as in:\n\nVACUUM ANALYZE\n\nin all your databases.\n\n\n> On Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> > > The following bug has been logged on the website:\n> > >\n> > > Bug reference: 16334\n> > > Logged by: Tejaswini GC\n> > > Email address: [email protected]\n> > > PostgreSQL version: 10.10\n> > > Operating system: Centos 7\n> > > Description:\n> >\n> >\n> > First of all, this is not a bug. You should have instead started a\n> > discussion\n> > on pgsql-general or pgsql-performance. I'm redirecting the discussion on\n> > -performance.\n> >\n> >\n> > > We have upgraded our database into version 10.10.\n> >\n> >\n> > How did you upgrade?\n> >\n> >\n> > > After upgrading we could see that the system performance is bad , and\n> > one of\n> > > the applications linked to it via web service is not working.\n> >\n> >\n> > Do you have any errors in the postgres logs?\n> >\n> >\n> > > During this upgrade we have not done any code changes either on the\n> > > application side or on our ERP side.\n> > >\n> > > We are trying to debug everything from application perse, but till now\n> > we do\n> > > not have any lead.\n> > >\n> > > Can you tell us are there any measures that we need to take after\n> > upgrade.\n> >\n> >\n> > It depends on how you did the upgrade. 
If you used pg_upgrade, did you\n> > run the\n> > generated script as documented in step 13 at\n> > https://www.postgresql.org/docs/current/pgupgrade.html?\n> >\n> > Otherwise, at least a database-wide VACUUM ANALYZE on every database is the\n> > bare minimum to run after an upgrade.\n> >\n\n\n", "msg_date": "Thu, 2 Apr 2020 13:04:36 +0200", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10\n and system performance is not so good" }, { "msg_contents": "Hello Julien,\n\nThanks for your inputs,\nI shall get back to you If some information is needed.\n\n\n*Regards*\n*Tejaswini G C*\n*IT Retail Team*\n\n\nOn Thu, Apr 2, 2020 at 4:35 PM Julien Rouhaud <[email protected]> wrote:\n\n> Please don't top-post, it makes it hard to follow the discussion.\n>\n> On Thu, Apr 02, 2020 at 03:52:56PM +0530, Tejaswini GC wrote:\n> >\n> > I'm in touch with our hosting team to get more information for your\n> queries.\n> > As of now I can share these details.\n> > We did analyze on the DB, but not the vacuum,\n> > We are using AWS RDS.\n> >\n> > Please help me with these queries as well.\n> >\n> > 1) Will the process change if we use AWS RDS.\n>\n>\n> No idea, that's a question you should ask them.\n>\n>\n> > 2) What kind of vacuum should be done on the DB, as there are many types\n> of\n> > vacuum.\n>\n>\n> A regular vacuum, as in:\n>\n> VACUUM ANALYZE\n>\n> in all your databases.\n>\n>\n> > On Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]>\n> wrote:\n> >\n> > > Hi,\n> > >\n> > > On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> > > > The following bug has been logged on the website:\n> > > >\n> > > > Bug reference: 16334\n> > > > Logged by: Tejaswini GC\n> > > > Email address: [email protected]\n> > > > PostgreSQL version: 10.10\n> > > > Operating system: Centos 7\n> > > > Description:\n> > >\n> > >\n> > > First of all, this is not a bug. You should have instead started a\n> > > discussion\n> > > on pgsql-general or pgsql-performance. I'm redirecting the discussion\n> on\n> > > -performance.\n> > >\n> > >\n> > > > We have upgraded our database into version 10.10.\n> > >\n> > >\n> > > How did you upgrade?\n> > >\n> > >\n> > > > After upgrading we could see that the system performance is bad , and\n> > > one of\n> > > > the applications linked to it via web service is not working.\n> > >\n> > >\n> > > Do you have any errors in the postgres logs?\n> > >\n> > >\n> > > > During this upgrade we have not done any code changes either on the\n> > > > application side or on our ERP side.\n> > > >\n> > > > We are trying to debug everything from application perse, but till\n> now\n> > > we do\n> > > > not have any lead.\n> > > >\n> > > > Can you tell us are there any measures that we need to take after\n> > > upgrade.\n> > >\n> > >\n> > > It depends on how you did the upgrade. 
If you used pg_upgrade, did you\n> > > run the\n> > > generated script as documented in step 13 at\n> > > https://www.postgresql.org/docs/current/pgupgrade.html?\n> > >\n> > > Otherwise, at least a database-wide VACUUM ANALYZE on every database\n> is the\n> > > bare minimum to run after an upgrade.\n> > >\n>\n\nHello Julien,Thanks for your inputs,I shall get back to you If some information is needed.RegardsTejaswini G CIT Retail TeamOn Thu, Apr 2, 2020 at 4:35 PM Julien Rouhaud <[email protected]> wrote:Please don't top-post, it makes it hard to follow the discussion.\n\nOn Thu, Apr 02, 2020 at 03:52:56PM +0530, Tejaswini GC wrote:\n> \n> I'm in touch with our hosting team to get more information for your queries.\n> As of now I can share these details.\n> We did analyze on the DB, but not the vacuum,\n> We are using AWS RDS.\n> \n> Please help me with these queries as well.\n> \n> 1) Will the process change if we use AWS RDS.\n\n\nNo idea, that's a question you should ask them.\n\n\n> 2) What kind of vacuum should be done on the DB, as there are many types of\n> vacuum.\n\n\nA regular vacuum, as in:\n\nVACUUM ANALYZE\n\nin all your databases.\n\n\n> On Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> > > The following bug has been logged on the website:\n> > >\n> > > Bug reference:      16334\n> > > Logged by:          Tejaswini GC\n> > > Email address:      [email protected]\n> > > PostgreSQL version: 10.10\n> > > Operating system:   Centos 7\n> > > Description:\n> >\n> >\n> > First of all, this is not a bug.  You should have instead started a\n> > discussion\n> > on pgsql-general or pgsql-performance.  I'm redirecting the discussion on\n> > -performance.\n> >\n> >\n> > > We have upgraded our database into version 10.10.\n> >\n> >\n> > How did you upgrade?\n> >\n> >\n> > > After upgrading we could see that the system performance is bad , and\n> > one of\n> > > the applications linked to it via web service is not working.\n> >\n> >\n> > Do you have any errors in the postgres logs?\n> >\n> >\n> > > During this upgrade we have not done any code changes either on the\n> > > application side or on our ERP side.\n> > >\n> > > We are trying to debug everything from application perse, but till now\n> > we do\n> > > not have any lead.\n> > >\n> > > Can you tell us are there any measures that we need to take after\n> > upgrade.\n> >\n> >\n> > It depends on how you did the upgrade.  
If you used pg_upgrade, did you\n> > run the\n> > generated script as documented in step 13 at\n> > https://www.postgresql.org/docs/current/pgupgrade.html?\n> >\n> > Otherwise, at least a database-wide VACUUM ANALYZE on every database is the\n> > bare minimum to run after an upgrade.\n> >", "msg_date": "Thu, 2 Apr 2020 16:39:41 +0530", "msg_from": "Tejaswini GC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and\n system performance is not so good" }, { "msg_contents": "Hello Julien,\n\nThe procedure for doing the upgrade is different for AWS.\n\nAfter the PG upgrade we can see many locks in our system we believe this is\nthe main reason for the issue we are facing,\nI can see these locks when using pg_stat_activity.\n\nAlong with that few queries which were executing within millisecond are\ntaking more than few hours.\n\nKindly check and let us know how to fix this.\n\nAppreciate the support.\n\n*Regards*\n*Tejaswini G C*\n*IT Retail Team*\n\n\nOn Thu, Apr 2, 2020 at 4:39 PM Tejaswini GC <[email protected]>\nwrote:\n\n> Hello Julien,\n>\n> Thanks for your inputs,\n> I shall get back to you If some information is needed.\n>\n>\n> *Regards*\n> *Tejaswini G C*\n> *IT Retail Team*\n>\n>\n> On Thu, Apr 2, 2020 at 4:35 PM Julien Rouhaud <[email protected]> wrote:\n>\n>> Please don't top-post, it makes it hard to follow the discussion.\n>>\n>> On Thu, Apr 02, 2020 at 03:52:56PM +0530, Tejaswini GC wrote:\n>> >\n>> > I'm in touch with our hosting team to get more information for your\n>> queries.\n>> > As of now I can share these details.\n>> > We did analyze on the DB, but not the vacuum,\n>> > We are using AWS RDS.\n>> >\n>> > Please help me with these queries as well.\n>> >\n>> > 1) Will the process change if we use AWS RDS.\n>>\n>>\n>> No idea, that's a question you should ask them.\n>>\n>>\n>> > 2) What kind of vacuum should be done on the DB, as there are many\n>> types of\n>> > vacuum.\n>>\n>>\n>> A regular vacuum, as in:\n>>\n>> VACUUM ANALYZE\n>>\n>> in all your databases.\n>>\n>>\n>> > On Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]>\n>> wrote:\n>> >\n>> > > Hi,\n>> > >\n>> > > On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n>> > > > The following bug has been logged on the website:\n>> > > >\n>> > > > Bug reference: 16334\n>> > > > Logged by: Tejaswini GC\n>> > > > Email address: [email protected]\n>> > > > PostgreSQL version: 10.10\n>> > > > Operating system: Centos 7\n>> > > > Description:\n>> > >\n>> > >\n>> > > First of all, this is not a bug. You should have instead started a\n>> > > discussion\n>> > > on pgsql-general or pgsql-performance. 
I'm redirecting the\n>> discussion on\n>> > > -performance.\n>> > >\n>> > >\n>> > > > We have upgraded our database into version 10.10.\n>> > >\n>> > >\n>> > > How did you upgrade?\n>> > >\n>> > >\n>> > > > After upgrading we could see that the system performance is bad ,\n>> and\n>> > > one of\n>> > > > the applications linked to it via web service is not working.\n>> > >\n>> > >\n>> > > Do you have any errors in the postgres logs?\n>> > >\n>> > >\n>> > > > During this upgrade we have not done any code changes either on the\n>> > > > application side or on our ERP side.\n>> > > >\n>> > > > We are trying to debug everything from application perse, but till\n>> now\n>> > > we do\n>> > > > not have any lead.\n>> > > >\n>> > > > Can you tell us are there any measures that we need to take after\n>> > > upgrade.\n>> > >\n>> > >\n>> > > It depends on how you did the upgrade. If you used pg_upgrade, did\n>> you\n>> > > run the\n>> > > generated script as documented in step 13 at\n>> > > https://www.postgresql.org/docs/current/pgupgrade.html?\n>> > >\n>> > > Otherwise, at least a database-wide VACUUM ANALYZE on every database\n>> is the\n>> > > bare minimum to run after an upgrade.\n>> > >\n>>\n>\n\nHello Julien,The procedure for doing the upgrade is different for AWS.After the PG upgrade we can see many locks in our system we believe this is the main reason for the issue we are facing, I can see these locks when using pg_stat_activity.Along with that few queries which were executing within millisecond are taking more than few hours.Kindly check and let us know how to fix this.Appreciate the support.RegardsTejaswini G CIT Retail TeamOn Thu, Apr 2, 2020 at 4:39 PM Tejaswini GC <[email protected]> wrote:Hello Julien,Thanks for your inputs,I shall get back to you If some information is needed.RegardsTejaswini G CIT Retail TeamOn Thu, Apr 2, 2020 at 4:35 PM Julien Rouhaud <[email protected]> wrote:Please don't top-post, it makes it hard to follow the discussion.\n\nOn Thu, Apr 02, 2020 at 03:52:56PM +0530, Tejaswini GC wrote:\n> \n> I'm in touch with our hosting team to get more information for your queries.\n> As of now I can share these details.\n> We did analyze on the DB, but not the vacuum,\n> We are using AWS RDS.\n> \n> Please help me with these queries as well.\n> \n> 1) Will the process change if we use AWS RDS.\n\n\nNo idea, that's a question you should ask them.\n\n\n> 2) What kind of vacuum should be done on the DB, as there are many types of\n> vacuum.\n\n\nA regular vacuum, as in:\n\nVACUUM ANALYZE\n\nin all your databases.\n\n\n> On Thu, Apr 2, 2020 at 2:17 PM Julien Rouhaud <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On Thu, Apr 02, 2020 at 03:52:25AM +0000, PG Bug reporting form wrote:\n> > > The following bug has been logged on the website:\n> > >\n> > > Bug reference:      16334\n> > > Logged by:          Tejaswini GC\n> > > Email address:      [email protected]\n> > > PostgreSQL version: 10.10\n> > > Operating system:   Centos 7\n> > > Description:\n> >\n> >\n> > First of all, this is not a bug.  You should have instead started a\n> > discussion\n> > on pgsql-general or pgsql-performance.  
I'm redirecting the discussion on\n> > -performance.\n> >\n> >\n> > > We have upgraded our database into version 10.10.\n> >\n> >\n> > How did you upgrade?\n> >\n> >\n> > > After upgrading we could see that the system performance is bad , and\n> > one of\n> > > the applications linked to it via web service is not working.\n> >\n> >\n> > Do you have any errors in the postgres logs?\n> >\n> >\n> > > During this upgrade we have not done any code changes either on the\n> > > application side or on our ERP side.\n> > >\n> > > We are trying to debug everything from application perse, but till now\n> > we do\n> > > not have any lead.\n> > >\n> > > Can you tell us are there any measures that we need to take after\n> > upgrade.\n> >\n> >\n> > It depends on how you did the upgrade.  If you used pg_upgrade, did you\n> > run the\n> > generated script as documented in step 13 at\n> > https://www.postgresql.org/docs/current/pgupgrade.html?\n> >\n> > Otherwise, at least a database-wide VACUUM ANALYZE on every database is the\n> > bare minimum to run after an upgrade.\n> >", "msg_date": "Sat, 4 Apr 2020 11:57:02 +0530", "msg_from": "Tejaswini GC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10 and\n system performance is not so good" }, { "msg_contents": "Once again, please don't top post.\n\nOn Sat, Apr 04, 2020 at 11:57:02AM +0530, Tejaswini GC wrote:\n> Hello Julien,\n> \n> The procedure for doing the upgrade is different for AWS.\n> \n\n\nAnd is it possible to know what the procedure was?\n\n\n> \n> After the PG upgrade we can see many locks in our system we believe this is\n> the main reason for the issue we are facing,\n> I can see these locks when using pg_stat_activity.\n> \n\n\nAny more details? Are you talking of heavyweight locks or lightweight locks?\nSince version 9.6 both are visible in pg_stat_activity.\n\n\n> Along with that few queries which were executing within millisecond are\n> taking more than few hours.\n> \n\n\nPlease follow https://wiki.postgresql.org/wiki/Slow_Query_Questions to provide\nmore details.\n\n\n", "msg_date": "Sat, 4 Apr 2020 09:06:38 +0200", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #16334: We recently upgraded PG version from 9.5 to 10.10\n and system performance is not so good" } ]
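The bare-minimum post-upgrade maintenance described above, sketched as statements for the upgraded 10.x cluster. Host and user names are placeholders; on RDS the same commands can be run from any ordinary client session with sufficient privileges.

    -- Run in every database of the upgraded cluster ("VACUUM ANALYZE on every database"):
    VACUUM (ANALYZE, VERBOSE);

    -- Equivalent for all databases at once, from a shell with the client tools installed
    -- (host and user below are placeholders):
    --   vacuumdb --all --analyze -h mydb.example.rds.amazonaws.com -U postgres
    -- pg_upgrade's generated script instead collects planner statistics in stages:
    --   vacuumdb --all --analyze-in-stages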
[ { "msg_contents": "Hi Team,\n\nGood Evening,\n\nCould someone please help us share the procedure to troubleshoot the locks\non proc issues.\n\nEnvironment:\n============\n 1 pgpool server (Master Pool Node) using Straming replication with load\nbalancing\n 4 DB nodes (1Master and 3 Slaves).\n\n Versions:\n 1. postgres: 9.5.15\n 2. pgpool : 3.9\n 3. repmgr: 4.1\n\nWe are continuously facing locking issues for below procedures , due to\nthis the rest of the call for these procs going into waiting state.Which\ncause the DB got hung. Below are the procs running with DB_User2 from the\napplication.\n\n1. select * from Schema1.duct_remove_validation($1,$2,$3,$4) ==> This proc\nit self calling Schema1.cable_remove_validation($1,$2).\n2. select * from Schema1.cable_remove_validation($1,$2) ==> This is also\ncalling from the applications\n\nif we ran explain analyze, its taking msec only, but if we run\nsimultaneouly from application getting locked and waiting state.\n\nWe have ran below query for showing blocking queries and attached output in\nBlocking_Queries_with_PID.csv file:\n\nSELECT\npl.pid as blocked_pid\n,psa.usename as blocked_user\n,pl2.pid as blocking_pid\n,psa2.usename as blocking_user\n,psa.query as blocked_statement\nFROM pg_catalog.pg_locks pl\nJOIN pg_catalog.pg_stat_activity psa\nON pl.pid = psa.pid\nJOIN pg_catalog.pg_locks pl2\nJOIN pg_catalog.pg_stat_activity psa2\nON pl2.pid = psa2.pid\nON pl.transactionid = pl2.transactionid\nAND pl.pid != pl2.pid\nWHERE NOT pl.granted;\n\nOutput: attached output in Blocking_Queries_with_PID.csv file\n\n\nThe waiting connections are keep on accumulating and cause DB hung.\nI have attached pg_stat_activity excel file with the user along with the\nproc queries which cause waiting state.\n\nFinds:\n\nThere are total 18 connections for DB_User2 which are running only above 2\nprocs, Out of that only one connection with 18732 is running proc (select *\nfrom Schema1.duct_remove_validation($1,$2,$3,$4))from long time and reset\nof all 17 connections are in waiting state from the long time.\n\nThere are many exclusive locks on table for 18732 and other process as\nwell. I have attached pg_locks reference excel(Lock_Reference_For_PROC)\nwith highlighted pid 18732.\n\nCould someone please suggest the procedure to troubleshoot this issue.\nPlease find the attachment for reference.\n\nThanks,\nPostgann.", "msg_date": "Fri, 3 Apr 2020 01:07:43 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Could someone please help us share the procedure to troubleshoot the\n locks on proc issues." }, { "msg_contents": "On 4/2/20 12:37 PM, postgann2020 s wrote:\n> Hi Team,\n> \n> Good Evening,\n> \n> Could someone please help us share the procedure to troubleshoot the \n> locks on proc issues.\n> \n> Environment:\n> ============\n>  1 pgpool server (Master Pool Node) using Straming replication with \n> load balancing\n>  4 DB nodes (1Master and 3 Slaves).\n> \n>  Versions:\n>  1. postgres: 9.5.15\n>  2. pgpool   : 3.9\n>  3. repmgr:  4.1\n> \n> We are continuously facing locking issues for below procedures , due to \n> this the  rest of the call for these procs going into waiting \n> state.Which cause the DB got hung. Below are the procs  running with \n> DB_User2 from the application.\n> \n> 1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This \n> proc it self calling Schema1.cable_remove_validation($1,$2).\n> 2. 
select * from Schema1.cable_remove_validation($1,$2)  ==> This is \n> also calling from the applications\n\nTo figure out below we need to see what is happening in above.\n\n> \n> if we ran explain analyze, its taking msec only, but if we run \n> simultaneouly from application getting locked and waiting state.\n> \n> We have ran below query for showing blocking queries and attached output \n> in Blocking_Queries_with_PID.csv file:\n> \n> SELECT\n> pl.pid as blocked_pid\n> ,psa.usename as blocked_user\n> ,pl2.pid as blocking_pid\n> ,psa2.usename as blocking_user\n> ,psa.query as blocked_statement\n> FROM pg_catalog.pg_locks pl\n> JOIN pg_catalog.pg_stat_activity psa\n> ON pl.pid = psa.pid\n> JOIN pg_catalog.pg_locks pl2\n> JOIN pg_catalog.pg_stat_activity psa2\n> ON pl2.pid = psa2.pid\n> ON pl.transactionid = pl2.transactionid\n> AND pl.pid != pl2.pid\n> WHERE NOT pl.granted;\n> \n> Output: attached output in Blocking_Queries_with_PID.csv file\n> \n> \n> The waiting connections are keep on accumulating and cause DB hung.\n> I have attached pg_stat_activity excel file with the user along with the \n> proc queries which cause waiting state.\n> \n> Finds:\n> \n> There are total 18 connections for DB_User2 which are running only above \n> 2 procs, Out of that only one connection with 18732 is running proc \n> (select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long \n> time  and reset of all 17 connections are in waiting state from the long \n> time.\n> \n> There are many exclusive locks on table for 18732 and other process as \n> well. I have attached pg_locks reference excel(Lock_Reference_For_PROC) \n> with highlighted pid 18732.\n> \n> Could someone please suggest the procedure to troubleshoot this issue.\n> Please find the attachment for reference.\n> \n> Thanks,\n> Postgann.\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Thu, 2 Apr 2020 16:00:12 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Could someone please help us share the procedure to troubleshoot\n the locks on proc issues." }, { "msg_contents": "Thanks Adrian, will share the details.\n\nOn Fri, Apr 3, 2020 at 4:30 AM Adrian Klaver <[email protected]>\nwrote:\n\n> On 4/2/20 12:37 PM, postgann2020 s wrote:\n> > Hi Team,\n> >\n> > Good Evening,\n> >\n> > Could someone please help us share the procedure to troubleshoot the\n> > locks on proc issues.\n> >\n> > Environment:\n> > ============\n> > 1 pgpool server (Master Pool Node) using Straming replication with\n> > load balancing\n> > 4 DB nodes (1Master and 3 Slaves).\n> >\n> > Versions:\n> > 1. postgres: 9.5.15\n> > 2. pgpool : 3.9\n> > 3. repmgr: 4.1\n> >\n> > We are continuously facing locking issues for below procedures , due to\n> > this the rest of the call for these procs going into waiting\n> > state.Which cause the DB got hung. Below are the procs running with\n> > DB_User2 from the application.\n> >\n> > 1. select * from Schema1.duct_remove_validation($1,$2,$3,$4) ==> This\n> > proc it self calling Schema1.cable_remove_validation($1,$2).\n> > 2. 
select * from Schema1.cable_remove_validation($1,$2) ==> This is\n> > also calling from the applications\n>\n> To figure out below we need to see what is happening in above.\n>\n> >\n> > if we ran explain analyze, its taking msec only, but if we run\n> > simultaneouly from application getting locked and waiting state.\n> >\n> > We have ran below query for showing blocking queries and attached output\n> > in Blocking_Queries_with_PID.csv file:\n> >\n> > SELECT\n> > pl.pid as blocked_pid\n> > ,psa.usename as blocked_user\n> > ,pl2.pid as blocking_pid\n> > ,psa2.usename as blocking_user\n> > ,psa.query as blocked_statement\n> > FROM pg_catalog.pg_locks pl\n> > JOIN pg_catalog.pg_stat_activity psa\n> > ON pl.pid = psa.pid\n> > JOIN pg_catalog.pg_locks pl2\n> > JOIN pg_catalog.pg_stat_activity psa2\n> > ON pl2.pid = psa2.pid\n> > ON pl.transactionid = pl2.transactionid\n> > AND pl.pid != pl2.pid\n> > WHERE NOT pl.granted;\n> >\n> > Output: attached output in Blocking_Queries_with_PID.csv file\n> >\n> >\n> > The waiting connections are keep on accumulating and cause DB hung.\n> > I have attached pg_stat_activity excel file with the user along with the\n> > proc queries which cause waiting state.\n> >\n> > Finds:\n> >\n> > There are total 18 connections for DB_User2 which are running only above\n> > 2 procs, Out of that only one connection with 18732 is running proc\n> > (select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long\n> > time and reset of all 17 connections are in waiting state from the long\n> > time.\n> >\n> > There are many exclusive locks on table for 18732 and other process as\n> > well. I have attached pg_locks reference excel(Lock_Reference_For_PROC)\n> > with highlighted pid 18732.\n> >\n> > Could someone please suggest the procedure to troubleshoot this issue.\n> > Please find the attachment for reference.\n> >\n> > Thanks,\n> > Postgann.\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n\nThanks Adrian, will share the details.On Fri, Apr 3, 2020 at 4:30 AM Adrian Klaver <[email protected]> wrote:On 4/2/20 12:37 PM, postgann2020 s wrote:\n> Hi Team,\n> \n> Good Evening,\n> \n> Could someone please help us share the procedure to troubleshoot the \n> locks on proc issues.\n> \n> Environment:\n> ============\n>   1 pgpool server (Master Pool Node) using Straming replication with \n> load balancing\n>   4 DB nodes (1Master and 3 Slaves).\n> \n>   Versions:\n>   1. postgres: 9.5.15\n>   2. pgpool   : 3.9\n>   3. repmgr:  4.1\n> \n> We are continuously facing locking issues for below procedures , due to \n> this the  rest of the call for these procs going into waiting \n> state.Which cause the DB got hung. Below are the procs  running with \n> DB_User2 from the application.\n> \n> 1. select * from Schema1.duct_remove_validation($1,$2,$3,$4)  ==> This \n> proc it self calling Schema1.cable_remove_validation($1,$2).\n> 2. 
select * from Schema1.cable_remove_validation($1,$2)  ==> This is \n> also calling from the applications\n\nTo figure out below we need to see what is happening in above.\n\n> \n> if we ran explain analyze, its taking msec only, but if we run \n> simultaneouly from application getting locked and waiting state.\n> \n> We have ran below query for showing blocking queries and attached output \n> in Blocking_Queries_with_PID.csv file:\n> \n> SELECT\n> pl.pid as blocked_pid\n> ,psa.usename as blocked_user\n> ,pl2.pid as blocking_pid\n> ,psa2.usename as blocking_user\n> ,psa.query as blocked_statement\n> FROM pg_catalog.pg_locks pl\n> JOIN pg_catalog.pg_stat_activity psa\n> ON pl.pid = psa.pid\n> JOIN pg_catalog.pg_locks pl2\n> JOIN pg_catalog.pg_stat_activity psa2\n> ON pl2.pid = psa2.pid\n> ON pl.transactionid = pl2.transactionid\n> AND pl.pid != pl2.pid\n> WHERE NOT pl.granted;\n> \n> Output: attached output in Blocking_Queries_with_PID.csv file\n> \n> \n> The waiting connections are keep on accumulating and cause DB hung.\n> I have attached pg_stat_activity excel file with the user along with the \n> proc queries which cause waiting state.\n> \n> Finds:\n> \n> There are total 18 connections for DB_User2 which are running only above \n> 2 procs, Out of that only one connection with 18732 is running proc \n> (select * from Schema1.duct_remove_validation($1,$2,$3,$4))from long \n> time  and reset of all 17 connections are in waiting state from the long \n> time.\n> \n> There are many exclusive locks on table for 18732 and other process as \n> well. I have attached pg_locks reference excel(Lock_Reference_For_PROC) \n> with highlighted pid 18732.\n> \n> Could someone please suggest the procedure to troubleshoot this issue.\n> Please find the attachment for reference.\n> \n> Thanks,\n> Postgann.\n\n\n-- \nAdrian Klaver\[email protected]", "msg_date": "Fri, 3 Apr 2020 10:41:52 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Could someone please help us share the procedure to troubleshoot\n the locks on proc issues." } ]
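Since this cluster is on 9.5 (pg_blocking_pids() only exists from 9.6), a common way to widen the blocking-lock view beyond the transactionid-only join shown in the thread is the pg_locks self-join from the PostgreSQL wiki, which matches every lock identity column. A generic sketch, not tied to this particular schema:

    SELECT blocked_locks.pid         AS blocked_pid,
           blocked_activity.usename  AS blocked_user,
           blocking_locks.pid        AS blocking_pid,
           blocking_activity.usename AS blocking_user,
           blocked_activity.query    AS blocked_statement,
           blocking_activity.query   AS statement_in_blocking_process
      FROM pg_catalog.pg_locks blocked_locks
      JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
      JOIN pg_catalog.pg_locks blocking_locks
           ON blocking_locks.locktype = blocked_locks.locktype
          AND blocking_locks.database      IS NOT DISTINCT FROM blocked_locks.database
          AND blocking_locks.relation      IS NOT DISTINCT FROM blocked_locks.relation
          AND blocking_locks.page          IS NOT DISTINCT FROM blocked_locks.page
          AND blocking_locks.tuple         IS NOT DISTINCT FROM blocked_locks.tuple
          AND blocking_locks.virtualxid    IS NOT DISTINCT FROM blocked_locks.virtualxid
          AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
          AND blocking_locks.classid       IS NOT DISTINCT FROM blocked_locks.classid
          AND blocking_locks.objid         IS NOT DISTINCT FROM blocked_locks.objid
          AND blocking_locks.objsubid      IS NOT DISTINCT FROM blocked_locks.objsubid
          AND blocking_locks.pid != blocked_locks.pid
      JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
     WHERE NOT blocked_locks.granted;

    -- On 9.5, sessions stuck on locks also show pg_stat_activity.waiting = true
    -- (the wait_event columns only exist from 9.6 onwards).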
[ { "msg_contents": "Dear I have a question to ask you\nI am having a slow problem with a query and I am seeing with the explain\nthat the current cost and time differ by 4 times\nPostgres version 9.5.16 in centos 7.6\nTo try to solve this run the statistics to the table and the same problem\nremains\nIt's a very big table 2 billion tuples\nDo you have any idea what I can do to improve\nThank you very much for your time\nAny data you need I can provide\n\nI share a part of the explain\n\n Hash Right Join (cost=11114339.65..12172907.42 rows=886647 width=158)\n(actual time=1906344.617..1963668.889 rows=3362294 loops=1)\"\n\" Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\nba.att_value_num_1, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\natt_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a\n(...)\"\n\" Hash Cond: ((att_barr.env_id = ba.env_id) AND\n(att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n\" Buffers: shared hit=5814458 read=1033324 dirtied=790\"\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:03:49 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "slow query" }, { "msg_contents": "On Fri, Apr 03, 2020 at 09:03:49AM -0700, dangal wrote:\n> Dear I have a question to ask you\n> I am having a slow problem with a query and I am seeing with the explain that the current cost and time differ by 4 times\n\nThe \"cost\" is in arbitrary units, and the time is in units of milliseconds.\nThe cost is not an expected duration.\n\n> Postgres version 9.5.16 in centos 7.6\n> To try to solve this run the statistics to the table and the same problem\n> remains\n> It's a very big table 2 billion tuples\n> Do you have any idea what I can do to improve\n> Thank you very much for your time\n> Any data you need I can provide\n\nPlease check here.\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n> I share a part of the explain\n\nIt's not very useful to see a fragment of it.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 Apr 2020 11:07:47 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" }, { "msg_contents": "Justin thank you very much for your answer, as you can also see the number of\nrows differs a lot\nI attach the complete explain, do not attach it because it is large\n\n\"HashAggregate (cost=12640757.46..12713163.46 rows=385 width=720) (actual\ntime=1971962.023..1971962.155 rows=306 loops=1)\"\n\" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,\n((SubPlan 1)), ((SubPlan 2)), a2.ent_inst_att_str_value, ba.att_value_1,\ndepto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis (...)\"\n\" Group Key: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,\n(SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value, ba.att_value_1,\ndepto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis. 
(...)\"\n\" Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\n\" -> Nested Loop (cost=11114347.52..12640740.13 rows=385 width=720)\n(actual time=1906401.083..1971959.176 rows=306 loops=1)\"\n\" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\nbi.att_value_10, (SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value,\nba.att_value_1, depto2.att_value_1, loc2.att_value_1,\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value, att_b\n(...)\"\n\" Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\n\" -> Hash Join (cost=11114346.94..12228344.41 rows=1427 width=704)\n(actual time=1906372.468..1964409.907 rows=306 loops=1)\"\n\" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\nbi.att_value_10, ba.bus_ent_inst_id_auto, ba.att_value_1,\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, a (...)\"\n\" Hash Cond: (ba.att_value_num_1 =\n(bi.bus_ent_inst_name_num)::numeric)\"\n\" Buffers: shared hit=5814458 read=1033324 dirtied=790, local\nhit=2\"\n\" -> Hash Right Join (cost=11114339.65..12172907.42\nrows=886647 width=158) (actual time=1906344.617..1963668.889 rows=3362294\nloops=1)\"\n\" Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\nba.att_value_num_1, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\natt_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a\n(...)\"\n\" Hash Cond: ((att_barr.env_id = ba.env_id) AND\n(att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n\" Buffers: shared hit=5814458 read=1033324 dirtied=790\"\n\" -> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_barr (cost=0.83..1024093.06 rows=4508264\nwidth=24) (actual time=10.435..52888.091 rows=4244011 loops=1)\"\n\" Output: att_barr.att_id,\natt_barr.ent_inst_att_str_value, att_barr.env_id, att_barr.bus_ent_inst_id,\natt_barr.reg_status\"\n\" Index Cond: (att_barr.att_id = 1115)\"\n\" Heap Fetches: 120577\"\n\" Buffers: shared hit=503194 read=31197 dirtied=5\"\n\" -> Hash (cost=11101039.12..11101039.12 rows=886647\nwidth=146) (actual time=1906329.888..1906329.888 rows=3362294 loops=1)\"\n\" Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\nba.env_id, ba.att_value_num_1, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\natt_pad.ent_inst_att_str_value, att_manz.ent_inst_att (...)\"\n\" Buckets: 4194304 (originally 1048576) Batches: 1\n(originally 1) Memory Usage: 396824kB\"\n\" Buffers: shared hit=5311264 read=1002127\ndirtied=785\"\n\" -> Hash Right Join \n(cost=10328938.09..11101039.12 rows=886647 width=146) (actual\ntime=1867557.718..1904218.946 rows=3362294 loops=1)\"\n\" Output: ba.bus_ent_inst_id_auto,\nba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value,\natt_manz.ent_in (...)\"\n\" Hash Cond: ((att_apt.env_id = ba.env_id)\nAND (att_apt.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n\" Buffers: shared hit=5311264 read=1002127\ndirtied=785\"\n\" -> Index Only Scan using\nix_bus_ent_inst_attr_03 on public.bus_ent_inst_attribute att_apt \n(cost=0.83..746958.06 rows=3287982 width=24) (actual time=0.091..32788.731\nrows=3491599 loops=1)\"\n\" Output: att_apt.att_id,\natt_apt.ent_inst_att_str_value, att_apt.env_id, att_apt.bus_ent_inst_id,\natt_apt.reg_status\"\n\" Index Cond: (att_apt.att_id = 1113)\"\n\" Heap Fetches: 
88910\"\n\" Buffers: shared hit=178090 read=25341\ndirtied=5\"\n\" -> Hash (cost=10315637.55..10315637.55\nrows=886647 width=130) (actual time=1867553.445..1867553.445 rows=3362294\nloops=1)\"\n\" Output: ba.bus_ent_inst_id_auto,\nba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att_manz.\n(...)\"\n\" Buckets: 4194304 (originally 1048576) \nBatches: 1 (originally 1) Memory Usage: 376885kB\"\n\" Buffers: shared hit=5133174\nread=976786 dirtied=780\"\n\" -> Merge Left Join \n(cost=10304076.40..10315637.55 rows=886647 width=130) (actual\ntime=1862979.687..1865773.765 rows=3362294 loops=1)\"\n\" Output:\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att (...)\"\n\" Merge Cond: ((ba.env_id =\nloc2.env_id) AND (((att_loc_hecho.ent_inst_att_str_value)::integer) =\nloc2.bus_ent_inst_name_num))\"\n\" Buffers: shared hit=5133174\nread=976786 dirtied=780\"\n\" -> Sort \n(cost=10178591.32..10180807.94 rows=886647 width=141) (actual\ntime=1862965.240..1863856.321 rows=3362294 loops=1)\"\n\" Output:\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_st (...)\"\n\" Sort Key: ba.env_id,\n((att_loc_hecho.ent_inst_att_str_value)::integer)\"\n\" Sort Method: quicksort \nMemory: 544870kB\"\n\" Buffers: shared\nhit=5133062 read=976781 dirtied=780\"\n\" -> Merge Left Join \n(cost=10079438.31..10090999.47 rows=886647 width=141) (actual\ntime=1854085.484..1857592.771 rows=3362294 loops=1)\"\n\" Output:\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_ (...)\"\n\" Merge Cond:\n((ba.env_id = depto2.env_id) AND\n(((att_dir_hecho.ent_inst_att_str_value)::integer) =\ndepto2.bus_ent_inst_name_num))\"\n\" Buffers: shared\nhit=5133062 read=976781 dirtied=780\"\n\" -> Sort \n(cost=9953953.24..9956169.85 rows=886647 width=152) (actual\ntime=1854079.630..1855329.406 rows=3362294 loops=1)\"\n\" Output:\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\natt_call.ent_inst_att_str_value, att_n (...)\"\n\" Sort Key:\nba.env_id, ((att_dir_hecho.ent_inst_att_str_value)::integer)\"\n\" Sort Method:\nquicksort Memory: 544857kB\"\n\" Buffers:\nshared hit=5133055 read=976779 dirtied=780\"\n\" -> Hash\nRight Join (cost=9791232.05..9866361.38 rows=886647 width=152) (actual\ntime=1844734.652..1849217.758 rows=3362294 loops=1)\"\n\" Output:\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\natt_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\natt_call.ent_inst_att_str_value, (...)\"\n\" Hash\nCond: ((att_rut.env_id = ba.env_id) AND (att_rut.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=5133055 read=976779 dirtied=780\"\n\" -> \nIndex Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_rut (cost=0.83..72690.43 rows=319036\nwidth=24) (actual time=17.325..3078.312 rows=149644 loops=1)\"\n\" \nOutput: att_rut.att_id, att_rut.ent_inst_att_str_value, 
att_rut.env_id,\natt_rut.bus_ent_inst_id, att_rut.reg_status\"\n\" \nIndex Cond: (att_rut.att_id = 1138)\"\n\" \nHeap Fetches: 5299\"\n\" \nBuffers: shared hit=26350 read=1137\"\n\" -> \nHash (cost=9777931.51..9777931.51 rows=886647 width=136) (actual\ntime=1844713.350..1844713.350 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_ (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 329015kB\"\n\" \nBuffers: shared hit=5106705 read=975642 dirtied=780\"\n\" \n-> Hash Right Join (cost=9705206.15..9777931.51 rows=886647 width=136)\n(actual time=1837569.880..1842945.853 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_at (...)\"\n\" \nHash Cond: ((att_km.env_id = ba.env_id) AND (att_km.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=5106705 read=975642 dirtied=780\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_02 on\npublic.bus_ent_inst_attribute att_km (cost=0.70..70286.34 rows=319036\nwidth=13) (actual time=0.107..2995.494 rows=149942 l (...)\"\n\" \nOutput: att_km.att_id, att_km.ent_inst_att_num_value, att_km.env_id,\natt_km.bus_ent_inst_id, att_km.reg_status\"\n\" \nIndex Cond: (att_km.att_id = 1132)\"\n\" \nHeap Fetches: 5330\"\n\" \nBuffers: shared hit=59470 read=1171\"\n\" \n-> Hash (cost=9691905.74..9691905.74 rows=886647 width=131) (actual\ntime=1837565.949..1837565.949 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_i (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 328650kB\"\n\" \nBuffers: shared hit=5047235 read=974471 dirtied=780\"\n\" \n-> Hash Right Join (cost=7694366.79..9691905.74 rows=886647 width=131)\n(actual time=1710903.369..1834807.221 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_value, att_call (...)\"\n\" \nHash Cond: ((att_bis.env_id = ba.env_id) AND (att_bis.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=5047235 read=974471 dirtied=780\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_bis (cost=0.83..1932476.93 rows=8508077\nwidth=24) (actual time=6.488..116892 (...)\"\n\" \nOutput: att_bis.att_id, att_bis.ent_inst_att_str_value, att_bis.env_id,\natt_bis.bus_ent_inst_id, att_bis.reg_status\"\n\" \nIndex Cond: (att_bis.att_id = 1117)\"\n\" \nHeap Fetches: 228123\"\n\" \nBuffers: shared hit=218185 read=52064 dirtied=27\"\n\" \n-> Hash (cost=7681066.26..7681066.26 rows=886647 width=115) (actual\ntime=1710893.007..1710893.007 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_value, at (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 309513kB\"\n\" \nBuffers: shared hit=4829050 read=922407 dirtied=753\"\n\" \n-> Hash Right Join (cost=5969990.07..7681066.26 rows=886647 width=115)\n(actual time=1566042.427..1708291.649 
rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_str_val (...)\"\n\" \nHash Cond: ((att_call.env_id = ba.env_id) AND (att_call.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=4829050 read=922407 dirtied=753\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_call (cost=0.83..1655345.90 rows=7287794\nwidth=24) (actual time= (...)\"\n\" \nOutput: att_call.att_id, att_call.ent_inst_att_str_value, att_call.env_id,\natt_call.bus_ent_inst_id, att_call.reg_status\"\n\" \nIndex Cond: (att_call.att_id = 1119)\"\n\" \nHeap Fetches: 213801\"\n\" \nBuffers: shared hit=1852588 read=60151 dirtied=23\"\n\" \n-> Hash (cost=5956689.54..5956689.54 rows=886647 width=99) (actual\ntime=1566015.832..1566015.832 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst_att_s (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 258291kB\"\n\" \nBuffers: shared hit=2976462 read=862256 dirtied=730\"\n\" \n-> Hash Right Join (cost=4253571.63..5956689.54 rows=886647 width=99)\n(actual time=1355922.435..1563760.249 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\natt_loc_hecho.ent_inst (...)\"\n\" \nHash Cond: ((att_dir_hecho.env_id = ba.env_id) AND\n(att_dir_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=2976462 read=862256 dirtied=730\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_dir_hecho (cost=0.83..1647646.84\nrows=7253898 width= (...)\"\n\" \nOutput: att_dir_hecho.att_id, att_dir_hecho.ent_inst_att_str_value,\natt_dir_hecho.env_id, att_dir_hecho.bus_ent_inst_id, att_dir_hecho (...)\"\n\" \nIndex Cond: (att_dir_hecho.att_id = 1122)\"\n\" \nHeap Fetches: 221189\"\n\" \nBuffers: shared hit=217265 read=76872 dirtied=96\"\n\" \n-> Hash (cost=4240271.10..4240271.10 rows=886647 width=83) (actual\ntime=1355910.157..1355910.157 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.ent_inst\n(...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 224784kB\"\n\" \nBuffers: shared hit=2759197 read=785384 dirtied=634\"\n\" \n-> Hash Right Join (cost=2672428.25..4240271.10 rows=886647 width=83)\n(actual time=1097647.410..1353630.001 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.en (...)\"\n\" \nHash Cond: ((att_loc_hecho.env_id = ba.env_id) AND\n(att_loc_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=2759197 read=785384 dirtied=634\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_loc_hecho (cost=0.83..1516778.41 rows=66\n(...)\"\n\" \nOutput: att_loc_hecho.att_id, att_loc_hecho.ent_inst_att_str_value,\natt_loc_hecho.env_id, att_loc_hecho.bus_ent_inst_id, a (...)\"\n\" \nIndex Cond: (att_loc_hecho.att_id = 1133)\"\n\" \nHeap Fetches: 218787\"\n\" \nBuffers: shared hit=332968 read=93935 dirtied=115\"\n\" \n-> Hash (cost=2659127.72..2659127.72 rows=886647 width=67) 
(actual\ntime=1097642.027..1097642.027 rows=3362294 loops=1)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_nro.ent_inst_att_str_value, att_pad.en (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 215839kB\"\n\" \nBuffers: shared hit=2426229 read=691449 dirtied=519\"\n\" \n-> Hash Right Join (cost=1353880.71..2659127.72 rows=886647 width=67)\n(actual time=466534.722..1095259.942 rows=3362294 (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_nro.ent_inst_att_str_value, att_ (...)\"\n\" \nHash Cond: ((att_nro.env_id = ba.env_id) AND (att_nro.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=2426229 read=691449 dirtied=519\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_nro (cost=0.83..1262736.66 r (...)\"\n\" \nOutput: att_nro.att_id, att_nro.ent_inst_att_str_value, att_nro.env_id,\natt_nro.bus_ent_inst_id, att_nro.reg_s (...)\"\n\" \nIndex Cond: (att_nro.att_id = 1135)\"\n\" \nHeap Fetches: 156988\"\n\" \nBuffers: shared hit=1568458 read=151792 dirtied=285\"\n\" \n-> Hash (cost=1340580.18..1340580.18 rows=886647 width=51) (actual\ntime=466528.985..466528.985 rows=3362294 loops= (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_pad.ent_inst_att_str_value (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 204115kB\"\n\" \nBuffers: shared hit=857771 read=539657 dirtied=234\"\n\" \n-> Hash Right Join (cost=1265450.85..1340580.18 rows=886647 width=51)\n(actual time=464578.744..465343.707 ro (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_pad.ent_inst_att_str (...)\"\n\" \nHash Cond: ((att_manz.env_id = ba.env_id) AND (att_manz.bus_ent_inst_id =\nba.bus_ent_inst_id_auto))\"\n\" \nBuffers: shared hit=857771 read=539657 dirtied=234\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_manz (cost=0.83. 
(...)\"\n\" \nOutput: att_manz.att_id, att_manz.ent_inst_att_str_value, att_manz.env_id,\natt_manz.bus_ent_inst_i (...)\"\n\" \nIndex Cond: (att_manz.att_id = 1134)\"\n\" \nHeap Fetches: 14\"\n\" \nBuffers: shared hit=276 read=15\"\n\" \n-> Hash (cost=1252150.32..1252150.32 rows=886647 width=35) (actual\ntime=464569.271..464569.271 rows=33 (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_pad.ent_inst_a (...)\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 204114kB\"\n\" \nBuffers: shared hit=857495 read=539642 dirtied=234\"\n\" \n-> Hash Right Join (cost=1177020.99..1252150.32 rows=886647 width=35)\n(actual time=184587.973..4 (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1, att_pad.ent_ (...)\"\n\" \nHash Cond: ((att_pad.env_id = ba.env_id) AND (att_pad.bus_ent_inst_id =\nba.bus_ent_inst_id_a (...)\"\n\" \nBuffers: shared hit=857495 read=539642 dirtied=234\"\n\" \n-> Index Only Scan using ix_bus_ent_inst_attr_03 on\npublic.bus_ent_inst_attribute att_pad (...)\"\n\" \nOutput: att_pad.att_id, att_pad.ent_inst_att_str_value, att_pad.env_id,\natt_pad.bus_en (...)\"\n\" \nIndex Cond: (att_pad.att_id = 1136)\"\n\" \nHeap Fetches: 54024\"\n\" \nBuffers: shared hit=334762 read=60835 dirtied=136\"\n\" \n-> Hash (cost=1163720.45..1163720.45 rows=886647 width=19) (actual\ntime=184573.023..184573 (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1\"\n\" \nBuckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\nUsage: 200216 (...)\"\n\" \nBuffers: shared hit=522733 read=478807 dirtied=98\"\n\" \n-> Bitmap Heap Scan on public.bus_ent_instance ba \n(cost=35242.83..1163720.45 rows=88 (...)\"\n\" \nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\nba.att_value_num_1\"\n\" \nRecheck Cond: ((((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND\n((ba.att (...)\"\n\" \nHeap Blocks: exact=981056\"\n\" \nBuffers: shared hit=522733 read=478807 dirtied=98\"\n\" \n-> BitmapOr (cost=35242.83..35242.83 rows=896239 width=0) (actual\ntime=33401.6 (...)\"\n\" \nBuffers: shared hit=43 read=20441\"\n\" \n-> Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01 (cost=0.00..\n(...)\"\n\" \nIndex Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\n\" \nBuffers: shared hit=9 read=5030\"\n\" \n-> Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01 (cost=0.00..\n(...)\"\n\" \nIndex Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\n\" \nBuffers: shared hit=34 read=15411\"\n\" -> Sort \n(cost=125485.07..125573.87 rows=35520 width=13) (actual time=5.831..312.164\nrows=3217523 loops=1)\"\n\" Output:\ndepto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\n\" Sort Key:\ndepto2.env_id, depto2.bus_ent_inst_name_num\"\n\" Sort Method:\nquicksort Memory: 26kB\"\n\" Buffers:\nshared hit=7 read=2\"\n\" -> Bitmap\nHeap Scan on public.bus_ent_instance depto2 (cost=971.85..122800.41\nrows=35520 width=13) (actual time=5.758..5.776 rows=21 loops=1)\"\n\" Output:\ndepto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\n\" Recheck\nCond: (depto2.bus_ent_id = 1091)\"\n\" Heap\nBlocks: exact=5\"\n\" \nBuffers: shared hit=7 read=2\"\n\" -> \nBitmap Index Scan on ix_bus_ent_instance_01 (cost=0.00..962.97 rows=35520\nwidth=0) (actual time=5.733..5.733 rows=21 loops=1)\"\n\" \nIndex Cond: (depto2.bus_ent_id = 1091)\"\n\" \nBuffers: shared hit=2 read=2\"\n\" -> Sort 
\n(cost=125485.07..125573.87 rows=35520 width=13) (actual time=14.418..320.637\nrows=3217335 loops=1)\"\n\" Output: loc2.att_value_1,\nloc2.env_id, loc2.bus_ent_inst_name_num\"\n\" Sort Key: loc2.env_id,\nloc2.bus_ent_inst_name_num\"\n\" Sort Method: quicksort \nMemory: 76kB\"\n\" Buffers: shared hit=112\nread=5\"\n\" -> Bitmap Heap Scan on\npublic.bus_ent_instance loc2 (cost=971.85..122800.41 rows=35520 width=13)\n(actual time=13.305..13.922 rows=725 loops=1)\"\n\" Output:\nloc2.att_value_1, loc2.env_id, loc2.bus_ent_inst_name_num\"\n\" Recheck Cond:\n(loc2.bus_ent_id = 1165)\"\n\" Heap Blocks:\nexact=110\"\n\" Buffers: shared\nhit=112 read=5\"\n\" -> Bitmap Index\nScan on ix_bus_ent_instance_01 (cost=0.00..962.97 rows=35520 width=0)\n(actual time=13.262..13.262 rows=725 loops=1)\"\n\" Index Cond:\n(loc2.bus_ent_id = 1165)\"\n\" Buffers:\nshared hit=2 read=5\"\n\" -> Hash (cost=4.35..4.35 rows=235 width=552) (actual\ntime=0.175..0.175 rows=235 loops=1)\"\n\" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\nbi.att_value_10\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 19kB\"\n\" Buffers: local hit=2\"\n\" -> Seq Scan on pg_temp_179.temp_table bi \n(cost=0.00..4.35 rows=235 width=552) (actual time=0.015..0.055 rows=235\nloops=1)\"\n\" Output: bi.bus_ent_inst_name_num,\nbi.att_value_num_7, bi.att_value_10\"\n\" Buffers: local hit=2\"\n\" -> Index Scan using ix_bus_ent_inst_attr_01 on\npublic.bus_ent_inst_attribute a2 (cost=0.58..237.03 rows=123 width=20)\n(actual time=23.167..23.168 rows=1 loops=306)\"\n\" Output: a2.env_id, a2.bus_ent_inst_id, a2.att_id,\na2.att_row_id_auto, a2.att_index_id, a2.ent_inst_att_num_value,\na2.ent_inst_att_str_value, a2.ent_inst_att_dte_value,\na2.ent_inst_att_doc_id, a2.ent_inst_att_tran_1, a2.ent_inst_att_tran_2, a2\n(...)\"\n\" Index Cond: ((a2.bus_ent_inst_id = ba.bus_ent_inst_id_auto)\nAND (a2.att_id = 1083))\"\n\" Buffers: shared hit=635 read=895\"\n\" SubPlan 1\"\n\" -> Index Scan using ix_bus_ent_inst_attr_01 on\npublic.bus_ent_inst_attribute a (cost=0.58..141.91 rows=72 width=16)\n(actual time=0.646..0.647 rows=1 loops=306)\"\n\" Output: a.ent_inst_att_str_value\"\n\" Index Cond: ((ba.bus_ent_inst_id_auto = a.bus_ent_inst_id)\nAND (a.att_id = 1071))\"\n\" Filter: (a.reg_status = 0)\"\n\" Buffers: shared hit=1434 read=31\"\n\" SubPlan 2\"\n\" -> Index Scan using ix_bus_ent_inst_attr_01 on\npublic.bus_ent_inst_attribute t (cost=0.58..46.15 rows=21 width=16) (actual\ntime=0.839..0.841 rows=0 loops=306)\"\n\" Output: t.ent_inst_att_str_value\"\n\" Index Cond: ((ba.bus_ent_inst_id_auto = t.bus_ent_inst_id)\nAND (t.att_id = 1141))\"\n\" Filter: (t.reg_status = 0)\"\n\" Buffers: shared hit=1217 read=42\"\n\"Planning time: 18.329 ms\"\n\"Execution time: 1972336.524 ms\"\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Fri, 3 Apr 2020 09:18:23 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow query" }, { "msg_contents": "That plan looks like it might have been cropped in places, and the\nformatting is making it tricky to help.\n\nCould you try again, pasting the plan into https://explain.depesz.com/ to\nmake it easier to review?\n\nOn Fri, Apr 3, 2020 at 5:18 PM dangal <[email protected]> wrote:\n\n> Justin thank you very much for your answer, as you can also see the number\n> of\n> rows differs a lot\n> I attach the complete explain, do not attach it because it is large\n>\n> \"HashAggregate (cost=12640757.46..12713163.46 rows=385 
width=720) (actual\n> time=1971962.023..1971962.155 rows=306 loops=1)\"\n> \" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,\n> ((SubPlan 1)), ((SubPlan 2)), a2.ent_inst_att_str_value, ba.att_value_1,\n> depto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis (...)\"\n> \" Group Key: bi.bus_ent_inst_name_num, bi.att_value_num_7,\n> bi.att_value_10,\n> (SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value, ba.att_value_1,\n> depto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis. (...)\"\n> \" Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\n> \" -> Nested Loop (cost=11114347.52..12640740.13 rows=385 width=720)\n> (actual time=1906401.083..1971959.176 rows=306 loops=1)\"\n> \" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\n> bi.att_value_10, (SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value,\n> ba.att_value_1, depto2.att_value_1, loc2.att_value_1,\n> att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value, att_b\n> (...)\"\n> \" Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\n> \" -> Hash Join (cost=11114346.94..12228344.41 rows=1427 width=704)\n> (actual time=1906372.468..1964409.907 rows=306 loops=1)\"\n> \" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\n> bi.att_value_10, ba.bus_ent_inst_id_auto, ba.att_value_1,\n> att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\n> att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, a (...)\"\n> \" Hash Cond: (ba.att_value_num_1 =\n> (bi.bus_ent_inst_name_num)::numeric)\"\n> \" Buffers: shared hit=5814458 read=1033324 dirtied=790, local\n> hit=2\"\n> \" -> Hash Right Join (cost=11114339.65..12172907.42\n> rows=886647 width=158) (actual time=1906344.617..1963668.889 rows=3362294\n> loops=1)\"\n> \" Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\n> ba.att_value_num_1, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\n> att_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a\n> (...)\"\n> \" Hash Cond: ((att_barr.env_id = ba.env_id) AND\n> (att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n> \" Buffers: shared hit=5814458 read=1033324 dirtied=790\"\n> \" -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_barr (cost=0.83..1024093.06 rows=4508264\n> width=24) (actual time=10.435..52888.091 rows=4244011 loops=1)\"\n> \" Output: att_barr.att_id,\n> att_barr.ent_inst_att_str_value, att_barr.env_id, att_barr.bus_ent_inst_id,\n> att_barr.reg_status\"\n> \" Index Cond: (att_barr.att_id = 1115)\"\n> \" Heap Fetches: 120577\"\n> \" Buffers: shared hit=503194 read=31197 dirtied=5\"\n> \" -> Hash (cost=11101039.12..11101039.12 rows=886647\n> width=146) (actual time=1906329.888..1906329.888 rows=3362294 loops=1)\"\n> \" Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\n> ba.env_id, ba.att_value_num_1, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\n> att_pad.ent_inst_att_str_value, att_manz.ent_inst_att (...)\"\n> \" Buckets: 4194304 (originally 1048576) Batches:\n> 1\n> (originally 1) Memory Usage: 396824kB\"\n> \" Buffers: shared hit=5311264 read=1002127\n> dirtied=785\"\n> \" -> Hash Right Join\n> (cost=10328938.09..11101039.12 rows=886647 width=146) (actual\n> time=1867557.718..1904218.946 rows=3362294 loops=1)\"\n> \" Output: ba.bus_ent_inst_id_auto,\n> ba.att_value_1, 
ba.env_id, ba.att_value_num_1,\n> att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\n> att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value,\n> att_manz.ent_in (...)\"\n> \" Hash Cond: ((att_apt.env_id = ba.env_id)\n> AND (att_apt.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n> \" Buffers: shared hit=5311264 read=1002127\n> dirtied=785\"\n> \" -> Index Only Scan using\n> ix_bus_ent_inst_attr_03 on public.bus_ent_inst_attribute att_apt\n> (cost=0.83..746958.06 rows=3287982 width=24) (actual time=0.091..32788.731\n> rows=3491599 loops=1)\"\n> \" Output: att_apt.att_id,\n> att_apt.ent_inst_att_str_value, att_apt.env_id, att_apt.bus_ent_inst_id,\n> att_apt.reg_status\"\n> \" Index Cond: (att_apt.att_id = 1113)\"\n> \" Heap Fetches: 88910\"\n> \" Buffers: shared hit=178090\n> read=25341\n> dirtied=5\"\n> \" -> Hash (cost=10315637.55..10315637.55\n> rows=886647 width=130) (actual time=1867553.445..1867553.445 rows=3362294\n> loops=1)\"\n> \" Output: ba.bus_ent_inst_id_auto,\n> ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\n> att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att_manz.\n> (...)\"\n> \" Buckets: 4194304 (originally\n> 1048576)\n> Batches: 1 (originally 1) Memory Usage: 376885kB\"\n> \" Buffers: shared hit=5133174\n> read=976786 dirtied=780\"\n> \" -> Merge Left Join\n> (cost=10304076.40..10315637.55 rows=886647 width=130) (actual\n> time=1862979.687..1865773.765 rows=3362294 loops=1)\"\n> \" Output:\n> ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\n> att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att (...)\"\n> \" Merge Cond: ((ba.env_id =\n> loc2.env_id) AND (((att_loc_hecho.ent_inst_att_str_value)::integer) =\n> loc2.bus_ent_inst_name_num))\"\n> \" Buffers: shared hit=5133174\n> read=976786 dirtied=780\"\n> \" -> Sort\n> (cost=10178591.32..10180807.94 rows=886647 width=141) (actual\n> time=1862965.240..1863856.321 rows=3362294 loops=1)\"\n> \" Output:\n> ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_st (...)\"\n> \" Sort Key: ba.env_id,\n> ((att_loc_hecho.ent_inst_att_str_value)::integer)\"\n> \" Sort Method: quicksort\n> Memory: 544870kB\"\n> \" Buffers: shared\n> hit=5133062 read=976781 dirtied=780\"\n> \" -> Merge Left Join\n> (cost=10079438.31..10090999.47 rows=886647 width=141) (actual\n> time=1854085.484..1857592.771 rows=3362294 loops=1)\"\n> \" Output:\n> ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\n> att_nro.ent_inst_att_str_value, att_bis.ent_inst_ (...)\"\n> \" Merge Cond:\n> ((ba.env_id = depto2.env_id) AND\n> (((att_dir_hecho.ent_inst_att_str_value)::integer) =\n> depto2.bus_ent_inst_name_num))\"\n> \" Buffers: shared\n> hit=5133062 read=976781 dirtied=780\"\n> \" -> Sort\n> (cost=9953953.24..9956169.85 rows=886647 width=152) (actual\n> time=1854079.630..1855329.406 rows=3362294 loops=1)\"\n> \" Output:\n> ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\n> att_call.ent_inst_att_str_value, att_n (...)\"\n> \" Sort Key:\n> ba.env_id, ((att_dir_hecho.ent_inst_att_str_value)::integer)\"\n> \" Sort 
Method:\n> quicksort Memory: 544857kB\"\n> \" Buffers:\n> shared hit=5133055 read=976779 dirtied=780\"\n> \" -> Hash\n> Right Join (cost=9791232.05..9866361.38 rows=886647 width=152) (actual\n> time=1844734.652..1849217.758 rows=3362294 loops=1)\"\n> \"\n> Output:\n> ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\n> att_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\n> att_call.ent_inst_att_str_value, (...)\"\n> \" Hash\n> Cond: ((att_rut.env_id = ba.env_id) AND (att_rut.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n> Buffers: shared hit=5133055 read=976779 dirtied=780\"\n> \" ->\n> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_rut (cost=0.83..72690.43 rows=319036\n> width=24) (actual time=17.325..3078.312 rows=149644 loops=1)\"\n> \"\n> Output: att_rut.att_id, att_rut.ent_inst_att_str_value, att_rut.env_id,\n> att_rut.bus_ent_inst_id, att_rut.reg_status\"\n> \"\n> Index Cond: (att_rut.att_id = 1138)\"\n> \"\n> Heap Fetches: 5299\"\n> \"\n> Buffers: shared hit=26350 read=1137\"\n> \" ->\n> Hash (cost=9777931.51..9777931.51 rows=886647 width=136) (actual\n> time=1844713.350..1844713.350 rows=3362294 loops=1)\"\n> \"\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_ (...)\"\n> \"\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 329015kB\"\n> \"\n> Buffers: shared hit=5106705 read=975642 dirtied=780\"\n> \"\n> -> Hash Right Join (cost=9705206.15..9777931.51 rows=886647 width=136)\n> (actual time=1837569.880..1842945.853 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_at (...)\"\n> \"\n>\n> Hash Cond: ((att_km.env_id = ba.env_id) AND (att_km.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> Buffers: shared hit=5106705 read=975642 dirtied=780\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_02 on\n> public.bus_ent_inst_attribute att_km (cost=0.70..70286.34 rows=319036\n> width=13) (actual time=0.107..2995.494 rows=149942 l (...)\"\n> \"\n>\n> Output: att_km.att_id, att_km.ent_inst_att_num_value, att_km.env_id,\n> att_km.bus_ent_inst_id, att_km.reg_status\"\n> \"\n>\n> Index Cond: (att_km.att_id = 1132)\"\n> \"\n>\n> Heap Fetches: 5330\"\n> \"\n>\n> Buffers: shared hit=59470 read=1171\"\n> \"\n>\n> -> Hash (cost=9691905.74..9691905.74 rows=886647 width=131) (actual\n> time=1837565.949..1837565.949 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_value, att_call.ent_i (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 328650kB\"\n> \"\n>\n> Buffers: shared hit=5047235 read=974471 dirtied=780\"\n> \"\n>\n> -> Hash Right Join (cost=7694366.79..9691905.74 rows=886647 width=131)\n> (actual time=1710903.369..1834807.221 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_value, att_call (...)\"\n> \"\n>\n> Hash Cond: ((att_bis.env_id = ba.env_id) AND (att_bis.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> 
Buffers: shared hit=5047235 read=974471 dirtied=780\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_bis (cost=0.83..1932476.93 rows=8508077\n> width=24) (actual time=6.488..116892 (...)\"\n> \"\n>\n> Output: att_bis.att_id, att_bis.ent_inst_att_str_value, att_bis.env_id,\n> att_bis.bus_ent_inst_id, att_bis.reg_status\"\n> \"\n>\n> Index Cond: (att_bis.att_id = 1117)\"\n> \"\n>\n> Heap Fetches: 228123\"\n> \"\n>\n> Buffers: shared hit=218185 read=52064 dirtied=27\"\n> \"\n>\n> -> Hash (cost=7681066.26..7681066.26 rows=886647 width=115) (actual\n> time=1710893.007..1710893.007 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_value, at (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 309513kB\"\n> \"\n>\n> Buffers: shared hit=4829050 read=922407 dirtied=753\"\n> \"\n>\n> -> Hash Right Join (cost=5969990.07..7681066.26 rows=886647 width=115)\n> (actual time=1566042.427..1708291.649 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_str_val (...)\"\n> \"\n>\n> Hash Cond: ((att_call.env_id = ba.env_id) AND (att_call.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> Buffers: shared hit=4829050 read=922407 dirtied=753\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_call (cost=0.83..1655345.90 rows=7287794\n> width=24) (actual time= (...)\"\n> \"\n>\n> Output: att_call.att_id, att_call.ent_inst_att_str_value, att_call.env_id,\n> att_call.bus_ent_inst_id, att_call.reg_status\"\n> \"\n>\n> Index Cond: (att_call.att_id = 1119)\"\n> \"\n>\n> Heap Fetches: 213801\"\n> \"\n>\n> Buffers: shared hit=1852588 read=60151 dirtied=23\"\n> \"\n>\n> -> Hash (cost=5956689.54..5956689.54 rows=886647 width=99) (actual\n> time=1566015.832..1566015.832 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst_att_s (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 258291kB\"\n> \"\n>\n> Buffers: shared hit=2976462 read=862256 dirtied=730\"\n> \"\n>\n> -> Hash Right Join (cost=4253571.63..5956689.54 rows=886647 width=99)\n> (actual time=1355922.435..1563760.249 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\n> att_loc_hecho.ent_inst (...)\"\n> \"\n>\n> Hash Cond: ((att_dir_hecho.env_id = ba.env_id) AND\n> (att_dir_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> Buffers: shared hit=2976462 read=862256 dirtied=730\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_dir_hecho (cost=0.83..1647646.84\n> rows=7253898 width= (...)\"\n> \"\n>\n> Output: att_dir_hecho.att_id, att_dir_hecho.ent_inst_att_str_value,\n> att_dir_hecho.env_id, att_dir_hecho.bus_ent_inst_id, att_dir_hecho (...)\"\n> \"\n>\n> Index Cond: (att_dir_hecho.att_id = 1122)\"\n> \"\n>\n> Heap Fetches: 221189\"\n> \"\n>\n> Buffers: shared hit=217265 read=76872 dirtied=96\"\n> \"\n>\n> -> Hash (cost=4240271.10..4240271.10 rows=886647 width=83) (actual\n> 
time=1355910.157..1355910.157 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.ent_inst\n> (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 224784kB\"\n> \"\n>\n> Buffers: shared hit=2759197 read=785384 dirtied=634\"\n> \"\n>\n> -> Hash Right Join (cost=2672428.25..4240271.10 rows=886647 width=83)\n> (actual time=1097647.410..1353630.001 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.en (...)\"\n> \"\n>\n> Hash Cond: ((att_loc_hecho.env_id = ba.env_id) AND\n> (att_loc_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> Buffers: shared hit=2759197 read=785384 dirtied=634\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_loc_hecho (cost=0.83..1516778.41 rows=66\n> (...)\"\n> \"\n>\n> Output: att_loc_hecho.att_id, att_loc_hecho.ent_inst_att_str_value,\n> att_loc_hecho.env_id, att_loc_hecho.bus_ent_inst_id, a (...)\"\n> \"\n>\n> Index Cond: (att_loc_hecho.att_id = 1133)\"\n> \"\n>\n> Heap Fetches: 218787\"\n> \"\n>\n> Buffers: shared hit=332968 read=93935 dirtied=115\"\n> \"\n>\n> -> Hash (cost=2659127.72..2659127.72 rows=886647 width=67) (actual\n> time=1097642.027..1097642.027 rows=3362294 loops=1)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_nro.ent_inst_att_str_value, att_pad.en (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 215839kB\"\n> \"\n>\n> Buffers: shared hit=2426229 read=691449 dirtied=519\"\n> \"\n>\n> -> Hash Right Join (cost=1353880.71..2659127.72 rows=886647 width=67)\n> (actual time=466534.722..1095259.942 rows=3362294 (...)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_nro.ent_inst_att_str_value, att_ (...)\"\n> \"\n>\n> Hash Cond: ((att_nro.env_id = ba.env_id) AND (att_nro.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n>\n> Buffers: shared hit=2426229 read=691449 dirtied=519\"\n> \"\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_nro (cost=0.83..1262736.66 r (...)\"\n> \"\n>\n> Output: att_nro.att_id, att_nro.ent_inst_att_str_value, att_nro.env_id,\n> att_nro.bus_ent_inst_id, att_nro.reg_s (...)\"\n> \"\n>\n> Index Cond: (att_nro.att_id = 1135)\"\n> \"\n>\n> Heap Fetches: 156988\"\n> \"\n>\n> Buffers: shared hit=1568458 read=151792 dirtied=285\"\n> \"\n>\n> -> Hash (cost=1340580.18..1340580.18 rows=886647 width=51) (actual\n> time=466528.985..466528.985 rows=3362294 loops= (...)\"\n> \"\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_pad.ent_inst_att_str_value (...)\"\n> \"\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 204115kB\"\n> \"\n>\n> Buffers: shared hit=857771 read=539657 dirtied=234\"\n> \"\n>\n> -> Hash Right Join (cost=1265450.85..1340580.18 rows=886647 width=51)\n> (actual time=464578.744..465343.707 ro (...)\"\n> \"\n>\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_pad.ent_inst_att_str (...)\"\n> \"\n>\n>\n> Hash Cond: ((att_manz.env_id = ba.env_id) AND (att_manz.bus_ent_inst_id =\n> ba.bus_ent_inst_id_auto))\"\n> \"\n>\n>\n> Buffers: shared hit=857771 read=539657 
dirtied=234\"\n> \"\n>\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_manz (cost=0.83. (...)\"\n> \"\n>\n>\n> Output: att_manz.att_id, att_manz.ent_inst_att_str_value, att_manz.env_id,\n> att_manz.bus_ent_inst_i (...)\"\n> \"\n>\n>\n> Index Cond: (att_manz.att_id = 1134)\"\n> \"\n>\n>\n> Heap Fetches: 14\"\n> \"\n>\n>\n> Buffers: shared hit=276 read=15\"\n> \"\n>\n>\n> -> Hash (cost=1252150.32..1252150.32 rows=886647 width=35) (actual\n> time=464569.271..464569.271 rows=33 (...)\"\n> \"\n>\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_pad.ent_inst_a (...)\"\n> \"\n>\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 204114kB\"\n> \"\n>\n>\n> Buffers: shared hit=857495 read=539642 dirtied=234\"\n> \"\n>\n>\n> -> Hash Right Join (cost=1177020.99..1252150.32 rows=886647 width=35)\n> (actual time=184587.973..4 (...)\"\n> \"\n>\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1, att_pad.ent_ (...)\"\n> \"\n>\n>\n> Hash Cond: ((att_pad.env_id = ba.env_id) AND (att_pad.bus_ent_inst_id =\n> ba.bus_ent_inst_id_a (...)\"\n> \"\n>\n>\n> Buffers: shared hit=857495 read=539642 dirtied=234\"\n> \"\n>\n>\n> -> Index Only Scan using ix_bus_ent_inst_attr_03 on\n> public.bus_ent_inst_attribute att_pad (...)\"\n> \"\n>\n>\n> Output: att_pad.att_id, att_pad.ent_inst_att_str_value, att_pad.env_id,\n> att_pad.bus_en (...)\"\n> \"\n>\n>\n> Index Cond: (att_pad.att_id = 1136)\"\n> \"\n>\n>\n> Heap Fetches: 54024\"\n> \"\n>\n>\n> Buffers: shared hit=334762 read=60835 dirtied=136\"\n> \"\n>\n>\n> -> Hash (cost=1163720.45..1163720.45 rows=886647 width=19) (actual\n> time=184573.023..184573 (...)\"\n> \"\n>\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1\"\n> \"\n>\n>\n> Buckets: 4194304 (originally 1048576) Batches: 1 (originally 1) Memory\n> Usage: 200216 (...)\"\n> \"\n>\n>\n> Buffers: shared hit=522733 read=478807 dirtied=98\"\n> \"\n>\n>\n> -> Bitmap Heap Scan on public.bus_ent_instance ba\n> (cost=35242.83..1163720.45 rows=88 (...)\"\n> \"\n>\n>\n> Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\n> ba.att_value_num_1\"\n> \"\n>\n>\n> Recheck Cond: ((((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND\n> ((ba.att (...)\"\n> \"\n>\n>\n> Heap Blocks: exact=981056\"\n> \"\n>\n>\n> Buffers: shared hit=522733 read=478807 dirtied=98\"\n> \"\n>\n>\n> -> BitmapOr (cost=35242.83..35242.83 rows=896239 width=0) (actual\n> time=33401.6 (...)\"\n> \"\n>\n>\n> Buffers: shared hit=43 read=20441\"\n> \"\n>\n>\n> -> Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01 (cost=0.00..\n> (...)\"\n> \"\n>\n>\n> Index Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\n> \"\n>\n>\n> Buffers: shared hit=9 read=5030\"\n> \"\n>\n>\n> -> Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01 (cost=0.00..\n> (...)\"\n> \"\n>\n>\n> Index Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\n> \"\n>\n>\n> Buffers: shared hit=34 read=15411\"\n> \" -> Sort\n> (cost=125485.07..125573.87 rows=35520 width=13) (actual time=5.831..312.164\n> rows=3217523 loops=1)\"\n> \" Output:\n> depto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\n> \" Sort Key:\n> depto2.env_id, depto2.bus_ent_inst_name_num\"\n> \" Sort Method:\n> quicksort Memory: 26kB\"\n> \" Buffers:\n> shared hit=7 read=2\"\n> \" -> Bitmap\n> Heap Scan on public.bus_ent_instance depto2 (cost=971.85..122800.41\n> 
rows=35520 width=13) (actual time=5.758..5.776 rows=21 loops=1)\"\n> \"\n> Output:\n> depto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\n> \"\n> Recheck\n> Cond: (depto2.bus_ent_id = 1091)\"\n> \" Heap\n> Blocks: exact=5\"\n> \"\n> Buffers: shared hit=7 read=2\"\n> \" ->\n> Bitmap Index Scan on ix_bus_ent_instance_01 (cost=0.00..962.97 rows=35520\n> width=0) (actual time=5.733..5.733 rows=21 loops=1)\"\n> \"\n> Index Cond: (depto2.bus_ent_id = 1091)\"\n> \"\n> Buffers: shared hit=2 read=2\"\n> \" -> Sort\n> (cost=125485.07..125573.87 rows=35520 width=13) (actual\n> time=14.418..320.637\n> rows=3217335 loops=1)\"\n> \" Output:\n> loc2.att_value_1,\n> loc2.env_id, loc2.bus_ent_inst_name_num\"\n> \" Sort Key: loc2.env_id,\n> loc2.bus_ent_inst_name_num\"\n> \" Sort Method: quicksort\n> Memory: 76kB\"\n> \" Buffers: shared hit=112\n> read=5\"\n> \" -> Bitmap Heap Scan on\n> public.bus_ent_instance loc2 (cost=971.85..122800.41 rows=35520 width=13)\n> (actual time=13.305..13.922 rows=725 loops=1)\"\n> \" Output:\n> loc2.att_value_1, loc2.env_id, loc2.bus_ent_inst_name_num\"\n> \" Recheck Cond:\n> (loc2.bus_ent_id = 1165)\"\n> \" Heap Blocks:\n> exact=110\"\n> \" Buffers: shared\n> hit=112 read=5\"\n> \" -> Bitmap Index\n> Scan on ix_bus_ent_instance_01 (cost=0.00..962.97 rows=35520 width=0)\n> (actual time=13.262..13.262 rows=725 loops=1)\"\n> \" Index Cond:\n> (loc2.bus_ent_id = 1165)\"\n> \" Buffers:\n> shared hit=2 read=5\"\n> \" -> Hash (cost=4.35..4.35 rows=235 width=552) (actual\n> time=0.175..0.175 rows=235 loops=1)\"\n> \" Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\n> bi.att_value_10\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 19kB\"\n> \" Buffers: local hit=2\"\n> \" -> Seq Scan on pg_temp_179.temp_table bi\n> (cost=0.00..4.35 rows=235 width=552) (actual time=0.015..0.055 rows=235\n> loops=1)\"\n> \" Output: bi.bus_ent_inst_name_num,\n> bi.att_value_num_7, bi.att_value_10\"\n> \" Buffers: local hit=2\"\n> \" -> Index Scan using ix_bus_ent_inst_attr_01 on\n> public.bus_ent_inst_attribute a2 (cost=0.58..237.03 rows=123 width=20)\n> (actual time=23.167..23.168 rows=1 loops=306)\"\n> \" Output: a2.env_id, a2.bus_ent_inst_id, a2.att_id,\n> a2.att_row_id_auto, a2.att_index_id, a2.ent_inst_att_num_value,\n> a2.ent_inst_att_str_value, a2.ent_inst_att_dte_value,\n> a2.ent_inst_att_doc_id, a2.ent_inst_att_tran_1, a2.ent_inst_att_tran_2, a2\n> (...)\"\n> \" Index Cond: ((a2.bus_ent_inst_id = ba.bus_ent_inst_id_auto)\n> AND (a2.att_id = 1083))\"\n> \" Buffers: shared hit=635 read=895\"\n> \" SubPlan 1\"\n> \" -> Index Scan using ix_bus_ent_inst_attr_01 on\n> public.bus_ent_inst_attribute a (cost=0.58..141.91 rows=72 width=16)\n> (actual time=0.646..0.647 rows=1 loops=306)\"\n> \" Output: a.ent_inst_att_str_value\"\n> \" Index Cond: ((ba.bus_ent_inst_id_auto = a.bus_ent_inst_id)\n> AND (a.att_id = 1071))\"\n> \" Filter: (a.reg_status = 0)\"\n> \" Buffers: shared hit=1434 read=31\"\n> \" SubPlan 2\"\n> \" -> Index Scan using ix_bus_ent_inst_attr_01 on\n> public.bus_ent_inst_attribute t (cost=0.58..46.15 rows=21 width=16)\n> (actual\n> time=0.839..0.841 rows=0 loops=306)\"\n> \" Output: t.ent_inst_att_str_value\"\n> \" Index Cond: ((ba.bus_ent_inst_id_auto = t.bus_ent_inst_id)\n> AND (t.att_id = 1141))\"\n> \" Filter: (t.reg_status = 0)\"\n> \" Buffers: shared hit=1217 read=42\"\n> \"Planning time: 18.329 ms\"\n> \"Execution time: 1972336.524 ms\"\n>\n>\n>\n> --\n> Sent from:\n> https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\n>\n\nThat plan 
looks like it might have been cropped in places, and the formatting is making it tricky to help. Could you try again, pasting the plan into https://explain.depesz.com/ to make it easier to review? On Fri, Apr 3, 2020 at 5:18 PM dangal <[email protected]> wrote: Justin thank you very much for your answer, as you can also see the number of\r\nrows differs a lot\r\nI attach the complete explain, do not attach it because it is large\n\r\n\"HashAggregate  (cost=12640757.46..12713163.46 rows=385 width=720) (actual\r\ntime=1971962.023..1971962.155 rows=306 loops=1)\"\r\n\"  Output: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,\r\n((SubPlan 1)), ((SubPlan 2)), a2.ent_inst_att_str_value, ba.att_value_1,\r\ndepto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis (...)\"\r\n\"  Group Key: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,\r\n(SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value, ba.att_value_1,\r\ndepto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis. (...)\"\r\n\"  Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\r\n\"  ->  Nested Loop  (cost=11114347.52..12640740.13 rows=385 width=720)\r\n(actual time=1906401.083..1971959.176 rows=306 loops=1)\"\r\n\"        Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\r\nbi.att_value_10, (SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value,\r\nba.att_value_1, depto2.att_value_1, loc2.att_value_1,\r\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value, att_b\r\n(...)\"\r\n\"        Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2\"\r\n\"        ->  Hash Join  (cost=11114346.94..12228344.41 rows=1427 width=704)\r\n(actual time=1906372.468..1964409.907 rows=306 loops=1)\"\r\n\"              Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\r\nbi.att_value_10, ba.bus_ent_inst_id_auto, ba.att_value_1,\r\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\r\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, a (...)\"\r\n\"              Hash Cond: (ba.att_value_num_1 =\r\n(bi.bus_ent_inst_name_num)::numeric)\"\r\n\"              Buffers: shared hit=5814458 read=1033324 dirtied=790, local\r\nhit=2\"\r\n\"              ->  Hash Right Join  (cost=11114339.65..12172907.42\r\nrows=886647 width=158) (actual time=1906344.617..1963668.889 rows=3362294\r\nloops=1)\"\r\n\"                    Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\r\nba.att_value_num_1, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\r\natt_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a\r\n(...)\"\r\n\"                    Hash Cond: ((att_barr.env_id = ba.env_id) AND\r\n(att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\r\n\"                    Buffers: shared hit=5814458 read=1033324 dirtied=790\"\r\n\"                    ->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_barr  (cost=0.83..1024093.06 rows=4508264\r\nwidth=24) (actual time=10.435..52888.091 rows=4244011 loops=1)\"\r\n\"                          Output: att_barr.att_id,\r\natt_barr.ent_inst_att_str_value, att_barr.env_id, att_barr.bus_ent_inst_id,\r\natt_barr.reg_status\"\r\n\"                          Index Cond: (att_barr.att_id = 1115)\"\r\n\"                          Heap Fetches: 120577\"\r\n\"                          Buffers: shared hit=503194 read=31197 dirtied=5\"\r\n\"      
              ->  Hash  (cost=11101039.12..11101039.12 rows=886647\r\nwidth=146) (actual time=1906329.888..1906329.888 rows=3362294 loops=1)\"\r\n\"                          Output: ba.bus_ent_inst_id_auto, ba.att_value_1,\r\nba.env_id, ba.att_value_num_1, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,\r\natt_pad.ent_inst_att_str_value, att_manz.ent_inst_att (...)\"\r\n\"                          Buckets: 4194304 (originally 1048576)  Batches: 1\r\n(originally 1)  Memory Usage: 396824kB\"\r\n\"                          Buffers: shared hit=5311264 read=1002127\r\ndirtied=785\"\r\n\"                          ->  Hash Right Join \r\n(cost=10328938.09..11101039.12 rows=886647 width=146) (actual\r\ntime=1867557.718..1904218.946 rows=3362294 loops=1)\"\r\n\"                                Output: ba.bus_ent_inst_id_auto,\r\nba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\r\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value,\r\natt_manz.ent_in (...)\"\r\n\"                                Hash Cond: ((att_apt.env_id = ba.env_id)\r\nAND (att_apt.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\r\n\"                                Buffers: shared hit=5311264 read=1002127\r\ndirtied=785\"\r\n\"                                ->  Index Only Scan using\r\nix_bus_ent_inst_attr_03 on public.bus_ent_inst_attribute att_apt \r\n(cost=0.83..746958.06 rows=3287982 width=24) (actual time=0.091..32788.731\r\nrows=3491599 loops=1)\"\r\n\"                                      Output: att_apt.att_id,\r\natt_apt.ent_inst_att_str_value, att_apt.env_id, att_apt.bus_ent_inst_id,\r\natt_apt.reg_status\"\r\n\"                                      Index Cond: (att_apt.att_id = 1113)\"\r\n\"                                      Heap Fetches: 88910\"\r\n\"                                      Buffers: shared hit=178090 read=25341\r\ndirtied=5\"\r\n\"                                ->  Hash  (cost=10315637.55..10315637.55\r\nrows=886647 width=130) (actual time=1867553.445..1867553.445 rows=3362294\r\nloops=1)\"\r\n\"                                      Output: ba.bus_ent_inst_id_auto,\r\nba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\r\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att_manz.\r\n(...)\"\r\n\"                                      Buckets: 4194304 (originally 1048576) \r\nBatches: 1 (originally 1)  Memory Usage: 376885kB\"\r\n\"                                      Buffers: shared hit=5133174\r\nread=976786 dirtied=780\"\r\n\"                                      ->  Merge Left Join \r\n(cost=10304076.40..10315637.55 rows=886647 width=130) (actual\r\ntime=1862979.687..1865773.765 rows=3362294 loops=1)\"\r\n\"                                            Output:\r\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,\r\natt_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att (...)\"\r\n\"                                            Merge Cond: ((ba.env_id =\r\nloc2.env_id) AND (((att_loc_hecho.ent_inst_att_str_value)::integer) =\r\nloc2.bus_ent_inst_name_num))\"\r\n\"                                            Buffers: shared hit=5133174\r\nread=976786 dirtied=780\"\r\n\"                                            ->  Sort \r\n(cost=10178591.32..10180807.94 rows=886647 width=141) 
(actual\r\ntime=1862965.240..1863856.321 rows=3362294 loops=1)\"\r\n\"                                                  Output:\r\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_att_st (...)\"\r\n\"                                                  Sort Key: ba.env_id,\r\n((att_loc_hecho.ent_inst_att_str_value)::integer)\"\r\n\"                                                  Sort Method: quicksort \r\nMemory: 544870kB\"\r\n\"                                                  Buffers: shared\r\nhit=5133062 read=976781 dirtied=780\"\r\n\"                                                  ->  Merge Left Join \r\n(cost=10079438.31..10090999.47 rows=886647 width=141) (actual\r\ntime=1854085.484..1857592.771 rows=3362294 loops=1)\"\r\n\"                                                        Output:\r\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_value,\r\natt_nro.ent_inst_att_str_value, att_bis.ent_inst_ (...)\"\r\n\"                                                        Merge Cond:\r\n((ba.env_id = depto2.env_id) AND\r\n(((att_dir_hecho.ent_inst_att_str_value)::integer) =\r\ndepto2.bus_ent_inst_name_num))\"\r\n\"                                                        Buffers: shared\r\nhit=5133062 read=976781 dirtied=780\"\r\n\"                                                        ->  Sort \r\n(cost=9953953.24..9956169.85 rows=886647 width=152) (actual\r\ntime=1854079.630..1855329.406 rows=3362294 loops=1)\"\r\n\"                                                              Output:\r\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\r\natt_call.ent_inst_att_str_value, att_n (...)\"\r\n\"                                                              Sort Key:\r\nba.env_id, ((att_dir_hecho.ent_inst_att_str_value)::integer)\"\r\n\"                                                              Sort Method:\r\nquicksort  Memory: 544857kB\"\r\n\"                                                              Buffers:\r\nshared hit=5133055 read=976779 dirtied=780\"\r\n\"                                                              ->  Hash\r\nRight Join  (cost=9791232.05..9866361.38 rows=886647 width=152) (actual\r\ntime=1844734.652..1849217.758 rows=3362294 loops=1)\"\r\n\"                                                                    Output:\r\nba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id, ba.att_value_num_1,\r\natt_dir_hecho.ent_inst_att_str_value, att_loc_hecho.ent_inst_att_str_value,\r\natt_call.ent_inst_att_str_value, (...)\"\r\n\"                                                                    Hash\r\nCond: ((att_rut.env_id = ba.env_id) AND (att_rut.bus_ent_inst_id =\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                   \r\nBuffers: shared hit=5133055 read=976779 dirtied=780\"\r\n\"                                                                    -> \r\nIndex Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_rut  (cost=0.83..72690.43 rows=319036\r\nwidth=24) (actual time=17.325..3078.312 rows=149644 loops=1)\"\r\n\"                                                                         \r\nOutput: att_rut.att_id, att_rut.ent_inst_att_str_value, 
att_rut.env_id,\r\natt_rut.bus_ent_inst_id, att_rut.reg_status\"\r\n\"                                                                         \r\nIndex Cond: (att_rut.att_id = 1138)\"\r\n\"                                                                         \r\nHeap Fetches: 5299\"\r\n\"                                                                         \r\nBuffers: shared hit=26350 read=1137\"\r\n\"                                                                    -> \r\nHash  (cost=9777931.51..9777931.51 rows=886647 width=136) (actual\r\ntime=1844713.350..1844713.350 rows=3362294 loops=1)\"\r\n\"                                                                         \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_att_str_ (...)\"\r\n\"                                                                         \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 329015kB\"\r\n\"                                                                         \r\nBuffers: shared hit=5106705 read=975642 dirtied=780\"\r\n\"                                                                         \r\n->  Hash Right Join  (cost=9705206.15..9777931.51 rows=886647 width=136)\r\n(actual time=1837569.880..1842945.853 rows=3362294 loops=1)\"\r\n\"                                                                               \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_inst_at (...)\"\r\n\"                                                                               \r\nHash Cond: ((att_km.env_id = ba.env_id) AND (att_km.bus_ent_inst_id =\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                               \r\nBuffers: shared hit=5106705 read=975642 dirtied=780\"\r\n\"                                                                               \r\n->  Index Only Scan using ix_bus_ent_inst_attr_02 on\r\npublic.bus_ent_inst_attribute att_km  (cost=0.70..70286.34 rows=319036\r\nwidth=13) (actual time=0.107..2995.494 rows=149942 l (...)\"\r\n\"                                                                                     \r\nOutput: att_km.att_id, att_km.ent_inst_att_num_value, att_km.env_id,\r\natt_km.bus_ent_inst_id, att_km.reg_status\"\r\n\"                                                                                     \r\nIndex Cond: (att_km.att_id = 1132)\"\r\n\"                                                                                     \r\nHeap Fetches: 5330\"\r\n\"                                                                                     \r\nBuffers: shared hit=59470 read=1171\"\r\n\"                                                                               \r\n->  Hash  (cost=9691905.74..9691905.74 rows=886647 width=131) (actual\r\ntime=1837565.949..1837565.949 rows=3362294 loops=1)\"\r\n\"                                                                                     \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_value, att_call.ent_i (...)\"\r\n\"                                                                                     \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  
Memory\r\nUsage: 328650kB\"\r\n\"                                                                                     \r\nBuffers: shared hit=5047235 read=974471 dirtied=780\"\r\n\"                                                                                     \r\n->  Hash Right Join  (cost=7694366.79..9691905.74 rows=886647 width=131)\r\n(actual time=1710903.369..1834807.221 rows=3362294 loops=1)\"\r\n\"                                                                                           \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_value, att_call (...)\"\r\n\"                                                                                           \r\nHash Cond: ((att_bis.env_id = ba.env_id) AND (att_bis.bus_ent_inst_id =\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                                           \r\nBuffers: shared hit=5047235 read=974471 dirtied=780\"\r\n\"                                                                                           \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_bis  (cost=0.83..1932476.93 rows=8508077\r\nwidth=24) (actual time=6.488..116892 (...)\"\r\n\"                                                                                                 \r\nOutput: att_bis.att_id, att_bis.ent_inst_att_str_value, att_bis.env_id,\r\natt_bis.bus_ent_inst_id, att_bis.reg_status\"\r\n\"                                                                                                 \r\nIndex Cond: (att_bis.att_id = 1117)\"\r\n\"                                                                                                 \r\nHeap Fetches: 228123\"\r\n\"                                                                                                 \r\nBuffers: shared hit=218185 read=52064 dirtied=27\"\r\n\"                                                                                           \r\n->  Hash  (cost=7681066.26..7681066.26 rows=886647 width=115) (actual\r\ntime=1710893.007..1710893.007 rows=3362294 loops=1)\"\r\n\"                                                                                                 \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_value, at (...)\"\r\n\"                                                                                                 \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 309513kB\"\r\n\"                                                                                                 \r\nBuffers: shared hit=4829050 read=922407 dirtied=753\"\r\n\"                                                                                                 \r\n->  Hash Right Join  (cost=5969990.07..7681066.26 rows=886647 width=115)\r\n(actual time=1566042.427..1708291.649 rows=3362294 loops=1)\"\r\n\"                                                                                                       \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_str_val (...)\"\r\n\"                                                                                                       \r\nHash Cond: ((att_call.env_id = ba.env_id) AND (att_call.bus_ent_inst_id 
=\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                                                       \r\nBuffers: shared hit=4829050 read=922407 dirtied=753\"\r\n\"                                                                                                       \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_call  (cost=0.83..1655345.90 rows=7287794\r\nwidth=24) (actual time= (...)\"\r\n\"                                                                                                             \r\nOutput: att_call.att_id, att_call.ent_inst_att_str_value, att_call.env_id,\r\natt_call.bus_ent_inst_id, att_call.reg_status\"\r\n\"                                                                                                             \r\nIndex Cond: (att_call.att_id = 1119)\"\r\n\"                                                                                                             \r\nHeap Fetches: 213801\"\r\n\"                                                                                                             \r\nBuffers: shared hit=1852588 read=60151 dirtied=23\"\r\n\"                                                                                                       \r\n->  Hash  (cost=5956689.54..5956689.54 rows=886647 width=99) (actual\r\ntime=1566015.832..1566015.832 rows=3362294 loops=1)\"\r\n\"                                                                                                             \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst_att_s (...)\"\r\n\"                                                                                                             \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 258291kB\"\r\n\"                                                                                                             \r\nBuffers: shared hit=2976462 read=862256 dirtied=730\"\r\n\"                                                                                                             \r\n->  Hash Right Join  (cost=4253571.63..5956689.54 rows=886647 width=99)\r\n(actual time=1355922.435..1563760.249 rows=3362294 loops=1)\"\r\n\"                                                                                                                   \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_dir_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.ent_inst (...)\"\r\n\"                                                                                                                   \r\nHash Cond: ((att_dir_hecho.env_id = ba.env_id) AND\r\n(att_dir_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\r\n\"                                                                                                                   \r\nBuffers: shared hit=2976462 read=862256 dirtied=730\"\r\n\"                                                                                                                   \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_dir_hecho  (cost=0.83..1647646.84\r\nrows=7253898 width= (...)\"\r\n\"                                                                                                                         \r\nOutput: att_dir_hecho.att_id, att_dir_hecho.ent_inst_att_str_value,\r\natt_dir_hecho.env_id, 
att_dir_hecho.bus_ent_inst_id, att_dir_hecho (...)\"\r\n\"                                                                                                                         \r\nIndex Cond: (att_dir_hecho.att_id = 1122)\"\r\n\"                                                                                                                         \r\nHeap Fetches: 221189\"\r\n\"                                                                                                                         \r\nBuffers: shared hit=217265 read=76872 dirtied=96\"\r\n\"                                                                                                                   \r\n->  Hash  (cost=4240271.10..4240271.10 rows=886647 width=83) (actual\r\ntime=1355910.157..1355910.157 rows=3362294 loops=1)\"\r\n\"                                                                                                                         \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.ent_inst\r\n(...)\"\r\n\"                                                                                                                         \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 224784kB\"\r\n\"                                                                                                                         \r\nBuffers: shared hit=2759197 read=785384 dirtied=634\"\r\n\"                                                                                                                         \r\n->  Hash Right Join  (cost=2672428.25..4240271.10 rows=886647 width=83)\r\n(actual time=1097647.410..1353630.001 rows=3362294 loops=1)\"\r\n\"                                                                                                                               \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_loc_hecho.ent_inst_att_str_value, att_nro.en (...)\"\r\n\"                                                                                                                               \r\nHash Cond: ((att_loc_hecho.env_id = ba.env_id) AND\r\n(att_loc_hecho.bus_ent_inst_id = ba.bus_ent_inst_id_auto))\"\r\n\"                                                                                                                               \r\nBuffers: shared hit=2759197 read=785384 dirtied=634\"\r\n\"                                                                                                                               \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_loc_hecho  (cost=0.83..1516778.41 rows=66\r\n(...)\"\r\n\"                                                                                                                                     \r\nOutput: att_loc_hecho.att_id, att_loc_hecho.ent_inst_att_str_value,\r\natt_loc_hecho.env_id, att_loc_hecho.bus_ent_inst_id, a (...)\"\r\n\"                                                                                                                                     \r\nIndex Cond: (att_loc_hecho.att_id = 1133)\"\r\n\"                                                                                                                                     \r\nHeap Fetches: 218787\"\r\n\"                                                                                                                                     \r\nBuffers: shared 
hit=332968 read=93935 dirtied=115\"\r\n\"                                                                                                                               \r\n->  Hash  (cost=2659127.72..2659127.72 rows=886647 width=67) (actual\r\ntime=1097642.027..1097642.027 rows=3362294 loops=1)\"\r\n\"                                                                                                                                     \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_nro.ent_inst_att_str_value, att_pad.en (...)\"\r\n\"                                                                                                                                     \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 215839kB\"\r\n\"                                                                                                                                     \r\nBuffers: shared hit=2426229 read=691449 dirtied=519\"\r\n\"                                                                                                                                     \r\n->  Hash Right Join  (cost=1353880.71..2659127.72 rows=886647 width=67)\r\n(actual time=466534.722..1095259.942 rows=3362294  (...)\"\r\n\"                                                                                                                                           \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_nro.ent_inst_att_str_value, att_ (...)\"\r\n\"                                                                                                                                           \r\nHash Cond: ((att_nro.env_id = ba.env_id) AND (att_nro.bus_ent_inst_id =\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                                                                                           \r\nBuffers: shared hit=2426229 read=691449 dirtied=519\"\r\n\"                                                                                                                                           \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_nro  (cost=0.83..1262736.66 r (...)\"\r\n\"                                                                                                                                                 \r\nOutput: att_nro.att_id, att_nro.ent_inst_att_str_value, att_nro.env_id,\r\natt_nro.bus_ent_inst_id, att_nro.reg_s (...)\"\r\n\"                                                                                                                                                 \r\nIndex Cond: (att_nro.att_id = 1135)\"\r\n\"                                                                                                                                                 \r\nHeap Fetches: 156988\"\r\n\"                                                                                                                                                 \r\nBuffers: shared hit=1568458 read=151792 dirtied=285\"\r\n\"                                                                                                                                           \r\n->  Hash  (cost=1340580.18..1340580.18 rows=886647 width=51) (actual\r\ntime=466528.985..466528.985 rows=3362294 loops= (...)\"\r\n\"                                                                                                                                      
           \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_pad.ent_inst_att_str_value (...)\"\r\n\"                                                                                                                                                 \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 204115kB\"\r\n\"                                                                                                                                                 \r\nBuffers: shared hit=857771 read=539657 dirtied=234\"\r\n\"                                                                                                                                                 \r\n->  Hash Right Join  (cost=1265450.85..1340580.18 rows=886647 width=51)\r\n(actual time=464578.744..465343.707 ro (...)\"\r\n\"                                                                                                                                                       \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_pad.ent_inst_att_str (...)\"\r\n\"                                                                                                                                                       \r\nHash Cond: ((att_manz.env_id = ba.env_id) AND (att_manz.bus_ent_inst_id =\r\nba.bus_ent_inst_id_auto))\"\r\n\"                                                                                                                                                       \r\nBuffers: shared hit=857771 read=539657 dirtied=234\"\r\n\"                                                                                                                                                       \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_manz  (cost=0.83. 
(...)\"\r\n\"                                                                                                                                                             \r\nOutput: att_manz.att_id, att_manz.ent_inst_att_str_value, att_manz.env_id,\r\natt_manz.bus_ent_inst_i (...)\"\r\n\"                                                                                                                                                             \r\nIndex Cond: (att_manz.att_id = 1134)\"\r\n\"                                                                                                                                                             \r\nHeap Fetches: 14\"\r\n\"                                                                                                                                                             \r\nBuffers: shared hit=276 read=15\"\r\n\"                                                                                                                                                       \r\n->  Hash  (cost=1252150.32..1252150.32 rows=886647 width=35) (actual\r\ntime=464569.271..464569.271 rows=33 (...)\"\r\n\"                                                                                                                                                             \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_pad.ent_inst_a (...)\"\r\n\"                                                                                                                                                             \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 204114kB\"\r\n\"                                                                                                                                                             \r\nBuffers: shared hit=857495 read=539642 dirtied=234\"\r\n\"                                                                                                                                                             \r\n->  Hash Right Join  (cost=1177020.99..1252150.32 rows=886647 width=35)\r\n(actual time=184587.973..4 (...)\"\r\n\"                                                                                                                                                                   \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1, att_pad.ent_ (...)\"\r\n\"                                                                                                                                                                   \r\nHash Cond: ((att_pad.env_id = ba.env_id) AND (att_pad.bus_ent_inst_id =\r\nba.bus_ent_inst_id_a (...)\"\r\n\"                                                                                                                                                                   \r\nBuffers: shared hit=857495 read=539642 dirtied=234\"\r\n\"                                                                                                                                                                   \r\n->  Index Only Scan using ix_bus_ent_inst_attr_03 on\r\npublic.bus_ent_inst_attribute att_pad   (...)\"\r\n\"                                                                                                                                                                         \r\nOutput: att_pad.att_id, att_pad.ent_inst_att_str_value, att_pad.env_id,\r\natt_pad.bus_en (...)\"\r\n\"                                        
                                                                                                                                 \r\nIndex Cond: (att_pad.att_id = 1136)\"\r\n\"                                                                                                                                                                         \r\nHeap Fetches: 54024\"\r\n\"                                                                                                                                                                         \r\nBuffers: shared hit=334762 read=60835 dirtied=136\"\r\n\"                                                                                                                                                                   \r\n->  Hash  (cost=1163720.45..1163720.45 rows=886647 width=19) (actual\r\ntime=184573.023..184573 (...)\"\r\n\"                                                                                                                                                                         \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1\"\r\n\"                                                                                                                                                                         \r\nBuckets: 4194304 (originally 1048576)  Batches: 1 (originally 1)  Memory\r\nUsage: 200216 (...)\"\r\n\"                                                                                                                                                                         \r\nBuffers: shared hit=522733 read=478807 dirtied=98\"\r\n\"                                                                                                                                                                         \r\n->  Bitmap Heap Scan on public.bus_ent_instance ba \r\n(cost=35242.83..1163720.45 rows=88 (...)\"\r\n\"                                                                                                                                                                               \r\nOutput: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.env_id,\r\nba.att_value_num_1\"\r\n\"                                                                                                                                                                               \r\nRecheck Cond: ((((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND\r\n((ba.att (...)\"\r\n\"                                                                                                                                                                               \r\nHeap Blocks: exact=981056\"\r\n\"                                                                                                                                                                               \r\nBuffers: shared hit=522733 read=478807 dirtied=98\"\r\n\"                                                                                                                                                                               \r\n->  BitmapOr  (cost=35242.83..35242.83 rows=896239 width=0) (actual\r\ntime=33401.6 (...)\"\r\n\"                                                                                                                                                                                     \r\nBuffers: shared hit=43 read=20441\"\r\n\"                                                                                                                                       
                                              \r\n->  Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01  (cost=0.00..\r\n(...)\"\r\n\"                                                                                                                                                                                           \r\nIndex Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\r\n\"                                                                                                                                                                                           \r\nBuffers: shared hit=9 read=5030\"\r\n\"                                                                                                                                                                                     \r\n->  Bitmap Index Scan on ix_bus_ent_instance_atts_namenum_01  (cost=0.00..\r\n(...)\"\r\n\"                                                                                                                                                                                           \r\nIndex Cond: (((ba.bus_ent_inst_name_pre)::text = 'FOTPER'::text) AND (...)\"\r\n\"                                                                                                                                                                                           \r\nBuffers: shared hit=34 read=15411\"\r\n\"                                                        ->  Sort \r\n(cost=125485.07..125573.87 rows=35520 width=13) (actual time=5.831..312.164\r\nrows=3217523 loops=1)\"\r\n\"                                                              Output:\r\ndepto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\r\n\"                                                              Sort Key:\r\ndepto2.env_id, depto2.bus_ent_inst_name_num\"\r\n\"                                                              Sort Method:\r\nquicksort  Memory: 26kB\"\r\n\"                                                              Buffers:\r\nshared hit=7 read=2\"\r\n\"                                                              ->  Bitmap\r\nHeap Scan on public.bus_ent_instance depto2  (cost=971.85..122800.41\r\nrows=35520 width=13) (actual time=5.758..5.776 rows=21 loops=1)\"\r\n\"                                                                    Output:\r\ndepto2.att_value_1, depto2.env_id, depto2.bus_ent_inst_name_num\"\r\n\"                                                                    Recheck\r\nCond: (depto2.bus_ent_id = 1091)\"\r\n\"                                                                    Heap\r\nBlocks: exact=5\"\r\n\"                                                                   \r\nBuffers: shared hit=7 read=2\"\r\n\"                                                                    -> \r\nBitmap Index Scan on ix_bus_ent_instance_01  (cost=0.00..962.97 rows=35520\r\nwidth=0) (actual time=5.733..5.733 rows=21 loops=1)\"\r\n\"                                                                         \r\nIndex Cond: (depto2.bus_ent_id = 1091)\"\r\n\"                                                                         \r\nBuffers: shared hit=2 read=2\"\r\n\"                                            ->  Sort \r\n(cost=125485.07..125573.87 rows=35520 width=13) (actual time=14.418..320.637\r\nrows=3217335 loops=1)\"\r\n\"                                                  Output: loc2.att_value_1,\r\nloc2.env_id, loc2.bus_ent_inst_name_num\"\r\n\"                                       
           Sort Key: loc2.env_id,\r\nloc2.bus_ent_inst_name_num\"\r\n\"                                                  Sort Method: quicksort \r\nMemory: 76kB\"\r\n\"                                                  Buffers: shared hit=112\r\nread=5\"\r\n\"                                                  ->  Bitmap Heap Scan on\r\npublic.bus_ent_instance loc2  (cost=971.85..122800.41 rows=35520 width=13)\r\n(actual time=13.305..13.922 rows=725 loops=1)\"\r\n\"                                                        Output:\r\nloc2.att_value_1, loc2.env_id, loc2.bus_ent_inst_name_num\"\r\n\"                                                        Recheck Cond:\r\n(loc2.bus_ent_id = 1165)\"\r\n\"                                                        Heap Blocks:\r\nexact=110\"\r\n\"                                                        Buffers: shared\r\nhit=112 read=5\"\r\n\"                                                        ->  Bitmap Index\r\nScan on ix_bus_ent_instance_01  (cost=0.00..962.97 rows=35520 width=0)\r\n(actual time=13.262..13.262 rows=725 loops=1)\"\r\n\"                                                              Index Cond:\r\n(loc2.bus_ent_id = 1165)\"\r\n\"                                                              Buffers:\r\nshared hit=2 read=5\"\r\n\"              ->  Hash  (cost=4.35..4.35 rows=235 width=552) (actual\r\ntime=0.175..0.175 rows=235 loops=1)\"\r\n\"                    Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,\r\nbi.att_value_10\"\r\n\"                    Buckets: 1024  Batches: 1  Memory Usage: 19kB\"\r\n\"                    Buffers: local hit=2\"\r\n\"                    ->  Seq Scan on pg_temp_179.temp_table bi \r\n(cost=0.00..4.35 rows=235 width=552) (actual time=0.015..0.055 rows=235\r\nloops=1)\"\r\n\"                          Output: bi.bus_ent_inst_name_num,\r\nbi.att_value_num_7, bi.att_value_10\"\r\n\"                          Buffers: local hit=2\"\r\n\"        ->  Index Scan using ix_bus_ent_inst_attr_01 on\r\npublic.bus_ent_inst_attribute a2  (cost=0.58..237.03 rows=123 width=20)\r\n(actual time=23.167..23.168 rows=1 loops=306)\"\r\n\"              Output: a2.env_id, a2.bus_ent_inst_id, a2.att_id,\r\na2.att_row_id_auto, a2.att_index_id, a2.ent_inst_att_num_value,\r\na2.ent_inst_att_str_value, a2.ent_inst_att_dte_value,\r\na2.ent_inst_att_doc_id, a2.ent_inst_att_tran_1, a2.ent_inst_att_tran_2, a2\r\n(...)\"\r\n\"              Index Cond: ((a2.bus_ent_inst_id = ba.bus_ent_inst_id_auto)\r\nAND (a2.att_id = 1083))\"\r\n\"              Buffers: shared hit=635 read=895\"\r\n\"        SubPlan 1\"\r\n\"          ->  Index Scan using ix_bus_ent_inst_attr_01 on\r\npublic.bus_ent_inst_attribute a  (cost=0.58..141.91 rows=72 width=16)\r\n(actual time=0.646..0.647 rows=1 loops=306)\"\r\n\"                Output: a.ent_inst_att_str_value\"\r\n\"                Index Cond: ((ba.bus_ent_inst_id_auto = a.bus_ent_inst_id)\r\nAND (a.att_id = 1071))\"\r\n\"                Filter: (a.reg_status = 0)\"\r\n\"                Buffers: shared hit=1434 read=31\"\r\n\"        SubPlan 2\"\r\n\"          ->  Index Scan using ix_bus_ent_inst_attr_01 on\r\npublic.bus_ent_inst_attribute t  (cost=0.58..46.15 rows=21 width=16) (actual\r\ntime=0.839..0.841 rows=0 loops=306)\"\r\n\"                Output: t.ent_inst_att_str_value\"\r\n\"                Index Cond: ((ba.bus_ent_inst_id_auto = t.bus_ent_inst_id)\r\nAND (t.att_id = 1141))\"\r\n\"                Filter: (t.reg_status = 0)\"\r\n\"                Buffers: shared hit=1217 read=42\"\r\n\"Planning 
time: 18.329 ms\"\r\n\"Execution time: 1972336.524 ms\"\n\n\n\r\n--\r\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Tue, 7 Apr 2020 11:20:41 +0100", "msg_from": "Michael Christofides <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow query" } ]
[ { "msg_contents": "I have a table with 120 million rows of data spread among 512\npartitioned by hash table. The id column of the table is a uuid, which\nis what is being used for the partition hash and it's also the PK for\nthe table.\n\nThe table has a text column, which also has a btree index on it. A\nselect query on an identical non-partitioned table takes 0.144\nseconds, but on the partitioned table it takes 5.689 seconds.\n\nAm I missing something in my setup? Or is this expected? I do know\nhaving more than 100 partitions in prior versions of PostgreSQL 12\nwould cause a major slow down, but from what I read PostgreSQL 12\naddresses that now?\n\nhttps://www.2ndquadrant.com/en/blog/postgresql-12-partitioning/\n\n\n", "msg_date": "Sun, 5 Apr 2020 13:48:03 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql 12, 512 partition by hash. Slow select" }, { "msg_contents": "\n\nAm 05.04.20 um 19:48 schrieb Arya F:\n> Am I missing something in my setup? Or is this expected? I do know\n> having more than 100 partitions in prior versions of PostgreSQL 12\n> would cause a major slow down, but from what I read PostgreSQL 12\n> addresses that now?\n\nto say more about your problem we need to know more. For instance, the \nexact table definition, the query and the execution plan (explain \nanalyse ...).\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n", "msg_date": "Sun, 5 Apr 2020 20:41:03 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 12, 512 partition by hash. Slow select" }, { "msg_contents": "Arya F <[email protected]> writes:\n> I have a table with 120 million rows of data spread among 512\n> partitioned by hash table. The id column of the table is a uuid, which\n> is what is being used for the partition hash and it's also the PK for\n> the table.\n\n> The table has a text column, which also has a btree index on it. A\n> select query on an identical non-partitioned table takes 0.144\n> seconds, but on the partitioned table it takes 5.689 seconds.\n\n> Am I missing something in my setup? Or is this expected? I do know\n> having more than 100 partitions in prior versions of PostgreSQL 12\n> would cause a major slow down, but from what I read PostgreSQL 12\n> addresses that now?\n\nYou have your expectations calibrated wrongly, I suspect.\n\nYour default expectation with a table with many partitions should be\nthat queries will have to hit all those partitions and it will take a\nlong time. If the query is such that the system can prove that it\nonly needs to access one partition, then it can be fast --- but those\nproof rules are not superlatively bright, and they're especially not\nbright for hash partitioning since that has so little relationship\nto WHERE restrictions that practical queries would use. But if the\nquery WHERE isn't restricting the partitioning key at all, as I suspect\nis the case for your query, then there's certainly no chance of not\nhaving to search all the partitions.\n\nIf you showed us the specific table declaration and query you're\nworking with, it might be possible to offer more than generalities.\n\nIn general though, partitioning should be a last resort when you've\ngot so much data that you have no other choice. I doubt that you\nare there at all with 100M rows, and you are certainly not at a point\nwhere using hundreds of partitions is a good idea. 
They are not\ncost-free, by a very long shot. And when you do partition, you\ntypically need to think hard about what the partitioning rule will be.\nI'm afraid that hash partitioning is more of a shiny trap for novices\nthan it is a useful tool, because it doesn't organize the data into\nmeaningful sub-groups.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 Apr 2020 14:55:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 12, 512 partition by hash. Slow select" }, { "msg_contents": "On Sun, Apr 5, 2020 at 2:55 PM Tom Lane <[email protected]> wrote:\n>\n> Arya F <[email protected]> writes:\n> > I have a table with 120 million rows of data spread among 512\n> > partitioned by hash table. The id column of the table is a uuid, which\n> > is what is being used for the partition hash and it's also the PK for\n> > the table.\n>\n> > The table has a text column, which also has a btree index on it. A\n> > select query on an identical non-partitioned table takes 0.144\n> > seconds, but on the partitioned table it takes 5.689 seconds.\n>\n> > Am I missing something in my setup? Or is this expected? I do know\n> > having more than 100 partitions in prior versions of PostgreSQL 12\n> > would cause a major slow down, but from what I read PostgreSQL 12\n> > addresses that now?\n>\n> You have your expectations calibrated wrongly, I suspect.\n>\n> Your default expectation with a table with many partitions should be\n> that queries will have to hit all those partitions and it will take a\n> long time. If the query is such that the system can prove that it\n> only needs to access one partition, then it can be fast --- but those\n> proof rules are not superlatively bright, and they're especially not\n> bright for hash partitioning since that has so little relationship\n> to WHERE restrictions that practical queries would use. But if the\n> query WHERE isn't restricting the partitioning key at all, as I suspect\n> is the case for your query, then there's certainly no chance of not\n> having to search all the partitions.\n>\n> If you showed us the specific table declaration and query you're\n> working with, it might be possible to offer more than generalities.\n>\n> In general though, partitioning should be a last resort when you've\n> got so much data that you have no other choice. I doubt that you\n> are there at all with 100M rows, and you are certainly not at a point\n> where using hundreds of partitions is a good idea. They are not\n> cost-free, by a very long shot. And when you do partition, you\n> typically need to think hard about what the partitioning rule will be.\n> I'm afraid that hash partitioning is more of a shiny trap for novices\n> than it is a useful tool, because it doesn't organize the data into\n> meaningful sub-groups.\n>\n> regards, tom lane\n\n\nThe table at some point will have more than 1 billion rows, the\ninformation stored is international residential addresses. Trying to\nfigure out a way of spreading the data fairly evenly thought out\nmultiple partitions, but I was unable to come up with a way of\nsplitting the data so that Postgres does not have to search though all\nthe partitions.\n\n\n", "msg_date": "Sun, 5 Apr 2020 15:50:18 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql 12, 512 partition by hash. 
Slow select" }, { "msg_contents": "\n> On Apr 5, 2020, at 2:50 PM, Arya F <[email protected]> wrote:\n> \n> The table at some point will have more than 1 billion rows, the\n> information stored is international residential addresses. Trying to\n> figure out a way of spreading the data fairly evenly thought out\n> multiple partitions, but I was unable to come up with a way of\n> splitting the data so that Postgres does not have to search though all\n> the partitions.\n> \n\n\nIf you have to use partitions, I would split it by country using population for analysis. I understand that address and population are different, but I would expect some correlation. \n\nThe largest 14 countries each have a population of 100 million or more and represent about 62% of the world population. That means the rest of the world should fit easily into another 14 partitions. \n\nIt seems like it could be fairly easily evened out with a little bit of data analysis.\n\nYou could probably refine this to be no more than 20 partitions.\n\nNow China and India could be a problem and need to be split, but I would not do that unless necessary. China and India both have 6 nationally recognized regions that could be used if needed.\n\nNeil\n-\nFairwind Software\nhttps://www.fairwindsoft.com\n\n\n\n\n\n\n", "msg_date": "Sun, 5 Apr 2020 15:48:49 -0500", "msg_from": "Neil <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 12, 512 partition by hash. Slow select" } ]
[ { "msg_contents": "Hi,\n\nI am seeing a performance problem with postgresql v 11.7 on views, and I am wondering if anyone can tell me why or has any suggestion.\n\nA table is created as:\n\nCREATE TABLE \"FBNK_CUSTOMER\" (RECID VARCHAR(255) NOT NULL PRIMARY KEY, XMLRECORD VARCHAR)\n\nAnd contains only 180 rows.\n\nDoing an explain plan on the view created over this gives:\n\nEXPLAIN ANALYZE\nselect RECID from \"V_FBNK_CUSTOMER\"\n\n\nSubquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19014.60 rows=180 width=7) (actual time=459.601..78642.189 rows=180 loops=1)\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=459.600..78641.950 rows=180 loops=1)\nPlanning Time: 0.679 ms\nExecution Time: 78642.616 ms\n\nYet an Explain plan on the underlying table( on select RECID from \"FBNK_CUSTOMER\") gives:\n\nSeq Scan on \"FBNK_CUSTOMER\" (cost=0.00..22.80 rows=180 width=7) (actual time=0.004..0.272 rows=180 loops=1)\nPlanning Time: 0.031 ms\nExecution Time: 0.288 ms\n\nSo you can see that postgresql is not using the primary key index for RECID. THIS IS NOT THE CASE FOR ORACLE where the primary key index is used in the explain plan\n\nThe view is created similar to the following where extractValueJS is a stored procedure that extracts a value from the VARCHAR XMLRECORD column.\n\nCREATE VIEW \"V_FBNK_CUSTOMER\" as\nSELECT a.RECID, a.XMLRECORD \"THE_RECORD\"\n,a.RECID \"CUSTOMER_CODE\"\n,a.RECID \"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD, 1, 0) \"MNEMONIC\"\n,extractValueJS(a.XMLRECORD, 2, 0) \"SHORT_NAME\"\n,extractValueJS(a.XMLRECORD, 2, -1) \"SHORT_NAME_2\"\n, etc\n, extractValueJS(a.XMLRECORD, 179, 9) \"TESTER\"\nFROM\n\"FBNK_CUSTOMER\" a\n\n\nAs well, the problem gets worse as columns are added to the view, irrespective of the SELECTION columns and it seems to perform some activity behind.\n\nCreating an empty view,\n\nCREATE VIEW \"V_FBNK_CUSTOMER_TEST\" as\nSELECT a.RECID, a.XMLRECORD \"THE_RECORD\"\n,a.RECID \"CUSTOMER_CODE\"\n,a.RECID \"CUSTOMER_NO\"\nFROM\n\"FBNK_CUSTOMER\" a ------------- > 3 ms select RECID from \"V_FBNK_CUSTOMER_TEST\"\n\n\nCREATE VIEW \"V_FBNK_CUSTOMER_TEST\" as\nSELECT a.RECID, a.XMLRECORD \"THE_RECORD\"\n,a.RECID \"CUSTOMER_CODE\"\n,a.RECID \"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD, 1, 0) \"MNEMONIC\"\nFROM\n\"FBNK_CUSTOMER\" a ------------------> 54 ms select RECID from \"V_FBNK_CUSTOMER_TEST\"\n\n\nCREATE VIEW \"V_FBNK_CUSTOMER_TEST\" as\nSELECT a.RECID, a.XMLRECORD \"THE_RECORD\"\n,a.RECID \"CUSTOMER_CODE\"\n,a.RECID \"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD, 1, 0) \"MNEMONIC\"\n,extractValueJS(a.XMLRECORD, 2, 0) \"SHORT_NAME\"\nFROM\n\"FBNK_CUSTOMER\" a ------------------------> 118 ms select RECID from \"V_FBNK_CUSTOMER_TEST\"\n\nThe following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. 
Why is that and does anyone know a way around this?\n\nSELECT RECID FROM \"V_FBNK_CUSTOMER\" WHERE \"TESTER\" = '5.00' ORDER BY RECID\n\nSort (cost=19015.06..19015.06 rows=1 width=7) (actual time=102172.500..102172.501 rows=1 loops=1)\n Sort Key: \"V_FBNK_CUSTOMER\".recid\n Sort Method: quicksort Memory: 25kB\n -> Subquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19015.05 rows=1 width=7) (actual time=91242.866..102172.474 rows=1 loops=1)\n Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n Rows Removed by Filter: 179\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=613.455..102172.175 rows=180 loops=1)\nPlanning Time: 1.674 ms\nExecution Time: 102174.015 ms\n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\n\n\n\n\n\n\n\n\n\nHi,\n \nI am seeing a performance problem with postgresql v 11.7 on views, and I am wondering if anyone can tell me why or has any suggestion.\n \nA table is created as:\n \nCREATE TABLE \"FBNK_CUSTOMER\" (RECID VARCHAR(255) NOT NULL\nPRIMARY KEY, XMLRECORD VARCHAR)\n \nAnd contains only 180 rows.\n \nDoing an explain plan on the view created over this gives:\n \nEXPLAIN\nANALYZE\nselect RECID\nfrom\n\"V_FBNK_CUSTOMER\"\n \n \nSubquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19014.60 rows=180 width=7) (actual time=459.601..78642.189 rows=180 loops=1)\n  ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=459.600..78641.950 rows=180 loops=1)\nPlanning Time: 0.679 ms\nExecution Time: 78642.616 ms\n \nYet an Explain plan on the underlying table( on select RECID from “FBNK_CUSTOMER”) gives:\n \nSeq Scan on \"FBNK_CUSTOMER\"  (cost=0.00..22.80 rows=180 width=7) (actual time=0.004..0.272 rows=180 loops=1)\nPlanning Time: 0.031 ms\nExecution Time: 0.288 ms\n \nSo you can see that postgresql is not using the primary key index for RECID. 
\nTHIS IS NOT THE CASE FOR ORACLE where the primary key index is used in the explain plan\n \nThe view is created similar to the following where extractValueJS is a stored procedure that extracts a value from the VARCHAR XMLRECORD column.\n \nCREATE VIEW \"V_FBNK_CUSTOMER\" as\n\nSELECT a.RECID, a.XMLRECORD \"THE_RECORD\"\n,a.RECID \"CUSTOMER_CODE\"\n,a.RECID \"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD, 1, 0) \"MNEMONIC\"\n,extractValueJS(a.XMLRECORD, 2, 0) \"SHORT_NAME\"\n,extractValueJS(a.XMLRECORD, 2, -1) \"SHORT_NAME_2\"\n, etc\n, \nextractValueJS(a.XMLRECORD, 179, 9) \"TESTER\"\nFROM \n\"FBNK_CUSTOMER\" a\n \n \nAs well, the problem gets worse as columns are added to the view,\nirrespective of the SELECTION columns and it seems to perform some activity behind.\n \nCreating an empty view,\n \nCREATE\nVIEW\n\"V_FBNK_CUSTOMER_TEST\"\nas\n\nSELECT a.RECID, a.XMLRECORD\n\"THE_RECORD\"\n,a.RECID\n\"CUSTOMER_CODE\"\n,a.RECID\n\"CUSTOMER_NO\"\nFROM\n\n\"FBNK_CUSTOMER\" a                ------------- > 3 ms  \nselect RECID\nfrom\n\"V_FBNK_CUSTOMER_TEST\"           \n\n \n \nCREATE\nVIEW\n\"V_FBNK_CUSTOMER_TEST\"\nas\n\nSELECT a.RECID, a.XMLRECORD\n\"THE_RECORD\"\n,a.RECID\n\"CUSTOMER_CODE\"\n,a.RECID\n\"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD,\n1,\n0)\n\"MNEMONIC\"\nFROM\n\n\"FBNK_CUSTOMER\" a               ----------------à\n 54 ms select RECID\nfrom\n\"V_FBNK_CUSTOMER_TEST\"\n \n \nCREATE\nVIEW\n\"V_FBNK_CUSTOMER_TEST\"\nas\n\nSELECT a.RECID, a.XMLRECORD\n\"THE_RECORD\"\n,a.RECID\n\"CUSTOMER_CODE\"\n,a.RECID\n\"CUSTOMER_NO\"\n,extractValueJS(a.XMLRECORD,\n1,\n0)\n\"MNEMONIC\"\n,extractValueJS(a.XMLRECORD,\n2,\n0)\n\"SHORT_NAME\"\nFROM\n\n\"FBNK_CUSTOMER\" a             ----------------------à\n 118 ms select RECID\nfrom\n\"V_FBNK_CUSTOMER_TEST\"\n \nThe following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause\n for every table in order to use views because the views seem not to consider the select clause.  Why is that and does anyone know a way around this?\n \nSELECT RECID\nFROM\n\"V_FBNK_CUSTOMER\"\nWHERE\n\"TESTER\" =\n'5.00'\nORDER\nBY RECID\n \nSort  (cost=19015.06..19015.06 rows=1 width=7) (actual time=102172.500..102172.501 rows=1 loops=1)\n  Sort Key: \"V_FBNK_CUSTOMER\".recid\n  Sort Method: quicksort  Memory: 25kB\n  ->  Subquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19015.05 rows=1 width=7) (actual time=91242.866..102172.474 rows=1 loops=1)\n        Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n        Rows Removed by Filter: 179\n        ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=613.455..102172.175 rows=180 loops=1)\nPlanning Time: 1.674 ms\nExecution Time: 102174.015 ms\n \n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized\n and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you\n check this e-mail and any attachments against viruses. 
TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Mon, 6 Apr 2020 14:19:59 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres not using index on views" }, { "msg_contents": "On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n> I am seeing a performance problem with postgresql v 11.7 on views, and I am wondering if anyone can tell me why or has any suggestion.\n> \n> A table is created as:\n> \n> CREATE TABLE \"FBNK_CUSTOMER\" (RECID VARCHAR(255) NOT NULL PRIMARY KEY, XMLRECORD VARCHAR)\n> \n> And contains only 180 rows.\n> \n> Doing an explain plan on the view created over this gives:\n> \n> EXPLAIN ANALYZE\n> select RECID from \"V_FBNK_CUSTOMER\"\n> \n> \n> Subquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19014.60 rows=180 width=7) (actual time=459.601..78642.189 rows=180 loops=1)\n> -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=459.600..78641.950 rows=180 loops=1)\n> \n> Yet an Explain plan on the underlying table( on select RECID from \"FBNK_CUSTOMER\") gives:\n> \n> Seq Scan on \"FBNK_CUSTOMER\" (cost=0.00..22.80 rows=180 width=7) (actual time=0.004..0.272 rows=180 loops=1)\n\nIt still did a seq scan on the table, so I'm not sure what this has to do with\nindex scans ?\n\n> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. Why is that and does anyone know a way around this?\n\nIs there a reason why you don't store the extracted value in its own column ?\nAnd maybe keep it up to date using an insert/update trigger on the xmlrecord\ncolumn.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 Apr 2020 23:59:29 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using index on views" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n>> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. Why is that and does anyone know a way around this?\n\n> Is there a reason why you don't store the extracted value in its own column ?\n\nThe planner seems to be quite well aware that the slower query is going to\nbe slower, since the estimated costs are much higher. Since it's not\nchoosing to optimize into a faster form, I wonder whether it's constrained\nby semantic requirements. 
In particular, I'm suspicious that some of\nthose functions you have in the view are marked \"volatile\", preventing\nthem from being optimized away.\n\nBeyond that guess, though, there's really not enough info here to say.\nThe info we usually ask for to debug slow-query problems is explained\nat\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 01:09:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using index on views" }, { "msg_contents": "On Mon, 2020-04-06 at 14:19 +0000, Rick Vincent wrote:\n> I am seeing a performance problem with postgresql v 11.7 on views, and I am wondering if\n> anyone can tell me why or has any suggestion.\n\nYour account is somewhat confused - too many questions rolled into one\nrant, I would say.\n\nThere are two points that may clear up the case:\n\n- If you have no WHERE clause, a sequential scan of the table is usually\n the best way to do it. The exception is an index only scan if the index\n contains all that is required, but in PostgreSQL you need a recently\n VACUUMed table for that.\n\n- The expensive part in your view is the \"extractValueJS\" function.\n Try to tune that for better performance.\n\nIf any of your problems are not explained by that, please say so.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 07 Apr 2020 08:42:35 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using index on views" }, { "msg_contents": "Hi Justin,\n\nYou said, \" Is there a reason why you don't store the extracted value in its own column ?\"\n\nRV>> It simply is the way the application stores the data. For Oracle we are storing in XML and JSON format, for postgres, due do limitations of XML api, we are storing in VARCHAR. We can't break it out into columns very easily because of the legacy application.\n\nYou said, \"It still did a seq scan on the table, so I'm not sure what this has to do with index scans ?\"\n\nRV>> On Oracle it will use the primary key index because it detects that all of the columns in the select clause are indexable. With Postgres, it might be doing a seq scan but on a 180 rows, a select on the underlying table is many times faster than the same select on the view. 
It seems all of the view columns are being triggered which makes it incredibly slow.\n\nThanks,\nRick\n\n\n\n-----Original Message-----\nFrom: Justin Pryzby <[email protected]>\nSent: Tuesday, April 7, 2020 6:59 AM\nTo: Rick Vincent <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: Re: Postgres not using index on views\n\nOn Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n> I am seeing a performance problem with postgresql v 11.7 on views, and I am wondering if anyone can tell me why or has any suggestion.\n>\n> A table is created as:\n>\n> CREATE TABLE \"FBNK_CUSTOMER\" (RECID VARCHAR(255) NOT NULL PRIMARY KEY,\n> XMLRECORD VARCHAR)\n>\n> And contains only 180 rows.\n>\n> Doing an explain plan on the view created over this gives:\n>\n> EXPLAIN ANALYZE\n> select RECID from \"V_FBNK_CUSTOMER\"\n>\n>\n> Subquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19014.60 rows=180 width=7) (actual time=459.601..78642.189 rows=180 loops=1)\n> -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180\n> width=14575) (actual time=459.600..78641.950 rows=180 loops=1)\n>\n> Yet an Explain plan on the underlying table( on select RECID from \"FBNK_CUSTOMER\") gives:\n>\n> Seq Scan on \"FBNK_CUSTOMER\" (cost=0.00..22.80 rows=180 width=7)\n> (actual time=0.004..0.272 rows=180 loops=1)\n\nIt still did a seq scan on the table, so I'm not sure what this has to do with index scans ?\n\n> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. Why is that and does anyone know a way around this?\n\nIs there a reason why you don't store the extracted value in its own column ?\nAnd maybe keep it up to date using an insert/update trigger on the xmlrecord column.\n\n--\nJustin\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\n\n\n", "msg_date": "Tue, 7 Apr 2020 07:53:46 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgres not using index on views" }, { "msg_contents": "Hi Tom,\n\nThe function is defined as below, so no use of VOLATILE. Let me know if you need any other information. 
I am hoping the below will further clarify the issue.\n\nCREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm INTEGER)\nRETURNS VARCHAR as $$\ndeclare\nsRet text := '';\nnSize int := 0;\nretVal int := 0;\ncVar text[] := regexp_split_to_array(sVar,'');\nidx int := 1;\nnStart int := 0;\nnEnd int := 0;\nbegin\netc...\n return sRet;\nend;\n$$ LANGUAGE plpgsql;\n\nAfter reading you link.....\n\nHere is a better explain plan:\n\nExplain on the table:\n\nEXPLAIN (analyze,BUFFERS)\n select RECID from \"FBNK_CUSTOMER\"\nSeq Scan on \"FBNK_CUSTOMER\" (cost=0.00..22.80 rows=180 width=7) (actual time=0.011..0.073 rows=180 loops=1)\n Buffers: shared hit=21\nPlanning Time: 0.056 ms\nExecution Time: 0.091 ms\n\nExplain on the view:\n\nEXPLAIN (analyze,BUFFERS)\n select RECID from \"V_FBNK_CUSTOMER\"\n\nSubquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19014.60 rows=180 width=7) (actual time=455.727..76837.097 rows=180 loops=1)\n Buffers: shared hit=204\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=455.726..76836.791 rows=180 loops=1)\n Buffers: shared hit=204\nPlanning Time: 1.109 ms\nExecution Time: 76838.505 ms\n\nExplain on view with a column:\n\nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"V_FBNK_CUSTOMER\" WHERE \"TESTER\" = '5.00' ORDER BY RECID\nSort (cost=19015.06..19015.06 rows=1 width=7) (actual time=76033.475..76033.475 rows=1 loops=1)\n Sort Key: \"V_FBNK_CUSTOMER\".recid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=21\n -> Subquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19015.05 rows=1 width=7) (actual time=66521.952..76033.434 rows=1 loops=1)\n Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n Rows Removed by Filter: 179\n Buffers: shared hit=21\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=462.949..76033.096 rows=180 loops=1)\n Buffers: shared hit=21\nPlanning Time: 0.819 ms\nExecution Time: 76033.731 ms\n\nBut on the underlying table and not the view but just using the one view column called TESTER:\n\nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"FBNK_CUSTOMER\" WHERE extractValueJS(XMLRECORD, 179, 9) = '5.00' ORDER BY RECID\nSort (cost=68.26..68.27 rows=1 width=7) (actual time=220.403..220.404 rows=1 loops=1)\n Sort Key: recid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=21\n -> Seq Scan on \"FBNK_CUSTOMER\" (cost=0.00..68.25 rows=1 width=7) (actual time=193.000..220.397 rows=1 loops=1)\n Filter: ((extractvaluejs((xmlrecord)::text, 179, 9))::text = '5.00'::text)\n Rows Removed by Filter: 179\n Buffers: shared hit=21\nPlanning Time: 0.045 ms\nExecution Time: 220.418 ms\n\nOther info:\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='FBNK_CURRENCY';\n\nrelname relpages reltuples relallvisible relkind relnatts relhassubclass reloptions pg_table_size\nFBNK_CURRENCY 6 93 0 r 2 false NULL 81920\n\nVersion is:\nPostgreSQL 11.7 (Debian 11.7-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n\nIt is a postgres docker image.\n\nThanks,\nRick\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]>\nSent: Tuesday, April 7, 2020 7:09 AM\nTo: Justin Pryzby <[email protected]>\nCc: Rick Vincent <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: Re: Postgres not using index on views\n\nJustin Pryzby <[email 
protected]<mailto:[email protected]>> writes:\n> On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n>> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. Why is that and does anyone know a way around this?\n\n> Is there a reason why you don't store the extracted value in its own column ?\n\nThe planner seems to be quite well aware that the slower query is going to be slower, since the estimated costs are much higher. Since it's not choosing to optimize into a faster form, I wonder whether it's constrained by semantic requirements. In particular, I'm suspicious that some of those functions you have in the view are marked \"volatile\", preventing them from being optimized away.\n\nBeyond that guess, though, there's really not enough info here to say.\nThe info we usually ask for to debug slow-query problems is explained at\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n regards, tom lane\n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\n\n\n\n\n\n\n\n\n\n\nHi Tom,\n \nThe function is defined as below, so no use of VOLATILE.  Let me know if you need any other information.  
I am hoping the below will further clarify the issue.\n \nCREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm INTEGER) \nRETURNS VARCHAR as $$\ndeclare\nsRet text := '';\nnSize int := 0;\nretVal int := 0;\ncVar text[] := regexp_split_to_array(sVar,'');\nidx int := 1;\nnStart int := 0;\nnEnd int := 0;\nbegin\netc...\n        return sRet;\nend;\n$$ LANGUAGE plpgsql;\n \nAfter reading you link…..\n \nHere is a better explain plan:\n \nExplain on the table:\n \nEXPLAIN (analyze,BUFFERS)\n select RECID from \"FBNK_CUSTOMER\"\nSeq Scan on \"FBNK_CUSTOMER\"  (cost=0.00..22.80 rows=180 width=7) (actual time=0.011..0.073 rows=180 loops=1)\n  Buffers: shared hit=21\nPlanning Time: 0.056 ms\nExecution Time: 0.091 ms\n \nExplain on the view:\n \nEXPLAIN (analyze,BUFFERS)\n select RECID from \"V_FBNK_CUSTOMER\"\n \nSubquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19014.60 rows=180 width=7) (actual time=455.727..76837.097 rows=180 loops=1)\n  Buffers: shared hit=204\n  ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=455.726..76836.791 rows=180 loops=1)\n        Buffers: shared hit=204\nPlanning Time: 1.109 ms\nExecution Time: 76838.505 ms\n \nExplain on view with a column:\n \nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"V_FBNK_CUSTOMER\" WHERE \"TESTER\"\n= '5.00' ORDER BY RECID\nSort  (cost=19015.06..19015.06 rows=1 width=7) (actual time=76033.475..76033.475 rows=1 loops=1)\n  Sort Key: \"V_FBNK_CUSTOMER\".recid\n  Sort Method: quicksort  Memory: 25kB\n  Buffers: shared hit=21\n  ->  Subquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19015.05 rows=1 width=7) (actual time=66521.952..76033.434 rows=1 loops=1)\n        Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n        Rows Removed by Filter: 179\n        Buffers: shared hit=21\n        ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=462.949..76033.096 rows=180 loops=1)\n              Buffers: shared hit=21\nPlanning Time: 0.819 ms\nExecution Time: 76033.731 ms\n \nBut on the underlying table and not the view but just using the one view column called TESTER:\n \nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"FBNK_CUSTOMER\" WHERE extractValueJS(XMLRECORD,\n179, 9) = '5.00'\nORDER BY RECID\nSort  (cost=68.26..68.27 rows=1 width=7) (actual time=220.403..220.404 rows=1 loops=1)\n  Sort Key: recid\n  Sort Method: quicksort  Memory: 25kB\n  Buffers: shared hit=21\n  ->  Seq Scan on \"FBNK_CUSTOMER\"  (cost=0.00..68.25 rows=1 width=7) (actual time=193.000..220.397 rows=1 loops=1)\n        Filter: ((extractvaluejs((xmlrecord)::text, 179, 9))::text = '5.00'::text)\n        Rows Removed by Filter: 179\n        Buffers: shared hit=21\nPlanning Time: 0.045 ms\nExecution Time: 220.418 ms\n \nOther info:\n \nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM\npg_class WHERE relname='FBNK_CURRENCY';\n \nrelname relpages        reltuples       relallvisible   relkind relnatts        relhassubclass  reloptions      pg_table_size\nFBNK_CURRENCY   6       93      0       r       2       false   NULL    81920\n \nVersion is:\nPostgreSQL 11.7 (Debian 11.7-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n \nIt is a postgres docker image.\n \nThanks,\nRick\n \n-----Original Message-----\n\nFrom: Tom Lane <[email protected]> \n\nSent: Tuesday, April 7, 2020 7:09 AM\n\nTo: Justin Pryzby <[email protected]>\n\nCc: Rick 
Vincent <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\n\nSubject: Re: Postgres not using index on views\n \nJustin Pryzby <[email protected]> writes:\n> On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n>> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. \nWhy is that and does anyone know a way around this?\n \n> Is there a reason why you don't store the extracted value in its own column ?\n \nThe planner seems to be quite well aware that the slower query is going to be slower, since the estimated costs are much higher.  Since it's not choosing to optimize into a faster form, I wonder whether it's constrained by semantic requirements.  In particular,\nI'm suspicious that some of those functions you have in the view are marked \"volatile\", preventing them from being optimized away.\n \nBeyond that guess, though, there's really not enough info here to say.\nThe info we usually ask for to debug slow-query problems is explained at\n \nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n \n                        regards, tom lane\n \n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents\nof this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and\ndo not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Tue, 7 Apr 2020 09:08:06 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgres not using index on views" }, { "msg_contents": "Rick Vincent schrieb am 07.04.2020 um 11:08:\n> The function is defined as below, so no use of VOLATILE.\n\nIf you don't specify anything, the default is VOLATILE.\n\nSo your function *is* volatile.\n \n> CREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm INTEGER)\n> RETURNS VARCHAR as $$\n> declare\n> sRet text := '';\n> nSize int := 0;\n> retVal int := 0;\n> cVar text[] := regexp_split_to_array(sVar,'');\n> idx int := 1;\n> nStart int := 0;\n> nEnd int := 0;\n> begin\n> etc...\n>         return sRet;\n> end;\n> $$ LANGUAGE plpgsql;\n\nYou haven't shown us your actual code, but if you can turn that into a \"language sql\" function (defined as immutable, or at least stable), I would expect it to be way more efficient.\n\nThomas\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:18:04 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using index on views" }, { "msg_contents": "> RV>> It simply is the way the application stores the data. 
For Oracle\n> we are storing in XML and JSON format, for postgres, due do\n> limitations of XML api, we are storing in VARCHAR.\n\nWhy not use JSON in Postgres then?\nPostgres' JSON functions are at least as powerful as Oracle's (if not better in a lot of areas).\n\nWould be interesting to see what XML function/feature from Oracle you can't replicate/migrate to Postgres.\n\nAnother option might be to upgrade to Postgres 12 and define those columns as generated columns as part of the table, rather than a view.\nThen you only pay the performance penalty of the extracValueJS() function when you update the table, not for every select.\n\nThomas\n\n\n", "msg_date": "Tue, 7 Apr 2020 11:24:01 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using index on views" }, { "msg_contents": "Hi,\n\nI was wondering if anyone can explain the below problem. Should a bug be logged for this?\n\nKind regards,\nRick\n\n_____________________________________________\nFrom: Rick Vincent\nSent: Tuesday, April 7, 2020 11:08 AM\nTo: 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: RE: Postgres not using index on views\n\n\nHi Tom,\n\nThe function is defined as below, so no use of VOLATILE. Let me know if you need any other information. I am hoping the below will further clarify the issue.\n\nCREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm INTEGER)\nRETURNS VARCHAR as $$\ndeclare\nsRet text := '';\nnSize int := 0;\nretVal int := 0;\ncVar text[] := regexp_split_to_array(sVar,'');\nidx int := 1;\nnStart int := 0;\nnEnd int := 0;\nbegin\netc...\n return sRet;\nend;\n$$ LANGUAGE plpgsql;\n\nAfter reading you link.....\n\nHere is a better explain plan:\n\nExplain on the table:\n\nEXPLAIN (analyze,BUFFERS)\n select RECID from \"FBNK_CUSTOMER\"\nSeq Scan on \"FBNK_CUSTOMER\" (cost=0.00..22.80 rows=180 width=7) (actual time=0.011..0.073 rows=180 loops=1)\n Buffers: shared hit=21\nPlanning Time: 0.056 ms\nExecution Time: 0.091 ms\n\nExplain on the view:\n\nEXPLAIN (analyze,BUFFERS)\n select RECID from \"V_FBNK_CUSTOMER\"\n\nSubquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19014.60 rows=180 width=7) (actual time=455.727..76837.097 rows=180 loops=1)\n Buffers: shared hit=204\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=455.726..76836.791 rows=180 loops=1)\n Buffers: shared hit=204\nPlanning Time: 1.109 ms\nExecution Time: 76838.505 ms\n\nExplain on view with a column:\n\nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"V_FBNK_CUSTOMER\" WHERE \"TESTER\" = '5.00' ORDER BY RECID\nSort (cost=19015.06..19015.06 rows=1 width=7) (actual time=76033.475..76033.475 rows=1 loops=1)\n Sort Key: \"V_FBNK_CUSTOMER\".recid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=21\n -> Subquery Scan on \"V_FBNK_CUSTOMER\" (cost=0.00..19015.05 rows=1 width=7) (actual time=66521.952..76033.434 rows=1 loops=1)\n Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n Rows Removed by Filter: 179\n Buffers: shared hit=21\n -> Seq Scan on \"FBNK_CUSTOMER\" a (cost=0.00..19012.80 rows=180 width=14575) (actual time=462.949..76033.096 rows=180 loops=1)\n Buffers: shared hit=21\nPlanning Time: 0.819 ms\nExecution Time: 76033.731 ms\n\nBut on the underlying table and not the view but just using the one view column called TESTER:\n\nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM 
\"FBNK_CUSTOMER\" WHERE extractValueJS(XMLRECORD, 179, 9) = '5.00' ORDER BY RECID\nSort (cost=68.26..68.27 rows=1 width=7) (actual time=220.403..220.404 rows=1 loops=1)\n Sort Key: recid\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=21\n -> Seq Scan on \"FBNK_CUSTOMER\" (cost=0.00..68.25 rows=1 width=7) (actual time=193.000..220.397 rows=1 loops=1)\n Filter: ((extractvaluejs((xmlrecord)::text, 179, 9))::text = '5.00'::text)\n Rows Removed by Filter: 179\n Buffers: shared hit=21\nPlanning Time: 0.045 ms\nExecution Time: 220.418 ms\n\nOther info:\n\nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='FBNK_CURRENCY';\n\nrelname relpages reltuples relallvisible relkind relnatts relhassubclass reloptions pg_table_size\nFBNK_CURRENCY 6 93 0 r 2 false NULL 81920\n\nVersion is:\nPostgreSQL 11.7 (Debian 11.7-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n\nIt is a postgres docker image.\n\nThanks,\nRick\n\n-----Original Message-----\nFrom: Tom Lane <[email protected]<mailto:[email protected]>>\nSent: Tuesday, April 7, 2020 7:09 AM\nTo: Justin Pryzby <[email protected]<mailto:[email protected]>>\nCc: Rick Vincent <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>; Manoj Kumar <[email protected]<mailto:[email protected]>>; Herve Aubert <[email protected]<mailto:[email protected]>>\nSubject: Re: Postgres not using index on views\n\nJustin Pryzby <[email protected]<mailto:[email protected]>> writes:\n> On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n>> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider the select clause. Why is that and does anyone know a way around this?\n\n> Is there a reason why you don't store the extracted value in its own column ?\n\nThe planner seems to be quite well aware that the slower query is going to be slower, since the estimated costs are much higher. Since it's not choosing to optimize into a faster form, I wonder whether it's constrained by semantic requirements. In particular, I'm suspicious that some of those functions you have in the view are marked \"volatile\", preventing them from being optimized away.\n\nBeyond that guess, though, there's really not enough info here to say.\nThe info we usually ask for to debug slow-query problems is explained at\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n regards, tom lane\n\n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI was wondering if anyone can explain the below problem.  
Should a bug be logged for this?\n \nKind regards,\nRick\n \n_____________________________________________\nFrom: Rick Vincent \nSent: Tuesday, April 7, 2020 11:08 AM\nTo: 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: RE: Postgres not using index on views\n \n \nHi Tom,\n \nThe function is defined as below, so no use of VOLATILE.  Let me know if you need any other information.  I am hoping the below will further clarify the issue.\n \nCREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm INTEGER) \nRETURNS VARCHAR as $$\ndeclare\nsRet text := '';\nnSize int := 0;\nretVal int := 0;\ncVar text[] := regexp_split_to_array(sVar,'');\nidx int := 1;\nnStart int := 0;\nnEnd int := 0;\nbegin\netc...\n        return sRet;\nend;\n$$ LANGUAGE plpgsql;\n \nAfter reading you link…..\n \nHere is a better explain plan:\n \nExplain on the table:\n \nEXPLAIN (analyze,BUFFERS)\n select RECID from \"FBNK_CUSTOMER\"\nSeq Scan on \"FBNK_CUSTOMER\"  (cost=0.00..22.80 rows=180 width=7) (actual time=0.011..0.073 rows=180 loops=1)\n  Buffers: shared hit=21\nPlanning Time: 0.056 ms\nExecution Time: 0.091 ms\n \nExplain on the view:\n \nEXPLAIN (analyze,BUFFERS)\n select RECID from \"V_FBNK_CUSTOMER\"\n \nSubquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19014.60 rows=180 width=7) (actual time=455.727..76837.097 rows=180 loops=1)\n  Buffers: shared hit=204\n  ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=455.726..76836.791 rows=180 loops=1)\n        Buffers: shared hit=204\nPlanning Time: 1.109 ms\nExecution Time: 76838.505 ms\n \nExplain on view with a column:\n \nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"V_FBNK_CUSTOMER\" WHERE \"TESTER\"\n= '5.00' ORDER BY RECID\nSort  (cost=19015.06..19015.06 rows=1 width=7) (actual time=76033.475..76033.475 rows=1 loops=1)\n  Sort Key: \"V_FBNK_CUSTOMER\".recid\n  Sort Method: quicksort  Memory: 25kB\n  Buffers: shared hit=21\n  ->  Subquery Scan on \"V_FBNK_CUSTOMER\"  (cost=0.00..19015.05 rows=1 width=7) (actual time=66521.952..76033.434 rows=1 loops=1)\n        Filter: ((\"V_FBNK_CUSTOMER\".\"TESTER\")::text = '5.00'::text)\n        Rows Removed by Filter: 179\n        Buffers: shared hit=21\n        ->  Seq Scan on \"FBNK_CUSTOMER\" a  (cost=0.00..19012.80 rows=180 width=14575) (actual time=462.949..76033.096 rows=180 loops=1)\n              Buffers: shared hit=21\nPlanning Time: 0.819 ms\nExecution Time: 76033.731 ms\n \nBut on the underlying table and not the view but just using the one view column called TESTER:\n \nEXPLAIN (analyze,BUFFERS)\n SELECT RECID FROM \"FBNK_CUSTOMER\" WHERE extractValueJS(XMLRECORD,\n179, 9) = '5.00'\nORDER BY RECID\nSort  (cost=68.26..68.27 rows=1 width=7) (actual time=220.403..220.404 rows=1 loops=1)\n  Sort Key: recid\n  Sort Method: quicksort  Memory: 25kB\n  Buffers: shared hit=21\n  ->  Seq Scan on \"FBNK_CUSTOMER\"  (cost=0.00..68.25 rows=1 width=7) (actual time=193.000..220.397 rows=1 loops=1)\n        Filter: ((extractvaluejs((xmlrecord)::text, 179, 9))::text = '5.00'::text)\n        Rows Removed by Filter: 179\n        Buffers: shared hit=21\nPlanning Time: 0.045 ms\nExecution Time: 220.418 ms\n \nOther info:\n \nSELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM\npg_class WHERE relname='FBNK_CURRENCY';\n \nrelname relpages        reltuples       relallvisible   relkind 
relnatts        relhassubclass  reloptions      pg_table_size\nFBNK_CURRENCY   6       93      0       r       2       false   NULL    81920\n \nVersion is:\nPostgreSQL 11.7 (Debian 11.7-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n \nIt is a postgres docker image.\n \nThanks,\nRick\n \n-----Original Message-----\n\nFrom: Tom Lane <[email protected]>\n\n\nSent: Tuesday, April 7, 2020 7:09 AM\n\nTo: Justin Pryzby <[email protected]>\n\nCc: Rick Vincent <[email protected]>; [email protected]; Manoj Kumar <[email protected]>;\nHerve Aubert <[email protected]>\n\nSubject: Re: Postgres not using index on views\n \nJustin Pryzby <[email protected]> writes:\n> On Mon, Apr 06, 2020 at 02:19:59PM +0000, Rick Vincent wrote:\n>> The following query takes an extremely long time for only 180 rows, and what this means is that we would have to index anything appearing in the where clause for every table in order to use views because the views seem not to consider\nthe select clause.  Why is that and does anyone know a way around this?\n \n> Is there a reason why you don't store the extracted value in its own column ?\n \nThe planner seems to be quite well aware that the slower query is going to be slower, since the estimated costs are much higher.  Since it's not choosing to optimize into a faster form, I wonder whether it's constrained by semantic\nrequirements.  In particular, I'm suspicious that some of those functions you have in the view are marked \"volatile\", preventing them from being optimized away.\n \nBeyond that guess, though, there's really not enough info here to say.\nThe info we usually ask for to debug slow-query problems is explained at\n \nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n \n                        regards, tom lane\n \n\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents\nof this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and\ndo not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Fri, 17 Apr 2020 14:28:02 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgres not using index on views" }, { "msg_contents": "On Friday, April 17, 2020, Rick Vincent <[email protected]> wrote:\n\n> Hi,\n>\n> I was wondering if anyone can explain the below problem. Should a bug be\n> logged for this?\n>\n> Kind regards,\n> Rick\n>\n> _____________________________________________\n> *From:* Rick Vincent\n> *Sent:* Tuesday, April 7, 2020 11:08 AM\n> *To:* 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\n> *Cc:* [email protected]; Manoj Kumar <\n> [email protected]>; Herve Aubert <[email protected]>\n> *Subject:* RE: Postgres not using index on views\n>\n>\n> Hi Tom,\n>\n> The function is defined as below, so no use of VOLATILE. Let me know if\n> you need any other information. 
I am hoping the below will further clarify\n> the issue.\n>\n>\n\nIIUC as Tom wrote you have volatile functions (implied/default as Thomas\nwrote) attached to view column outputs and the planner will not optimize\nthose away.\n\nMark your function immutable (assuming it is) and retry your experiment\nwith the where clause query.\n\nDavid J.\n\nOn Friday, April 17, 2020, Rick Vincent <[email protected]> wrote:\n\n\nHi,\n \nI was wondering if anyone can explain the below problem.  Should a bug be logged for this?\n \nKind regards,\nRick\n \n_____________________________________________\nFrom: Rick Vincent \nSent: Tuesday, April 7, 2020 11:08 AM\nTo: 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: RE: Postgres not using index on views\n \n \nHi Tom,\n \nThe function is defined as below, so no use of VOLATILE.  Let me know if you need any other information.  I am hoping the below will further clarify the issue.\n IIUC as Tom wrote you have volatile functions (implied/default as Thomas wrote) attached to view column outputs and the planner will not optimize those away.Mark your function immutable (assuming it is) and retry your experiment with the where clause query. David J.", "msg_date": "Fri, 17 Apr 2020 07:55:29 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres not using index on views" }, { "msg_contents": "Hi David,\r\n\r\nOh, okay…I missed that implied part. Will try it and post back.\r\n\r\nThanks,\r\nRick\r\n\r\nFrom: David G. Johnston <[email protected]>\r\nSent: Friday, April 17, 2020 4:55 PM\r\nTo: Rick Vincent <[email protected]>\r\nCc: Tom Lane <[email protected]>; Justin Pryzby <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\r\nSubject: Postgres not using index on views\r\n\r\nOn Friday, April 17, 2020, Rick Vincent <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nI was wondering if anyone can explain the below problem. Should a bug be logged for this?\r\n\r\nKind regards,\r\nRick\r\n\r\n_____________________________________________\r\nFrom: Rick Vincent\r\nSent: Tuesday, April 7, 2020 11:08 AM\r\nTo: 'Tom Lane' <[email protected]<mailto:[email protected]>>; Justin Pryzby <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; Manoj Kumar <[email protected]<mailto:[email protected]>>; Herve Aubert <[email protected]<mailto:[email protected]>>\r\nSubject: RE: Postgres not using index on views\r\n\r\n\r\nHi Tom,\r\n\r\nThe function is defined as below, so no use of VOLATILE. Let me know if you need any other information. I am hoping the below will further clarify the issue.\r\n\r\n\r\nIIUC as Tom wrote you have volatile functions (implied/default as Thomas wrote) attached to view column outputs and the planner will not optimize those away.\r\n\r\nMark your function immutable (assuming it is) and retry your experiment with the where clause query.\r\n\r\nDavid J.\r\n\r\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. 
Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\r\n\n\n\n\n\n\n\n\n\nHi David,\n \nOh, okay…I missed that implied part.  Will try it and post back.\n \nThanks,\nRick\n \nFrom: David G. Johnston <[email protected]>\r\n\nSent: Friday, April 17, 2020 4:55 PM\nTo: Rick Vincent <[email protected]>\nCc: Tom Lane <[email protected]>; Justin Pryzby <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: Postgres not using index on views\n \nOn Friday, April 17, 2020, Rick Vincent <[email protected]> wrote:\n\n\n\nHi,\n\n\n \n\n\nI was wondering if anyone can explain the below problem.  Should a bug be logged for this?\n\n\n \n\n\nKind regards,\n\n\nRick\n\n\n \n\n\n_____________________________________________\nFrom: Rick Vincent \nSent: Tuesday, April 7, 2020 11:08 AM\nTo: 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: RE: Postgres not using index on views\n\n\n \n\n\n \n\n\nHi Tom,\n\n\n \n\n\nThe function is defined as below, so no use of VOLATILE.  Let me know if you need any other information.  I am hoping the below will further clarify the issue.\n\n\n \n\n\n\n\n \n\n\nIIUC as Tom wrote you have volatile functions (implied/default as Thomas wrote) attached to view column outputs and the planner will not optimize those away.\n\n\n \n\n\nMark your function immutable (assuming it is) and retry your experiment with the where clause query. \n\n\n \n\n\nDavid J.\n\n\n\r\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized\r\n and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you\r\n check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Fri, 17 Apr 2020 15:12:14 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgres not using index on views" }, { "msg_contents": "Dear all,\r\n\r\nChanging the function signature to IMMUTABLE worked like a dream. No issue now. Sorry for my confusion on VOLATILE being created as the default. Thanks to everyone for your help!\r\n\r\nKind regards,\r\nRick Vincent\r\n\r\nFrom: David G. Johnston <[email protected]>\r\nSent: Friday, April 17, 2020 4:55 PM\r\nTo: Rick Vincent <[email protected]>\r\nCc: Tom Lane <[email protected]>; Justin Pryzby <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\r\nSubject: Postgres not using index on views\r\n\r\nOn Friday, April 17, 2020, Rick Vincent <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nI was wondering if anyone can explain the below problem. 
Should a bug be logged for this?\r\n\r\nKind regards,\r\nRick\r\n\r\n_____________________________________________\r\nFrom: Rick Vincent\r\nSent: Tuesday, April 7, 2020 11:08 AM\r\nTo: 'Tom Lane' <[email protected]<mailto:[email protected]>>; Justin Pryzby <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>; Manoj Kumar <[email protected]<mailto:[email protected]>>; Herve Aubert <[email protected]<mailto:[email protected]>>\r\nSubject: RE: Postgres not using index on views\r\n\r\n\r\nHi Tom,\r\n\r\nThe function is defined as below, so no use of VOLATILE. Let me know if you need any other information. I am hoping the below will further clarify the issue.\r\n\r\n\r\nIIUC as Tom wrote you have volatile functions (implied/default as Thomas wrote) attached to view column outputs and the planner will not optimize those away.\r\n\r\nMark your function immutable (assuming it is) and retry your experiment with the where clause query.\r\n\r\nDavid J.\r\n\r\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.\r\n\n\n\n\n\n\n\n\n\nDear all,\n \nChanging the function signature to IMMUTABLE worked like a dream.  No issue now.  Sorry for my confusion on VOLATILE being created as the default.  Thanks to\r\n everyone for your help!\n \nKind regards,\nRick Vincent\n \nFrom: David G. Johnston <[email protected]>\r\n\nSent: Friday, April 17, 2020 4:55 PM\nTo: Rick Vincent <[email protected]>\nCc: Tom Lane <[email protected]>; Justin Pryzby <[email protected]>; [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: Postgres not using index on views\n \nOn Friday, April 17, 2020, Rick Vincent <[email protected]> wrote:\n\n\n\nHi,\n\n\n \n\n\nI was wondering if anyone can explain the below problem.  Should a bug be logged for this?\n\n\n \n\n\nKind regards,\n\n\nRick\n\n\n \n\n\n_____________________________________________\nFrom: Rick Vincent \nSent: Tuesday, April 7, 2020 11:08 AM\nTo: 'Tom Lane' <[email protected]>; Justin Pryzby <[email protected]>\nCc: [email protected]; Manoj Kumar <[email protected]>; Herve Aubert <[email protected]>\nSubject: RE: Postgres not using index on views\n\n\n \n\n\n \n\n\nHi Tom,\n\n\n \n\n\nThe function is defined as below, so no use of VOLATILE.  Let me know if you need any other information.  I am hoping the below will further clarify the issue.\n\n\n \n\n\n\n\n \n\n\nIIUC as Tom wrote you have volatile functions (implied/default as Thomas wrote) attached to view column outputs and the planner will not optimize those away.\n\n\n \n\n\nMark your function immutable (assuming it is) and retry your experiment with the where clause query. \n\n\n \n\n\nDavid J.\n\n\n\r\nThe information in this e-mail and any attachments is confidential and may be legally privileged. It is intended solely for the addressee or addressees. 
Any use or disclosure of the contents of this e-mail/attachments by a not intended recipient is unauthorized\r\n and may be unlawful. If you have received this e-mail in error please notify the sender. Please note that any views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of TEMENOS. We recommend that you\r\n check this e-mail and any attachments against viruses. TEMENOS accepts no liability for any damage caused by any malicious code or virus transmitted by this e-mail.", "msg_date": "Mon, 20 Apr 2020 11:10:46 +0000", "msg_from": "Rick Vincent <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Postgres not using index on views" } ]
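The fix that resolved this thread, declaring the extraction function IMMUTABLE, can be applied without recreating the function. A minimal sketch is below; it assumes the function body really is deterministic (no table reads, no side effects), which is what the resolution above implies but the elided body does not show.

-- Apply the volatility change in place (argument types taken from the
-- signature quoted earlier in the thread):
ALTER FUNCTION extractValueJS(text, integer, integer) IMMUTABLE;

-- Equivalent effect when recreating the function: end the definition with
--   $$ LANGUAGE plpgsql IMMUTABLE;
-- instead of accepting the default volatility (VOLATILE).

Once the function is no longer volatile, the planner is free to drop the view columns that the outer query never references, which is why the same WHERE clause on the view and on the base table then produce comparable timings.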
[ { "msg_contents": "hi folks,\n\nwe are looking for a PostgreSQL DBA to help us in tuning our database.\n\nCould you please recommend somebody in your network?\nthanks,\n--daya--\n\nhi folks,we are looking for a PostgreSQL DBA to help us in tuning our database. Could you please recommend somebody in your network?thanks,--daya--", "msg_date": "Tue, 7 Apr 2020 17:20:47 +0530", "msg_from": "daya airody <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL DBA consulting" }, { "msg_contents": "On Tue, Apr 07, 2020 at 05:20:47PM +0530, daya airody wrote:\n> we are looking for a PostgreSQL DBA to help us in tuning our database.\n\nYou can start here:\nhttps://www.postgresql.org/support/professional_support/\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 7 Apr 2020 06:56:55 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL DBA consulting" }, { "msg_contents": "https://www.cybertec-postgresql.com/\n\n\nWith Warm Regards,\nAmol P. Tarte\nProject Manager,\nRajdeep InfoTechno Pvt. Ltd.\nVisit us at http://it.rajdeepgroup.com\n\nOn Tue, Apr 7, 2020, 5:21 PM daya airody <[email protected]> wrote:\n\n> hi folks,\n>\n> we are looking for a PostgreSQL DBA to help us in tuning our database.\n>\n> Could you please recommend somebody in your network?\n> thanks,\n> --daya--\n>\n>\n>\n\nhttps://www.cybertec-postgresql.com/With Warm Regards,Amol P. TarteProject Manager,Rajdeep InfoTechno Pvt. Ltd.Visit us at http://it.rajdeepgroup.comOn Tue, Apr 7, 2020, 5:21 PM daya airody <[email protected]> wrote:hi folks,we are looking for a PostgreSQL DBA to help us in tuning our database. Could you please recommend somebody in your network?thanks,--daya--", "msg_date": "Mon, 13 Apr 2020 20:34:06 +0530", "msg_from": "Amol Tarte <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL DBA consulting" } ]
[ { "msg_contents": "Hi,\n\nI have performance issues which I never seen before in my 20+ years\nexperience with PostgreSQL.\n\nWith database on dedicated server I encountered unusual load profile:\nmulti thread (200 connections static size pool via pgbouncer) insert only\ninto single table around 15.000 insert/s.\n\nUsually insert took 0.025ms and amount active backends (via\npg_stat_activity) usually stay in 1-5-10 range.\nBut every so while (few times per minute actually) number of active backend\ngo up to all 200 allowed connections.\nWhich lead to serious latency in latency sensitive load.\n\nNo problem with IO latency or CPU usage found during performance analyze.\nsyncronous_commit = off\n\nTo analyze what going with locks I run\n\\o ~/tmp/watch_events.log\nselect wait_event_type,wait_event,count(*) from pg_stat_activity where\nstate='active' and backend_type='client backend' group by 1,2 order by 3\ndesc\n\\watch 0.1\n\nNormal output when all goes well:\n wait_event_type | wait_event | count\n-----------------+------------+-------\n Client | ClientRead | 5\n | | 4\n(few processes running queries and few processes doing network IO)\n\nBad case (few times per minute, huge latency peak, some inserts took up to\n100ms to run):\n wait_event_type | wait_event | count\n-----------------+----------------+-------\n LWLock | buffer_content | 178\n LWLock | XidGenLock | 21\n IO | SLRUSync | 1\n | | 1\n\nSo there are almost all backends waiting on buffer_content lock and some\nbackends waiting for XidGenLock .\nAnd always one backend in SLRUSync.\n\nIf anyone can have any good idea whats going on in that case and how I can\nfix it - any ideas welcome.\nSo far I out of ideas.\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно\nкогда я так делаю ещё раз?\"\n\nHi,I have performance issues which I never seen before in my 20+ years experience with PostgreSQL.With database on dedicated server I encountered unusual load profile:multi thread (200 connections static size pool via pgbouncer) insert only into single table around 15.000 insert/s.Usually insert took 0.025ms and amount active backends (via pg_stat_activity) usually stay in 1-5-10 range.But every so while (few times per minute actually) number of active backend go up to all 200 allowed connections.Which lead to serious latency in latency sensitive load.No problem with IO latency or CPU usage found during performance analyze.syncronous_commit = offTo analyze what going with locks I run \\o ~/tmp/watch_events.logselect wait_event_type,wait_event,count(*) from pg_stat_activity where state='active' and backend_type='client backend' group by 1,2 order by 3 desc\\watch 0.1Normal output when all goes well: wait_event_type | wait_event | count -----------------+------------+------- Client          | ClientRead |     5                 |            |     4(few processes running queries and few processes doing network IO)Bad case (few times per minute, huge latency peak, some inserts took up to 100ms to run): wait_event_type |   wait_event   | count-----------------+----------------+------- LWLock          | buffer_content |   178 LWLock          | XidGenLock     |    21 IO              | SLRUSync       |     1                 |                |     1So there are almost all backends waiting 
on buffer_content lock and some backends waiting for XidGenLock .And always one backend in SLRUSync.If anyone can have any good idea whats going on in that case and how I can fix it - any ideas welcome.So far I out of ideas.-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone RU: +7  985 433 0000Phone UA: +380 99 143 0000Phone AU: +61  45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.boguk\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно когда я так делаю ещё раз?\"", "msg_date": "Fri, 10 Apr 2020 00:51:03 +1000", "msg_from": "Maxim Boguk <[email protected]>", "msg_from_op": true, "msg_subject": "High insert rate server, unstable insert latency and load peaks with\n buffer_content and XidGenLock LWlocks with Postgresql 12 version" }, { "msg_contents": "On Fri, Apr 10, 2020 at 12:51:03AM +1000, Maxim Boguk wrote:\n> With database on dedicated server I encountered unusual load profile:\n> multi thread (200 connections static size pool via pgbouncer) insert only\n> into single table around 15.000 insert/s.\n> \n> Usually insert took 0.025ms and amount active backends (via\n> pg_stat_activity) usually stay in 1-5-10 range.\n> But every so while (few times per minute actually) number of active backend\n> go up to all 200 allowed connections.\n> Which lead to serious latency in latency sensitive load.\n> \n> No problem with IO latency or CPU usage found during performance analyze.\n> syncronous_commit = off\n\nCan you share other settings ? shared_buffers, checkpoint_*, bgwriter_* and\nmax_wal_size ? And version()\n\n> And always one backend in SLRUSync.\n> \n> If anyone can have any good idea whats going on in that case and how I can\n> fix it - any ideas welcome.\n> So far I out of ideas.\n\nThis might be useful: pg_stat_bgwriter view.\n\nI suggest to follow others advice and make a cronjob to do this every ~5 minutes:\n| INSERT INTO jrn_pg_stat_bgwriter SELECT now(), * FROM pg_stat_bgwriter;\nand write a window function to show values/time, or rrd graphs or whatever.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Apr 2020 10:16:33 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High insert rate server, unstable insert latency and load peaks\n with buffer_content and XidGenLock LWlocks with Postgresql 12 version" }, { "msg_contents": "On Fri, Apr 10, 2020 at 1:16 AM Justin Pryzby <[email protected]> wrote:\n\n> On Fri, Apr 10, 2020 at 12:51:03AM +1000, Maxim Boguk wrote:\n> > With database on dedicated server I encountered unusual load profile:\n> > multi thread (200 connections static size pool via pgbouncer) insert only\n> > into single table around 15.000 insert/s.\n> >\n> > Usually insert took 0.025ms and amount active backends (via\n> > pg_stat_activity) usually stay in 1-5-10 range.\n> > But every so while (few times per minute actually) number of active\n> backend\n> > go up to all 200 allowed connections.\n> > Which lead to serious latency in latency sensitive load.\n> >\n> > No problem with IO latency or CPU usage found during performance analyze.\n> > syncronous_commit = off\n>\n> Can you share other settings ? shared_buffers, checkpoint_*, bgwriter_*\n> and\n> max_wal_size ? 
And version()\n>\n\n\nversion - PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg18.04+1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0,\n64-bit\nshared_buffers 140GB\ncheckpoint_timeout 1h\ncheckpoint_flush_after 0\ncheckpoint_completion_target 0.9\nbgwriter_delay 10ms\nbgwriter_flush_after 0\nbgwriter_lru_maxpages 10000\nbgwriter_lru_multiplier 10\nmax_wal_size 128GB\n\nCheckpoints happens every 1h and lag spiked doesn't depend on checkpointer\nactivity.\nbuffers_checkpoint 92% writes, buffers_clean 2% writes, buffers_backend 6%\nwrites (over course of 5 minutes).\nNothing especially suspicious on graphical monitoring of these values as\nwell.\n\n\n-- \nMaxim Boguk\nSenior Postgresql DBA\nhttps://dataegret.com/\n\nPhone RU: +7 985 433 0000\nPhone UA: +380 99 143 0000\nPhone AU: +61 45 218 5678\n\nLinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b\nSkype: maxim.boguk\n\n\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно\nкогда я так делаю ещё раз?\"\n\nOn Fri, Apr 10, 2020 at 1:16 AM Justin Pryzby <[email protected]> wrote:On Fri, Apr 10, 2020 at 12:51:03AM +1000, Maxim Boguk wrote:\n> With database on dedicated server I encountered unusual load profile:\n> multi thread (200 connections static size pool via pgbouncer) insert only\n> into single table around 15.000 insert/s.\n> \n> Usually insert took 0.025ms and amount active backends (via\n> pg_stat_activity) usually stay in 1-5-10 range.\n> But every so while (few times per minute actually) number of active backend\n> go up to all 200 allowed connections.\n> Which lead to serious latency in latency sensitive load.\n> \n> No problem with IO latency or CPU usage found during performance analyze.\n> syncronous_commit = off\n\nCan you share other settings ?  shared_buffers, checkpoint_*, bgwriter_* and\nmax_wal_size ?  And version()version -  PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bitshared_buffers 140GBcheckpoint_timeout  1hcheckpoint_flush_after 0checkpoint_completion_target 0.9bgwriter_delay 10msbgwriter_flush_after 0bgwriter_lru_maxpages 10000bgwriter_lru_multiplier 10max_wal_size 128GBCheckpoints happens every 1h and lag spiked doesn't depend on checkpointer activity.buffers_checkpoint 92% writes, buffers_clean 2% writes, buffers_backend 6% writes (over course of 5 minutes).Nothing especially suspicious on graphical monitoring of these values as well.-- Maxim BogukSenior Postgresql DBAhttps://dataegret.com/Phone RU: +7  985 433 0000Phone UA: +380 99 143 0000Phone AU: +61  45 218 5678LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1bSkype: maxim.boguk\"Доктор, вы мне советовали так не делать, но почему мне по-прежнему больно когда я так делаю ещё раз?\"", "msg_date": "Fri, 10 Apr 2020 01:29:29 +1000", "msg_from": "Maxim Boguk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High insert rate server, unstable insert latency and load peaks\n with buffer_content and XidGenLock LWlocks with Postgresql 12 version" } ]
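The monitoring idea suggested above (journal pg_stat_bgwriter every few minutes and difference the counters) can be sketched as follows. The journal table name is the one proposed in the thread; the delta query is only one possible way to present the counters over time.

-- Create the journal table once, mirroring pg_stat_bgwriter plus a capture timestamp:
CREATE TABLE jrn_pg_stat_bgwriter AS
  SELECT now() AS captured_at, * FROM pg_stat_bgwriter;

-- Run from cron roughly every 5 minutes:
INSERT INTO jrn_pg_stat_bgwriter SELECT now(), * FROM pg_stat_bgwriter;

-- Per-interval deltas via window functions:
SELECT captured_at,
       buffers_checkpoint - lag(buffers_checkpoint) OVER w AS d_buffers_checkpoint,
       buffers_clean      - lag(buffers_clean)      OVER w AS d_buffers_clean,
       buffers_backend    - lag(buffers_backend)    OVER w AS d_buffers_backend
  FROM jrn_pg_stat_bgwriter
WINDOW w AS (ORDER BY captured_at)
 ORDER BY captured_at;

Plotting these deltas against the latency spikes makes it easier to rule the checkpointer and background writer in or out as the source of the buffer_content contention.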
[ { "msg_contents": "Good morning,\n\nUbuntu 16.04.6 LTS\nPostgreSQL 9.6.5\n\nOn one of our database servers, we're regularly seeing kswapd at the top of\n\"top\" output, regularly using over 50 %CPU. We should have well over 80GB\nof available memory according to \"free -m\".\n\n# free -m\n total used free shared buff/cache\navailable\nMem: 125910 41654 820 857 83435\n82231\nSwap: 511 448 63\n\nWe've already got vm.swappiness and vm.zone_reclaim_mode set to 0, and NUMA\nis disabled from what I can see:\n\n# dmesg | grep -i numa\n[ 0.000000] No NUMA configuration found\n\nWe are using HugePages and things look good there as well. Curious what\nwould be causing kswapd to run hot like it is, or if it is a red herring as\nI look into high CPU usage on this box (although, again, it is the\nsingle-highest CPU user).\n\n-- \nDon Seiler\nwww.seiler.us\n\nGood morning,Ubuntu 16.04.6 LTSPostgreSQL 9.6.5On one of our database servers, we're regularly seeing kswapd at the top of \"top\" output, regularly using over 50 %CPU. We should have well over 80GB of available memory according to \"free -m\".# free -m              total        used        free      shared  buff/cache   availableMem:         125910       41654         820         857       83435       82231Swap:           511         448          63We've already got vm.swappiness and vm.zone_reclaim_mode set to 0, and NUMA is disabled from what I can see:# dmesg | grep -i numa[    0.000000] No NUMA configuration foundWe are using HugePages and things look good there as well. Curious what would be causing kswapd to run hot like it is, or if it is a red herring as I look into high CPU usage on this box (although, again, it is the single-highest CPU user).-- Don Seilerwww.seiler.us", "msg_date": "Mon, 13 Apr 2020 09:34:23 -0500", "msg_from": "Don Seiler <[email protected]>", "msg_from_op": true, "msg_subject": "High kswapd" }, { "msg_contents": "On Mon, Apr 13, 2020 at 09:34:23AM -0500, Don Seiler wrote:\n> Good morning,\n> \n> Ubuntu 16.04.6 LTS\n> PostgreSQL 9.6.5\n> \n> On one of our database servers, we're regularly seeing kswapd at the top of\n> \"top\" output, regularly using over 50 %CPU. We should have well over 80GB\n> of available memory according to \"free -m\".\n\nDo you have THP enabled? 
Or KSM ?\n\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag\n\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Apr 2020 09:42:54 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High kswapd" }, { "msg_contents": "On Mon, Apr 13, 2020 at 9:42 AM Justin Pryzby <[email protected]> wrote:\n\n>\n> tail /sys/kernel/mm/ksm/run\n> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n> /sys/kernel/mm/transparent_hugepage/enabled\n> /sys/kernel/mm/transparent_hugepage/defrag\n>\n\n# tail /sys/kernel/mm/ksm/run\n/sys/kernel/mm/transparent_hugepage/khugepaged/defrag\n/sys/kernel/mm/transparent_hugepage/enabled\n/sys/kernel/mm/transparent_hugepage/defrag\n==> /sys/kernel/mm/ksm/run <==\n0\n\n==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n1\n\n==> /sys/kernel/mm/transparent_hugepage/enabled <==\nalways madvise [never]\n\n==> /sys/kernel/mm/transparent_hugepage/defrag <==\nalways madvise [never]\n\n\n-- \nDon Seiler\nwww.seiler.us\n\nOn Mon, Apr 13, 2020 at 9:42 AM Justin Pryzby <[email protected]> wrote:\ntail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag# tail /sys/kernel/mm/ksm/run /sys/kernel/mm/transparent_hugepage/khugepaged/defrag /sys/kernel/mm/transparent_hugepage/enabled /sys/kernel/mm/transparent_hugepage/defrag==> /sys/kernel/mm/ksm/run <==0==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==1==> /sys/kernel/mm/transparent_hugepage/enabled <==always madvise [never]==> /sys/kernel/mm/transparent_hugepage/defrag <==always madvise [never]-- Don Seilerwww.seiler.us", "msg_date": "Mon, 13 Apr 2020 09:46:22 -0500", "msg_from": "Don Seiler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High kswapd" }, { "msg_contents": "On Mon, Apr 13, 2020 at 09:46:22AM -0500, Don Seiler wrote:\n> ==> /sys/kernel/mm/ksm/run <==\n> 0\n\nWas it off to begin with ?\nIf not, you can set it to \"2\" to \"unshare\" pages.\n\n> ==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n> 1\n\nSo I'd suggest trying with this disabled.\n\nI don't know if I ever fully understood the problem, but it sounds like at\nleast in your case it's related to large shared_buffers, and hugepages, which\ncannot be swapped out.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Apr 2020 09:58:52 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High kswapd" }, { "msg_contents": "On Mon, Apr 13, 2020 at 9:58 AM Justin Pryzby <[email protected]> wrote:\n\n> On Mon, Apr 13, 2020 at 09:46:22AM -0500, Don Seiler wrote:\n> > ==> /sys/kernel/mm/ksm/run <==\n> > 0\n>\n> Was it off to begin with ?\n> If not, you can set it to \"2\" to \"unshare\" pages.\n>\n\nYes we haven't changed this. It was already set to 0.\n\n\n>\n> > ==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n> > 1\n>\n> So I'd suggest trying with this disabled.\n>\n\nMy understanding was that THP is disabled anyway. 
What would this defrag\nfeature be doing now?\n\n\n> I don't know if I ever fully understood the problem, but it sounds like at\n> least in your case it's related to large shared_buffers, and hugepages,\n> which\n> cannot be swapped out.\n>\n\nBasically the problem is our DB host getting slammed with connections (even\nwith pgbouncer in place). We see the CPU load spiking, and when we check\n\"top\" we regularly see \"kswapd\" at the top of the list. For example, just\nnow kswapd is at 72 %CPU in top. The next highest is a postgres process at\n6.6 %CPU.\n\nOur shared_buffers is set to 32GB, and HugePages is set to 36GB:\n\n# grep Huge /proc/meminfo\nAnonHugePages: 0 kB\nHugePages_Total: 18000\nHugePages_Free: 1897\nHugePages_Rsvd: 41\nHugePages_Surp: 0\nHugepagesize: 2048 kB\n\nAlso FWIW this host is actually a VSphere VM. We're looking into any\nunderlying events during these spikes as well.\n\nDon.\n-- \nDon Seiler\nwww.seiler.us\n\nOn Mon, Apr 13, 2020 at 9:58 AM Justin Pryzby <[email protected]> wrote:On Mon, Apr 13, 2020 at 09:46:22AM -0500, Don Seiler wrote:\n> ==> /sys/kernel/mm/ksm/run <==\n> 0\n\nWas it off to begin with ?\nIf not, you can set it to \"2\" to \"unshare\" pages.Yes we haven't changed this. It was already set to 0. \n\n> ==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==\n> 1\n\nSo I'd suggest trying with this disabled.My understanding was that THP is disabled anyway. What would this defrag feature be doing now? \nI don't know if I ever fully understood the problem, but it sounds like at\nleast in your case it's related to large shared_buffers, and hugepages, which\ncannot be swapped out.Basically the problem is our DB host getting slammed with connections (even with pgbouncer in place). We see the CPU load spiking, and when we check \"top\" we regularly see \"kswapd\" at the top of the list. For example, just now kswapd is at 72 %CPU in top. The next highest is a postgres process at 6.6 %CPU.Our shared_buffers is set to 32GB, and HugePages is set to 36GB:# grep Huge /proc/meminfoAnonHugePages:         0 kBHugePages_Total:   18000HugePages_Free:     1897HugePages_Rsvd:       41HugePages_Surp:        0Hugepagesize:       2048 kB Also FWIW this host is actually a VSphere VM. We're looking into any underlying events during these spikes as well.Don.-- Don Seilerwww.seiler.us", "msg_date": "Mon, 13 Apr 2020 10:08:47 -0500", "msg_from": "Don Seiler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High kswapd" } ]
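The concrete change discussed in this thread is turning off khugepaged defragmentation, since that knob was still set to 1 even though transparent huge pages themselves were disabled. A sketch of the commands is below; they need root, they do not persist across reboots unless repeated from an init script, and the thread does not confirm whether this removed the kswapd spikes on this particular VM.

# Check the current state, as done earlier in the thread:
tail /sys/kernel/mm/ksm/run \
     /sys/kernel/mm/transparent_hugepage/khugepaged/defrag \
     /sys/kernel/mm/transparent_hugepage/enabled \
     /sys/kernel/mm/transparent_hugepage/defrag

# Disable khugepaged defragmentation (the value that was still 1 above):
echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag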
[ { "msg_contents": "Hi,\n\nWe have an odd issue where specifying the same where clause twice causes PG\nto pick a much more efficent plan. We would like to know why.\n\nQuery A (this is the 'slow' query):\nUPDATE problem_instance SET processed = false\nFROM problem\nWHERE problem.id = problem_instance.problem_id\nAND problem.status != 2\nAND processed = true;\n\nQuery B (this is the 'fast' query):\nUPDATE problem_instance SET processed = false\nFROM problem\nWHERE problem.id = problem_instance.problem_id\nAND problem.status != 2\nAND problem.status != 2\nAND processed = true;\n\nThe EXPLAIN ANALYZE for both queries can be found here:-\nQuery A: https://explain.depesz.com/s/lFuy\nQuery B: https://explain.depesz.com/s/Jqmv\n\nThe table definitions (including the indexes) can be found here:-\npublic.problem:\nhttps://gist.github.com/indy-singh/e90ee6d23d053d32c2564501720353df\npublic.problem_instance:\nhttps://gist.github.com/indy-singh/3c77096b91c89428752cf314d8e20286\n\nData stats:-\npublic.problem has around 10,000 rows and once the condition status != 2 is\napplied there are around 800 rows left.\npublic.problem_instance has around 592,000 rows and once the condition\nprocessed = true is applied there are around 370,000 rows left.\n\nPG version:\nPostgreSQL 9.5.19 on x86_64-pc-linux-gnu (Debian 9.5.19-1.pgdg90+1),\ncompiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n\n-- SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts,\nrelhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE\nrelname='TABLE_NAME'\nTable metadata:-\npublic.problem:\nhttps://gist.github.com/indy-singh/ff34a3b6e45432ea4be2bf0b5038e0be\npublic.problem_instance:\nhttps://gist.github.com/indy-singh/a09fe66c8a8840b7661ce9726ebcab71\n\nLast Vacuum:-\npublic.problem: 2020-04-14 23:11:47.51056+01\npublic.problem_instance: 2020-04-14 20:11:04.187138+01\n\nLast Analyze:\npublic.problem: 2020-04-14 23:11:47.592878+01\npublic.problem_instance: 2020-04-14 20:11:04.508432+01\n\nServer Configuration:\nhttps://gist.github.com/indy-singh/8386d59206af042d365e5cd49fbae68f\n\nI tried my best getting all the information up front, please let me know if\nI missed anything.\n\nThanks,\nIndy\n\nHi,We have an odd issue where specifying the same where clause twice causes PG to pick a much more efficent plan. 
We would like to know why.Query A (this is the 'slow' query):UPDATE problem_instance SET processed = falseFROM problemWHERE problem.id = problem_instance.problem_idAND problem.status != 2AND processed = true;Query B (this is the 'fast' query):UPDATE problem_instance SET processed = falseFROM problemWHERE problem.id = problem_instance.problem_idAND problem.status != 2AND problem.status != 2AND processed = true;The EXPLAIN ANALYZE for both queries can be found here:-Query A: https://explain.depesz.com/s/lFuyQuery B: https://explain.depesz.com/s/JqmvThe table definitions (including the indexes) can be found here:-public.problem: https://gist.github.com/indy-singh/e90ee6d23d053d32c2564501720353dfpublic.problem_instance: https://gist.github.com/indy-singh/3c77096b91c89428752cf314d8e20286Data stats:-public.problem has around 10,000 rows and once the condition status != 2 is applied there are around 800 rows left.public.problem_instance has around 592,000 rows and once the condition processed = true is applied there are around 370,000 rows left.PG version:PostgreSQL 9.5.19 on x86_64-pc-linux-gnu (Debian 9.5.19-1.pgdg90+1), compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit-- SELECT relname, relpages, reltuples, relallvisible, relkind, relnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class WHERE relname='TABLE_NAME'Table metadata:-public.problem: https://gist.github.com/indy-singh/ff34a3b6e45432ea4be2bf0b5038e0bepublic.problem_instance: https://gist.github.com/indy-singh/a09fe66c8a8840b7661ce9726ebcab71Last Vacuum:-public.problem: 2020-04-14 23:11:47.51056+01public.problem_instance: 2020-04-14 20:11:04.187138+01Last Analyze:public.problem: 2020-04-14 23:11:47.592878+01public.problem_instance: 2020-04-14 20:11:04.508432+01Server Configuration: https://gist.github.com/indy-singh/8386d59206af042d365e5cd49fbae68fI tried my best getting all the information up front, please let me know if I missed anything.Thanks,Indy", "msg_date": "Wed, 15 Apr 2020 20:55:53 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "On Thu, 16 Apr 2020 at 07:56, [email protected] <[email protected]> wrote:\n> We have an odd issue where specifying the same where clause twice causes PG to pick a much more efficent plan. We would like to know why.\n\n> The EXPLAIN ANALYZE for both queries can be found here:-\n> Query A: https://explain.depesz.com/s/lFuy\n> Query B: https://explain.depesz.com/s/Jqmv\n\nThis is basically down to just a poor join selectivity estimation.\nThe selectivity estimation on the duplicate not equal clause is not\nremoved by the planner and the selectivity of that is taking into\naccount twice which reduces the selectivity of the table named\n\"problem\". With that selectivity taken into account, the query planner\nthinks a nested loop will be a more optimal plan, to which it seems to\nbe.\n\nJoin selectivity estimations can use the most common values lists as\nyou may see if you look at the pg_stats view for the tables and\ncolumns involved in the join condition. Perhaps ID columns are not\ngood candidates to get an MCV list in the stats. In that case, the\nndistinct estimate will be used. If there's no MCV list in the stats\nthen check ndistinct is reasonably accurate. If there is an MCV list,\nthen you can make that bigger by increasing the statistics targets on\nthe join columns and running ANALYZE. 
Note: Planning can become slower\nwhen you increase the statistics targets.\n\nStarting with PostgreSQL 9.6, foreign keys are also used to help with\njoin selectivity estimations. I see you have a suitable foreign key\nfrom the schema you posted. You might want to add that to the list of\nreasons to upgrade.\n\nDavid\n\n\n", "msg_date": "Thu, 16 Apr 2020 11:57:49 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "> Starting with PostgreSQL 9.6, foreign keys are also used to help with\n> join selectivity estimations. I see you have a suitable foreign key\n> from the schema you posted. You might want to add that to the list of\n> reasons to upgrade.\n\nApologies for the delay in response. I've had \"PostgreSQL 9.6.3,\ncompiled by Visual C++ build 1800, 64-bit\" setup at home for a while\nand after importing the data across I'm still seeing the same\nbehaviour.\n\nEven after upgrading my local install of PG to \"PostgreSQL 12.2,\ncompiled by Visual C++ build 1914, 64-bit\" and I'm still seeing the\nsame behaviour.\n\nPlans for PG12:-\nQuery A: https://explain.depesz.com/s/zrVD\nQuery B: https://explain.depesz.com/s/ZLWe\n\nThe settings for my home setup are left at default, nothing special.\n\nIndy\n\n\n", "msg_date": "Mon, 20 Apr 2020 01:50:17 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "On Wed, Apr 15, 2020 at 08:55:53PM +0100, [email protected] wrote:\n> We have an odd issue where specifying the same where clause twice causes PG\n> to pick a much more efficent plan. We would like to know why.\n\n> Query B (this is the 'fast' query):\n> UPDATE problem_instance SET processed = false\n> FROM problem\n> WHERE problem.id = problem_instance.problem_id\n> AND problem.status != 2\n> AND problem.status != 2\n> AND processed = true;\n\nWhen you specify redundant condition, it results in an underestimate, as\nexpected:\n\nIndex Scan using problem_id_idx1 on public.problem (cost=0.28..624.68 ROWS=73 width=14) (actual time=0.011..0.714 ROWS=841 loops=1)\n Filter: ((problem.status <> 2) AND (problem.status <> 2))\n\nIn this case, doing an index scans on problem_instance is apparently faster\nthan an seq scan.\n\nI think the duplicate condition is fooling the planner, and by chance it's\ngiving a better plan. That might indicate that your settings aren't ideal.\nMaybe random_page_cost should be lower, which would encourage index scans. If\nyou're using SSD storage, or if the DB is small compared with shared_buffers or\nRAM, then random_page_cost should be closer to seq_page_cost.\n\nHow large are the indexes? problem_id_idx1 ?\n\nOn Mon, Apr 20, 2020 at 01:50:17AM +0100, [email protected] wrote:\n> Even after upgrading my local install of PG to \"PostgreSQL 12.2,\n> compiled by Visual C++ build 1914, 64-bit\" and I'm still seeing the\n> same behaviour.\n\n> Server Configuration:\n> https://gist.github.com/indy-singh/8386d59206af042d365e5cd49fbae68f\n> shared_buffers \t2GB \tconfiguration file\n> effective_cache_size \t6GB \tconfiguration file\n\nNote, until v10, the documentation said this:\n\nhttps://www.postgresql.org/docs/9.6/runtime-config-resource.html\n|Also, on Windows, large values for shared_buffers aren't as effective. You may\n|find better results keeping the setting relatively low and using the operating\n|system cache more instead. 
The useful range for shared_buffers on Windows\n|systems is generally from 64MB to 512MB.\n\nIt would be interesting to know\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 19 Apr 2020 21:33:57 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "> If you're using SSD storage, or if the DB is small compared with shared_buffers or RAM, then random_page_cost should be closer to seq_page_cost.\n\nI don't *think* we are using SSDs but I'll need to confirm that though.\n\n> How large are the indexes? problem_id_idx1 ?\n\nUsing the query from here:\nhttps://wiki.postgresql.org/wiki/Index_Maintenance#Index_size.2Fusage_statistics\nOutput here: https://gist.github.com/indy-singh/e33eabe5cc937043c93b42a8783b3bfb\n\nI've setup a repo here where it is possible to reproduce the weird\nbehaviour I'm getting:-\n\nhttps://github.com/indy-singh/postgres-duplicate-where-conditon\n\nThat contains the data (amended to remove any private information) as\nwell as the statements need to recreate tables, indices, and\nconstraints,\n\nI think after some trial and error this is something to do with the\nsize of the table and statistics. I've been trying to put together a\nShort, Self Contained, Correct example (http://sscce.org/) and the\nproblem only appears when fill problem_instance.message with junk, but\nI have to do it in two steps as outlined in the README in repo.\n\nIndy\n\n\n", "msg_date": "Fri, 24 Apr 2020 17:33:25 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "> I don't *think* we are using SSDs but I'll need to confirm that though.\n\nConfirmed we are not using SSDs but '10K RPM SAS in RAID-10.'\n\nI've also been hunt for other queries that show this behaviour too,\nand I've found one. The PG settings/versions will be different in this\nexample due to the earlier example being for our internal CI/CD tool\nwhich is hosted a on local instance of PG. 
This example is directly\nfrom our production servers.\n\nQuery C (slow):-\nSELECT COUNT(1)\nFROM proposal.proposal\nINNER JOIN proposal.note ON proposal.note.proposal_reference =\nproposal.proposal.reference\nWHERE 1 = 1\nAND proposal.proposal.system_id = 11\nAND proposal.proposal.legacy_organisation_id IN (6, 7, 11, 16, 18, 44,\n200, 218, 233, 237, 259, 47)\nAND proposal.proposal.has_been_anonymised = false\nAND proposal.note.legacy_read_by IS NULL\nAND proposal.note.type_id IN (1, 4, 9)\nAND proposal.note.entry_time > '2020-04-01'\nAND proposal.note.entry_time < '2020-05-01';\n\nQuery D (fast):-\nSELECT COUNT(1)\nFROM proposal.proposal\nINNER JOIN proposal.note ON proposal.note.proposal_reference =\nproposal.proposal.reference\nWHERE 1 = 1\nAND proposal.proposal.system_id = 11\nAND proposal.proposal.legacy_organisation_id IN (6, 7, 11, 16, 18, 44,\n200, 218, 233, 237, 259, 47)\nAND proposal.proposal.has_been_anonymised = false\nAND proposal.proposal.has_been_anonymised = false\nAND proposal.note.legacy_read_by IS NULL\nAND proposal.note.type_id IN (1, 4, 9)\nAND proposal.note.entry_time > '2020-04-01'\nAND proposal.note.entry_time < '2020-05-01';\n\nThe EXPLAIN ANALYZE for both queries can be found here:-\nQuery C: https://explain.depesz.com/s/5Mbu\nQuery D: https://explain.depesz.com/s/jVnH\n\nThe table definitions (including the indexes) can be found here:-\nproposal.proposal:\nhttps://gist.github.com/indy-singh/6ccd86ff859e7cdad2ec1bf73a61445c\nproposal.note: https://gist.github.com/indy-singh/6c1f85ad15cb92e138447a91d8cf3ecb\n\nData stats:-\nproposal.proposal has 10,324,779 rows and once the table specific\nconditions are applied there are 39,223 rows left.\nproposal.note has 28,97,698 rows and once the table specific\nconditions are applied there are 54,359 rows left.\n\nPG version:\nPostgreSQL 9.5.17 on x86_64-pc-linux-gnu (Debian 9.5.17-1.pgdg90+1),\ncompiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit\n\n-- SELECT relname, relpages, reltuples, relallvisible, relkind,\nrelnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\nWHERE relname='TABLE_NAME'\nTable metadata:-\nproposal.proposal:\nhttps://gist.github.com/indy-singh/24e7ec8f3d4e2c3ac73f724cea52f9de\nproposal.note: https://gist.github.com/indy-singh/104d6ec7ef8179461eb4f91c121615e0\n\nIndex Stats:-\nproposal.proposal:\nhttps://gist.github.com/indy-singh/1d41d15addb543bcdafc8641b9d7f036\nproposal.note: https://gist.github.com/indy-singh/7a698dec98dd8ef2808345d1802e6b6a\n\nLast Vacuum:-\nproposal.proposal: Never\nproposal.note: 2020-04-17 15:10:57.256013+01\n\nLast Analyze:\nproposal.proposal: Never\nproposal.note: 2020-04-07 11:48:49.689622+01\n\nServer Configuration:\nhttps://gist.github.com/indy-singh/b19134873f266ee6ce2b9815504d130c\n\nIndy\n\n\n", "msg_date": "Sun, 3 May 2020 12:39:56 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" }, { "msg_contents": "Why not vacuum analyze both tables to ensure stats are up to date?\n\nHave you customized default_statistics_target from 100? It may be that 250\nwould give you a more complete sample of the table without increasing the\nsize of the stats tables too much such that planning time increases hugely.\n\nDo you know if any of these columns are correlated? Custom stats with\nCREATE STATISTICS may help the planner make better decisions if so.\n\nI usually hesitate to put any boolean field in an index. 
Do you need\nthe proposal.has_been_anonymised false values only, if so you could add\nthat to a WHERE condition on the index instead of including it as the\nleading column.\n\n", "msg_date": "Mon, 4 May 2020 11:11:41 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Duplicate WHERE condition changes performance and plan" } ]
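A minimal sketch of the suggestions collected in this thread, written against the proposal.proposal / proposal.note tables quoted above. The statistics target, the statistics object name, and the index name are illustrative assumptions rather than objects from the poster's database, and CREATE STATISTICS only exists from PostgreSQL 10 onwards, so it would not apply to the 9.5 production server mentioned earlier:

-- Refresh planner statistics on both tables.
VACUUM ANALYZE proposal.proposal;
VACUUM ANALYZE proposal.note;

-- Sample a skewed column more thoroughly than the default_statistics_target of 100.
ALTER TABLE proposal.proposal ALTER COLUMN legacy_organisation_id SET STATISTICS 250;
ANALYZE proposal.proposal;

-- Describe correlated columns to the planner (PostgreSQL 10+ only, name is hypothetical).
CREATE STATISTICS proposal_sys_org_stats (dependencies)
    ON system_id, legacy_organisation_id
    FROM proposal.proposal;
ANALYZE proposal.proposal;

-- Keep the boolean out of the key columns by making it the index predicate instead.
CREATE INDEX proposal_not_anonymised_idx
    ON proposal.proposal (system_id, legacy_organisation_id)
    WHERE has_been_anonymised = false;

If the data sits on SSDs or is mostly cached, lowering random_page_cost from its default of 4.0 (for example to somewhere between 1.1 and 2.0), as suggested earlier in the thread, also makes index scans look cheaper to the planner.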
[ { "msg_contents": "Hi. Apologies in advance if this is not the right place to ask.\n\nI was wondering if anyone was using unlogged tables for website\nsessions in production. I'm interested if UNLOGGED breaks the\nprevailing opinion that you don't put sessions in PG.\n\n\n", "msg_date": "Wed, 15 Apr 2020 23:53:58 +0100", "msg_from": "Stephen Carboni <[email protected]>", "msg_from_op": true, "msg_subject": "Using unlogged tables for web sessions" } ]
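For concreteness, a minimal sketch of what an unlogged session table might look like; the table and column names are made up for illustration and are not taken from the question:

CREATE UNLOGGED TABLE web_session (
    session_id  text PRIMARY KEY,
    user_id     bigint,
    data        jsonb,
    expires_at  timestamptz NOT NULL
);

The trade-off is that unlogged tables skip WAL: writes are cheaper and generate no replication traffic, but the table is truncated after a crash and is not available on streaming-replication standbys, which is often acceptable for session data that users can recreate by logging in again.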
[ { "msg_contents": "Hi,\n\nI don't understand why postgresql doesn't use clearly the most optimal \nindex for a query in PLAN.\n\nCan you help me?\n\n\ncreate table public.tabla\n(\n     cod_tabla bigint not null,\n     tabla varchar(31) not null,\n     constraint pk_tabla primary key (cod_tabla)\n);\n\n\ncreate table public.entidad\n(\n     cod_entidad bigint not null,\n     cod_tabla bigint not null,\n     cod_entidad_tabla bigint not null,\n     constraint pk_entidad primary key (cod_entidad),\n     constraint fk_tabla_entidad foreign key (cod_tabla)\n         references public.tabla (cod_tabla) match simple\n         on update cascade\n         on delete cascade\n);\n\n\nCREATE INDEX idx_tabla_entidad\n     ON public.entidad USING btree\n     (cod_tabla ASC NULLS LAST);\n\nCREATE INDEX idx_entidad_tabla_4\n     ON public.entidad USING btree\n     (cod_entidad_tabla ASC NULLS LAST)\n     INCLUDE(cod_entidad, cod_tabla, cod_entidad_tabla)\n     WHERE cod_tabla::bigint = 4;\n\n\n\n\nSELECT count(*) from entidad;\n34.413.354\n\nSELECT count(*) from entidad where cod_tabla = 4;\n1.409.985\n\n\n\nexplain (analyze, buffers, format text) select * from entidad where \ncod_tabla = 4\n\n\nIndex Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41 \nrows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n   Index Cond: ((cod_tabla)::bigint = 4)\n   Buffers: shared hit=12839\nPlanning Time: 0.158 ms\nExecution Time: 311.828 ms\n\n\n\nWhy postgresql doesnt use the index idx_entidad_tabla_4?????\n\nThanks in advance\n\n\n\n\n", "msg_date": "Thu, 23 Apr 2020 13:36:43 +0200", "msg_from": "Arcadio Ortega Reinoso <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL does not choose my indexes well" }, { "msg_contents": "> CREATE INDEX idx_tabla_entidad\n>     ON public.entidad USING btree\n>     (cod_tabla ASC NULLS LAST);\n>\n> CREATE INDEX idx_entidad_tabla_4\n>     ON public.entidad USING btree\n>     (cod_entidad_tabla ASC NULLS LAST)\n>     INCLUDE(cod_entidad, cod_tabla, cod_entidad_tabla)\n>     WHERE cod_tabla::bigint = 4;\n>\n>\n> SELECT count(*) from entidad;\n> 34.413.354\n>\n> SELECT count(*) from entidad where cod_tabla = 4;\n> 1.409.985\n>\n>\n> explain (analyze, buffers, format text) select * from entidad where cod_tabla = 4\n> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41 rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n>   Index Cond: ((cod_tabla)::bigint = 4)\n>   Buffers: shared hit=12839\n> Planning Time: 0.158 ms\n> Execution Time: 311.828 ms\n>\n>\n> Why postgresql doesnt use the index idx_entidad_tabla_4?????\n\nBecause that index does not contain the column from the WHERE clause as an \"indexed\" column (only as an included column).\nPlus: scanning idx_tabla_entidad is more efficient because that index is smaller.\n\nWhat do you think that idx_entidad_tabla_4 would be the better choice?\n\nThomas\n\n\n\n", "msg_date": "Thu, 23 Apr 2020 13:43:55 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n\n> > CREATE INDEX idx_tabla_entidad\n> > ON public.entidad USING btree\n> > (cod_tabla ASC NULLS LAST);\n> >\n> > CREATE INDEX idx_entidad_tabla_4\n> > ON public.entidad USING btree\n> > (cod_entidad_tabla ASC NULLS LAST)\n> > INCLUDE(cod_entidad, cod_tabla, cod_entidad_tabla)\n> > WHERE cod_tabla::bigint = 4;\n> 
>\n> >\n> > SELECT count(*) from entidad;\n> > 34.413.354\n> >\n> > SELECT count(*) from entidad where cod_tabla = 4;\n> > 1.409.985\n> >\n> >\n> > explain (analyze, buffers, format text) select * from entidad where\n> cod_tabla = 4\n> > Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41\n> rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n> > Index Cond: ((cod_tabla)::bigint = 4)\n> > Buffers: shared hit=12839\n> > Planning Time: 0.158 ms\n> > Execution Time: 311.828 ms\n> >\n> >\n> > Why postgresql doesnt use the index idx_entidad_tabla_4?????\n>\n> Because that index does not contain the column from the WHERE clause as an\n> \"indexed\" column (only as an included column).\n\n\nBut it does match the partials index’s predicate\n\n\n> Plus: scanning idx_tabla_entidad is more efficient because that index is\n> smaller.\n>\n\nReally? The absence of 33 million rows in the partial index seems like it\nwould compensate fully and then some for the extra included columns.\n\nDavid J.\n\nOn Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:> CREATE INDEX idx_tabla_entidad\n>     ON public.entidad USING btree\n>     (cod_tabla ASC NULLS LAST);\n>\n> CREATE INDEX idx_entidad_tabla_4\n>     ON public.entidad USING btree\n>     (cod_entidad_tabla ASC NULLS LAST)\n>     INCLUDE(cod_entidad, cod_tabla, cod_entidad_tabla)\n>     WHERE cod_tabla::bigint = 4;\n>\n>\n> SELECT count(*) from entidad;\n> 34.413.354\n>\n> SELECT count(*) from entidad where cod_tabla = 4;\n> 1.409.985\n>\n>\n> explain (analyze, buffers, format text) select * from entidad where cod_tabla = 4\n> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41 rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n>   Index Cond: ((cod_tabla)::bigint = 4)\n>   Buffers: shared hit=12839\n> Planning Time: 0.158 ms\n> Execution Time: 311.828 ms\n>\n>\n> Why postgresql doesnt use the index idx_entidad_tabla_4?????\n\nBecause that index does not contain the column from the WHERE clause as an \"indexed\" column (only as an included column).But it does match the partials index’s predicate \nPlus: scanning idx_tabla_entidad is more efficient because that index is smaller.Really?  The absence of 33 million rows in the partial index seems like it would compensate fully and then some for the extra included columns. David J.", "msg_date": "Thu, 23 Apr 2020 06:57:29 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n>> Plus: scanning idx_tabla_entidad is more efficient because that index is\n>> smaller.\n\n> Really? The absence of 33 million rows in the partial index seems like it\n> would compensate fully and then some for the extra included columns.\n\nOn the other hand, an indexscan is likely to end up being effectively\nrandom-access rather than the purely sequential access involved in\na seqscan. (If the index was built recently, then it might not be\nso bad --- but the planner doesn't know that, so it assumes that the\nindex leaf pages are laid out pretty randomly.) 
Moreover, unless the\ntable is mostly marked all-visible, there will be another pile of\nrandomized accesses into the heap to validate visibility of the index\nentries.\n\nBottom line is that this choice is not nearly as open-and-shut as\nthe OP seems to think. In fact, it's fairly likely that this is a\nbadly designed index, not a well-designed one that the planner is\nunaccountably failing to use. Both covering indexes and partial\nindexes are easily-misused features that can make performance worse\nnot better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 11:29:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": ">\n> \"unless the table is mostly marked all-visible\"\n\n\nIs that taken into account during planning when evaluating index scan vs\nsequential scan?\n\n\"unless the table is mostly marked all-visible\"Is that taken into account during planning when evaluating index scan vs sequential scan?", "msg_date": "Thu, 23 Apr 2020 10:20:41 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Michael Lewis <[email protected]> writes:\n>> \"unless the table is mostly marked all-visible\"\n\n> Is that taken into account during planning when evaluating index scan vs\n> sequential scan?\n\nIt is, although the planner's estimate is based on what the last ANALYZE\nsaw, which might be out-of-date.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 12:31:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "On Thu, Apr 23, 2020 at 8:29 AM Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n> >> Plus: scanning idx_tabla_entidad is more efficient because that index is\n> >> smaller.\n>\n> > Really? The absence of 33 million rows in the partial index seems like\n> it\n> > would compensate fully and then some for the extra included columns.\n>\n> On the other hand, an indexscan is likely to end up being effectively\n> random-access rather than the purely sequential access involved in\n> a seqscan.\n>\n\nI feel like I'm missing something as the OP's query is choosing indexscan -\njust it is choosing to scan the full index containing the searched upon\nfield instead of a partial index that doesn't contain the field but whose\npredicate matches the where condition - in furtherance of a count(*)\ncomputation where the columns don't really matter.\n\nI do get \"its going to perform 1.4 million random index entries and heap\nlookup anyway - so it doesn't really matter\" - but the first answer was\n\"the full index is smaller than the partial\" which goes against my\nintuition.\n\nThe sequential scan that isn't being used would have to touch 25x the\nnumber of records - so its non-preference seems reasonable.\n\nDavid J.\n\nOn Thu, Apr 23, 2020 at 8:29 AM Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n>> Plus: scanning idx_tabla_entidad is more efficient because that index is\n>> smaller.\n\n> Really?  
The absence of 33 million rows in the partial index seems like it\n> would compensate fully and then some for the extra included columns.\n\nOn the other hand, an indexscan is likely to end up being effectively\nrandom-access rather than the purely sequential access involved in\na seqscan.I feel like I'm missing something as the OP's query is choosing indexscan - just it is choosing to scan the full index containing the searched upon field instead of a partial index that doesn't contain the field but whose predicate matches the where condition - in furtherance of a count(*) computation where the columns don't really matter.I do get \"its going to perform 1.4 million random index entries and heap lookup anyway - so it doesn't really matter\" - but the first answer was \"the full index is smaller than the partial\" which goes against my intuition.The sequential scan that isn't being used would have to touch 25x the number of records - so its non-preference seems reasonable.David J.", "msg_date": "Thu, 23 Apr 2020 09:50:03 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n> >> Plus: scanning idx_tabla_entidad is more efficient because that index is\n> >> smaller.\n> \n> > Really? The absence of 33 million rows in the partial index seems like it\n> > would compensate fully and then some for the extra included columns.\n> \n> On the other hand, an indexscan is likely to end up being effectively\n> random-access rather than the purely sequential access involved in\n> a seqscan. \n\nAn indexscan is what was chosen though, so this doesn't really seem to\nbe a question of index scan vs. seq scan, it's a question of why one\nindex vs. another, though it seems a bit odd that we'd pick a regular\nindex scan instead of a BitmapHeap/Index scan.\n\n> (If the index was built recently, then it might not be\n> so bad --- but the planner doesn't know that, so it assumes that the\n> index leaf pages are laid out pretty randomly.) 
Moreover, unless the\n> table is mostly marked all-visible, there will be another pile of\n> randomized accesses into the heap to validate visibility of the index\n> entries.\n\nIf the table *is* marked all visible, though, then certainly that index\nwill be better, and I think that's what a lot of this is coming down to\nin this particular case.\n\nPopulating the tables provided based on the minimal info we got,\nminimizing the numbers of pages that 'cod_tabla=4' is on:\n\ninsert into tabla select generate_series, 'abcdef' from generate_series(1,20);\ninsert into entidad select generate_series, 4, generate_series+1 from generate_series(1,1409985);\ninsert into entidad select generate_series+1409985, generate_series % 20 + 1, generate_series+1 from generate_series(1,34413354) where generate_series % 20 + 1 <> 4;\nvacuum analyze entidad;\n\nWith this, the table is 1.7GB, idx_tabla_entidad is about 700MB, while\nidx_entidad_tabla_4 is only 81MB.\n\nWith this, on v12-HEAD, PG will happily use the partial index:\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using idx_entidad_tabla_4 on entidad (cost=0.43..55375.20 rows=1422085 width=24) (actual time=0.050..144.745 rows=1409985 loops=1)\n Heap Fetches: 0\n Buffers: shared hit=8497\n Planning Time: 0.338 ms\n Execution Time: 183.081 ms\n(5 rows)\n\nDropping that index and then running it again shows:\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entidad (cost=26641.72..608515.59 rows=1422085 width=24) (actual time=102.844..242.522 rows=1409985 loops=1)\n Recheck Cond: (cod_tabla = 4)\n Heap Blocks: exact=8981\n Buffers: shared read=12838\n -> Bitmap Index Scan on idx_tabla_entidad (cost=0.00..26286.20 rows=1422085 width=0) (actual time=101.969..101.969 rows=1409985 loops=1)\n Index Cond: (cod_tabla = 4)\n Buffers: shared read=3857\n Planning Time: 0.264 ms\n Execution Time: 277.854 ms\n(9 rows)\n\nIf we spread out where the 'cod_tabla=4' tuples are, the partial index\nis still used (note that we end up with more like 1.7M tuples instead of\n1.4M, but I don't think that's terribly relevant):\n\ntruncate entidad;\ninsert into entidad select generate_series, generate_series % 20 + 1, generate_series+1 from generate_series(1,34413354);\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using idx_entidad_tabla_4 on entidad (cost=0.43..65231.31 rows=1664459 width=24) (actual time=0.036..185.171 rows=1720668 loops=1)\n Heap Fetches: 0\n Buffers: shared hit=10375\n Planning Time: 0.247 ms\n Execution Time: 233.205 ms\n(5 rows)\n\nThings get a lot worse when we drop that partial index:\n\ndrop index idx_entidad_tabla_4;\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on entidad (cost=30088.12..270087.86 rows=1664459 width=24) (actual 
time=163.418..1465.733 rows=1720668 loops=1)\n Recheck Cond: (cod_tabla = 4)\n Heap Blocks: exact=219194\n Buffers: shared read=223609\n -> Bitmap Index Scan on idx_tabla_entidad (cost=0.00..29672.01 rows=1664459 width=0) (actual time=128.544..128.544 rows=1720668 loops=1)\n Index Cond: (cod_tabla = 4)\n Buffers: shared read=4415\n Planning Time: 0.094 ms\n Execution Time: 1515.066 ms\n(9 rows)\n\nTo get the kind of plan that the OP got, I dropped random_page_cost to 1.0:\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_tabla_entidad on entidad (cost=0.56..251946.06 rows=1664459 width=24) (actual time=0.216..1236.371 rows=1720668 loops=1)\n Index Cond: (cod_tabla = 4)\n Buffers: shared read=223609\n Planning Time: 0.192 ms\n Execution Time: 1283.460 ms\n(5 rows)\n\nEven in that case though, when I recreate the partial index:\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using idx_entidad_tabla_4 on entidad (cost=0.43..35033.31 rows=1664459 width=24) (actual time=0.039..211.644 rows=1720668 loops=1)\n Heap Fetches: 0\n Buffers: shared hit=7 read=10368\n Planning Time: 0.144 ms\n Execution Time: 256.304 ms\n(5 rows)\n\nSo it's not really clear what's happening in the OP's case, we'd really\nneed more information to figure it out, it seems to me.\n\n> Bottom line is that this choice is not nearly as open-and-shut as\n> the OP seems to think. In fact, it's fairly likely that this is a\n> badly designed index, not a well-designed one that the planner is\n> unaccountably failing to use. Both covering indexes and partial\n> indexes are easily-misused features that can make performance worse\n> not better.\n\nWhile I agree they can be mis-used, and that it's not open-and-shut,\nit's not really clear to me what's going on that's causing us to avoid\nthat partial index in the OP's case when we'll certainly use it in\ngeneral. The partial index in this particular case seems like it'd be\nperfectly well suited to this query and that we should be using it (as\nwe are in the tests I did above).\n\nI do wonder if we are maybe missing a bet at times though, considering\nthat I'm pretty sure we'll always go through the index in order, and\ntherefore randomly, even when we don't actually need the results in\norder..? Has there been much consideration for just opening an index\nand sequentially scanning it in cases like this where we have to go\nthrough all of the index anyway and don't need the results in order? I\nget that we'd still have to consider random access costs if the VM is\nout of date, but if it's not, I would think we could give such an\napproach a lower cost as we'd be going through the index sequentially\ninstead of the normal random access that we do.\n\nThanks,\n\nStephen", "msg_date": "Thu, 23 Apr 2020 13:18:58 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Greetings,\n\n* David G. Johnston ([email protected]) wrote:\n> On Thu, Apr 23, 2020 at 8:29 AM Tom Lane <[email protected]> wrote:\n> > \"David G. 
Johnston\" <[email protected]> writes:\n> > > On Thursday, April 23, 2020, Thomas Kellerer <[email protected]> wrote:\n> > >> Plus: scanning idx_tabla_entidad is more efficient because that index is\n> > >> smaller.\n> >\n> > > Really? The absence of 33 million rows in the partial index seems like\n> > it\n> > > would compensate fully and then some for the extra included columns.\n> >\n> > On the other hand, an indexscan is likely to end up being effectively\n> > random-access rather than the purely sequential access involved in\n> > a seqscan.\n> \n> I feel like I'm missing something as the OP's query is choosing indexscan -\n> just it is choosing to scan the full index containing the searched upon\n> field instead of a partial index that doesn't contain the field but whose\n> predicate matches the where condition - in furtherance of a count(*)\n> computation where the columns don't really matter.\n\nThe actual query isn't a count(*) though, it's a 'select *'.\n\n> I do get \"its going to perform 1.4 million random index entries and heap\n> lookup anyway - so it doesn't really matter\" - but the first answer was\n> \"the full index is smaller than the partial\" which goes against my\n> intuition.\n\nYeah, I'm pretty sure the full index is quite a bit bigger than the\npartial index- see my note from just a moment ago.\n\n> The sequential scan that isn't being used would have to touch 25x the\n> number of records - so its non-preference seems reasonable.\n\nAgreed on that.\n\nThanks,\n\nStephen", "msg_date": "Thu, 23 Apr 2020 13:20:48 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> I do wonder if we are maybe missing a bet at times though, considering\n> that I'm pretty sure we'll always go through the index in order, and\n> therefore randomly, even when we don't actually need the results in\n> order..? Has there been much consideration for just opening an index\n> and sequentially scanning it in cases like this where we have to go\n> through all of the index anyway and don't need the results in order?\n\nAs I recall, it's unsafe to do so because of consistency considerations,\nspecifically there's a risk of missing or double-visiting some entries due\nto concurrent index page splits. VACUUM has some way around that, but it\ndoesn't work for regular data-fetching cases. (nbtree/README has more\nabout this, but I don't feel like looking it up for you.)\n\nMy guess based on your results is that the OP's table *isn't* all-visible,\nor at least the planner doesn't know it is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 13:56:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > I do wonder if we are maybe missing a bet at times though, considering\n> > that I'm pretty sure we'll always go through the index in order, and\n> > therefore randomly, even when we don't actually need the results in\n> > order..? 
Has there been much consideration for just opening an index\n> > and sequentially scanning it in cases like this where we have to go\n> > through all of the index anyway and don't need the results in order?\n> \n> As I recall, it's unsafe to do so because of consistency considerations,\n> specifically there's a risk of missing or double-visiting some entries due\n> to concurrent index page splits. VACUUM has some way around that, but it\n> doesn't work for regular data-fetching cases. (nbtree/README has more\n> about this, but I don't feel like looking it up for you.)\n\nThat README isn't exactly small, but the mention of VACUUM having a\ntrick there helped me find this:\n\n-------\nThe tricky part of this is to avoid missing any deletable tuples in the\npresence of concurrent page splits: a page split could easily move some\ntuples from a page not yet passed over by the sequential scan to a\nlower-numbered page already passed over. (This wasn't a concern for the\nindex-order scan, because splits always split right.) To implement this,\nwe provide a \"vacuum cycle ID\" mechanism that makes it possible to\ndetermine whether a page has been split since the current btbulkdelete\ncycle started. If btbulkdelete finds a page that has been split since\nit started, and has a right-link pointing to a lower page number, then\nit temporarily suspends its sequential scan and visits that page instead.\nIt must continue to follow right-links and vacuum dead tuples until\nreaching a page that either hasn't been split since btbulkdelete started,\nor is above the location of the outer sequential scan. Then it can resume\nthe sequential scan. This ensures that all tuples are visited.\n-------\n\nSo the issue is with a page split happening and a tuple being moved to\nan earlier leaf page, resulting in us potentially not seeing it even\nthough we should have during a sequential scan. The trick that VACUUM\ndoes seems pretty involved and would be more complicated for use for\nthis as it's not ok to return the same tuples multiple times (though\nperhaps in a BitmapIndexScan we could handle that..). 
Then again, maybe\nthe skipping scan mechanism that's been talked about recently would let\nus avoid having to scan the entire index even in cases where the\nconditional doesn't include the initial index columns, since it looks\nlike that might be what we're doing now.\n\n> My guess based on your results is that the OP's table *isn't* all-visible,\n> or at least the planner doesn't know it is.\n\nHrmpf, even then I seem to end up with an IndexOnlyScan-\n\n=# select * from pg_visibility_map('entidad') where all_visible;\nblkno | all_visible | all_frozen \n-------+-------------+------------\n(0 rows)\n\nanalyze entidad;\n\n=# select relallvisible from pg_class where relname = 'entidad';\n relallvisible \n---------------\n 0\n(1 row)\n\n=# explain (analyze, buffers) select * from entidad where cod_tabla = 4;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using idx_entidad_tabla_4 on entidad (cost=0.43..170908.14 rows=1657114 width=24) (actual time=0.312..3511.629 rows=1720668 loops=1)\n Heap Fetches: 3441336\n Buffers: shared hit=6444271 read=469499\n Planning Time: 2.831 ms\n Execution Time: 3563.413 ms\n(5 rows)\n\nI'm pretty suspicious that they've made some odd planner configuration\nchanges or something along those lines to end up with the plan they got,\nor there's some reason we don't think we can use the partial index.\n\nThanks,\n\nStephen", "msg_date": "Thu, 23 Apr 2020 16:01:43 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> I'm pretty suspicious that they've made some odd planner configuration\n> changes or something along those lines to end up with the plan they got,\n> or there's some reason we don't think we can use the partial index.\n\nYeah, the latter is definitely a possibility. I find the apparently\nunnecessary cast in the partial-index predicate to be suspicious ---\nmaybe that's blocking matching to the WHERE clause? 
In principle\nthe useless cast should have gotten thrown away, but maybe what we\nwere shown isn't quite exactly the real DDL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 16:33:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "On Thu, Apr 23, 2020 at 1:33 PM Tom Lane <[email protected]> wrote:\n\n> I find the apparently\n> unnecessary cast in the partial-index predicate to be suspicious ---\n> maybe that's blocking matching to the WHERE clause?\n>\n\nI noticed that too...I suspect its related to the ANALYZE result:\n\nIndex Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41\nrows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n Index Cond: ((cod_tabla)::bigint = 4)\n\nSince the index condition ended up cast to bigint the OP probably wrote the\npredicate to match.\n\nDavid J.\n\nOn Thu, Apr 23, 2020 at 1:33 PM Tom Lane <[email protected]> wrote:I find the apparently\nunnecessary cast in the partial-index predicate to be suspicious ---\nmaybe that's blocking matching to the WHERE clause?I noticed that too...I suspect its related to the ANALYZE result:Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)   Index Cond: ((cod_tabla)::bigint = 4)Since the index condition ended up cast to bigint the OP probably wrote the predicate to match.David J.", "msg_date": "Thu, 23 Apr 2020 13:36:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> I noticed that too...I suspect its related to the ANALYZE result:\n\n> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41\n> rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n> Index Cond: ((cod_tabla)::bigint = 4)\n\nYeah, that *strongly* suggests that cod_tabla isn't really bigint.\nI'm wondering about domains, for instance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 16:45:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "El 23/4/20 a las 22:45, Tom Lane escribió:\n> \"David G. 
Johnston\" <[email protected]> writes:\n>> I noticed that too...I suspect its related to the ANALYZE result:\n>> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41\n>> rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n>> Index Cond: ((cod_tabla)::bigint = 4)\n> Yeah, that *strongly* suggests that cod_tabla isn't really bigint.\n> I'm wondering about domains, for instance.\n>\n> \t\t\tregards, tom lane\n>\n>\nActually\n\nCREATE DOMAIN cod_pk AS bigint;\n\ncreate table public.tabla\n(\n     cod_tabla cod_pk not null,\n     tabla varchar(31) not null,\n     constraint pk_tabla primary key (cod_tabla)\n);\n\n\nDo you think is important?\n\n\nThank you very much to all\n\n\n\n\n\n", "msg_date": "Thu, 23 Apr 2020 23:01:36 +0200", "msg_from": "Arcadio Ortega Reinoso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "On Thu, Apr 23, 2020 at 7:36 AM Arcadio Ortega Reinoso <\[email protected]> wrote:\n\n> explain (analyze, buffers, format text) select * from entidad where\n> cod_tabla = 4\n>\n>\n> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41\n> rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n> Index Cond: ((cod_tabla)::bigint = 4)\n> Buffers: shared hit=12839\n> Planning Time: 0.158 ms\n> Execution Time: 311.828 ms\n>\n\nIn order to read 1409985 / 12839 = 109 rows per buffer page, the table must\nbe extraordinarily well clustered on this index. That degree of clustering\nis going to steal much of the thunder from the index-only scan. But in my\nhands, it does still prefer the partial index with index-only scan by a\ncost estimate ratio of 3 to 1 (despite it actually being slightly slower)\nso I don't know why you don't get it being used.\n\nThis was how I populated the table:\n\ninsert into entidad select id, floor(random()*25)::int,\nfloor(random()*10000000)::int from generate_series(1,34000000) f(id);\ncluster entidad USING idx_tabla_entidad ;\n\n0.3 seconds for 1.4 million rows is pretty good. How much better are you\nhoping to get by micro-managing the planner?\n\nTo figure it out, it might help to see the explain (analyze, buffers,\nformat text) of the plan you want it to use. But the only way I see to do\nthat is to drop the other index.\n\nIf you don't want to \"really\" drop the index, you can drop it in a\ntransaction, run the \"explain (analyze, buffers, format text)\" query, and\nrollback the transaction. (Note this will lock the table for the entire\nduration of the transaction, so it is not something to do cavalierly in\nproduction)\n\nCheers,\n\nJeff\n\nOn Thu, Apr 23, 2020 at 7:36 AM Arcadio Ortega Reinoso <[email protected]> wrote:explain (analyze, buffers, format text) select * from entidad where \ncod_tabla = 4\n\n\nIndex Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41 \nrows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)\n   Index Cond: ((cod_tabla)::bigint = 4)\n   Buffers: shared hit=12839\nPlanning Time: 0.158 ms\nExecution Time: 311.828 msIn order to read 1409985 / 12839 = 109 rows per buffer page, the table must be extraordinarily well clustered on this index.  That degree of clustering is going to steal much of the thunder from the index-only scan.  
But in my hands, it does still prefer the partial index with index-only scan by a cost estimate ratio of 3 to 1 (despite it actually being slightly slower) so I don't know why you don't get it being used.This was how I populated the table:insert into entidad select id, floor(random()*25)::int, floor(random()*10000000)::int from generate_series(1,34000000) f(id);cluster entidad USING idx_tabla_entidad ;0.3 seconds for 1.4 million rows is pretty good.  How much better are you hoping to get by micro-managing the planner?To figure it out, it might help to see the \n\nexplain (analyze, buffers, format text) of the plan you want it to use.  But the only way I see to do that is to drop the other index.If you don't want to \"really\" drop the index, you can drop it in a transaction, run the \"explain (analyze, buffers, format text)\" query, and rollback the transaction.  (Note this will lock the table for the entire duration of the transaction, so it is not something to do cavalierly in production)Cheers,Jeff", "msg_date": "Fri, 24 Apr 2020 14:26:33 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Greetings,\n\n* Jeff Janes ([email protected]) wrote:\n> In order to read 1409985 / 12839 = 109 rows per buffer page, the table must\n> be extraordinarily well clustered on this index. That degree of clustering\n> is going to steal much of the thunder from the index-only scan. But in my\n> hands, it does still prefer the partial index with index-only scan by a\n> cost estimate ratio of 3 to 1 (despite it actually being slightly slower)\n> so I don't know why you don't get it being used.\n\nTurns out to be because what was provided wasn't actually what was being\nused- there's a domain in there and that seems to gum up the works and\nmake it so we don't consider the partial index as being something we can\nuse (see the discussion at the end of the other sub-thread).\n\nThanks,\n\nStephen", "msg_date": "Fri, 24 Apr 2020 14:33:23 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> Turns out to be because what was provided wasn't actually what was being\n> used- there's a domain in there and that seems to gum up the works and\n> make it so we don't consider the partial index as being something we can\n> use (see the discussion at the end of the other sub-thread).\n\nSome simple experiments here don't find that a domain-type column prevents\nuse of the partial index. So it's still not entirely clear what's\nhappening for the OP. I concur with Jeff's suggestion to try forcing\nuse of the desired index, and see whether it happens at all and what\nthe cost estimate is.\n\nI'm also wondering exactly which Postgres version this is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Apr 2020 15:39:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "On Fri, Apr 24, 2020 at 2:33 PM Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> * Jeff Janes ([email protected]) wrote:\n> > In order to read 1409985 / 12839 = 109 rows per buffer page, the table\n> must\n> > be extraordinarily well clustered on this index. That degree of\n> clustering\n> > is going to steal much of the thunder from the index-only scan. 
But in\n> my\n> > hands, it does still prefer the partial index with index-only scan by a\n> > cost estimate ratio of 3 to 1 (despite it actually being slightly slower)\n> > so I don't know why you don't get it being used.\n>\n> Turns out to be because what was provided wasn't actually what was being\n> used- there's a domain in there and that seems to gum up the works and\n> make it so we don't consider the partial index as being something we can\n> use (see the discussion at the end of the other sub-thread).\n>\n\n\nThanks. I somehow managed to overlook the existence of the entire last 24\nhours of discussion. But if I change the type of entidad.cod_tabla to\nmatch the domain now shown in table.cod_table, I can still get the index\nonly scan over the partial index. Now the cost estimate has changed so it\nslightly prefers the other index instead (in agreement with the original\nreport) but usage of the partial index-only can is still possible (e.g. if\nI drop the single column full-table index). I don't understand why the\ndomain changes the estimate without changing the execution, but it isn't\nsomething that is very important to me. I'm more interested in the index\nonly scan is not actually much if any faster. Even if there is no IO\nbenefit due to the clustering, I'd still expect there to be some CPU\nbenefit of not jumping back and forth between index pages and heap pages,\nbut iI don't know how much effort it is worth to put into that either.\n\nCheers,\n\nJeff\n\nOn Fri, Apr 24, 2020 at 2:33 PM Stephen Frost <[email protected]> wrote:Greetings,\n\n* Jeff Janes ([email protected]) wrote:\n> In order to read 1409985 / 12839 = 109 rows per buffer page, the table must\n> be extraordinarily well clustered on this index.  That degree of clustering\n> is going to steal much of the thunder from the index-only scan.  But in my\n> hands, it does still prefer the partial index with index-only scan by a\n> cost estimate ratio of 3 to 1 (despite it actually being slightly slower)\n> so I don't know why you don't get it being used.\n\nTurns out to be because what was provided wasn't actually what was being\nused- there's a domain in there and that seems to gum up the works and\nmake it so we don't consider the partial index as being something we can\nuse (see the discussion at the end of the other sub-thread).Thanks.  I somehow managed to overlook the existence of the entire last 24 hours of discussion.  But if I change the type of entidad.cod_tabla tomatch the domain now shown in table.cod_table, I can still get the index only scan over the partial index.  Now the cost estimate has changed so it slightly prefers the other index instead (in agreement with the original report) but usage of the partial index-only can is still possible (e.g. if I drop the single column full-table index).  I don't understand why the domain changes the estimate without changing the execution, but it isn't something that is very important to me.  I'm more interested in the index only scan is not actually much if any faster.  
Even if there is no IO benefit due to the clustering, I'd still expect there to be some CPU benefit of not jumping back and forth between index pages and heap pages, but iI don't know how much effort it is worth to put into that either.Cheers,Jeff", "msg_date": "Fri, 24 Apr 2020 18:59:41 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "Greetings,\n\n* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > Turns out to be because what was provided wasn't actually what was being\n> > used- there's a domain in there and that seems to gum up the works and\n> > make it so we don't consider the partial index as being something we can\n> > use (see the discussion at the end of the other sub-thread).\n> \n> Some simple experiments here don't find that a domain-type column prevents\n> use of the partial index. So it's still not entirely clear what's\n> happening for the OP. I concur with Jeff's suggestion to try forcing\n> use of the desired index, and see whether it happens at all and what\n> the cost estimate is.\n\nOnce burned, twice shy, I suppose- considering we weren't given the\nactual DDL the first round, I'm guessing there's other differences.\n\n> I'm also wondering exactly which Postgres version this is.\n\nAlso a good question.\n\nThanks,\n\nStephen", "msg_date": "Sat, 25 Apr 2020 08:02:46 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does not choose my indexes well" }, { "msg_contents": "I'm also wondering exactly which Postgres version this is.\n\nAlso a good question.\n\nThanks,\n\nStephen\n\npostgresql-12/bionic-pgdg,now 12.2-2.pgdg18.04+1 amd64 [instalado]\npostgresql-client-12/bionic-pgdg,now 12.2-2.pgdg18.04+1 amd64 \n[instalado, autom�tico]\npostgresql-client-common/bionic-pgdg,bionic-pgdg,now 213.pgdg18.04+1 all \n[instalado, autom�tico]\npostgresql-common/bionic-pgdg,bionic-pgdg,now 213.pgdg18.04+1 all \n[instalado, autom�tico]\npostgresql-doc-11/bionic-pgdg,bionic-pgdg,now 11.7-2.pgdg18.04+1 all \n[instalado]", "msg_date": "Sat, 25 Apr 2020 21:47:52 +0200", "msg_from": "Arcadio Ortega Reinoso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL does not choose my indexes well" } ]
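A short sketch of the try-it-and-see approach suggested above, using the table and index names from this thread. As noted, DROP INDEX takes an exclusive lock that is held until the transaction ends, so this should not be run casually against a busy production table:

BEGIN;
-- Temporarily hide the full index the planner keeps choosing.
DROP INDEX idx_tabla_entidad;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM entidad WHERE cod_tabla = 4;
-- Nothing is permanently dropped.
ROLLBACK;

If the plan then falls back to a sequential scan instead of using idx_entidad_tabla_4, that suggests the planner does not believe the partial index's predicate matches the query, which is where the discussion of the domain type and the cod_tabla::bigint cast in the predicate comes in.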
[ { "msg_contents": "We have production database that has slow queries because of the query get\nall columns even if I'm using only one column.\nThe result is slow for tables that there are too much columns\nThe weird part is that there is environment that I can't reproduce it even\nif they are using the same postgresql.conf\nI didn't find what is the variant/configuration to avoid it\nI could reproduce it using the official docker image of postgresql\n\n* Steps to reproduce it\n\n1. Run the following script:\n docker run --name psql1 -d -e POSTGRES_PASSWORD=pwd postgres\n docker exec -it --user=postgres psql1 psql\n # Into docker container\n CREATE DATABASE db;\n \\connect db;\n CREATE TABLE link (\n ID serial PRIMARY KEY,\n url VARCHAR (255) NOT NULL,\n name VARCHAR (255) NOT NULL,\n description VARCHAR (255),\n rel VARCHAR (50)\n );\n EXPLAIN (ANALYZE, VERBOSE, BUFFERS)\n SELECT l1.url\n FROM link l1\n JOIN link l2\n ON l1.url=l2.url;\n\n2. See result of the Query Plan:\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------\n Hash Join (cost=10.90..21.85 rows=40 width=516) (actual\ntime=0.080..0.081 rows=1 loops=1)\n Output: l1.url\n Hash Cond: ((l1.url)::text = (l2.url)::text)\n Buffers: shared hit=5\n -> Seq Scan on public.link l1 (cost=0.00..10.40 rows=40 width=516)\n(actual time=0.010..0.011 rows=1 loops=1)\n* Output: l1.id <http://l1.id>, l1.url, l1.name\n<http://l1.name>, l1.description, l1.rel*\n Buffers: shared hit=1\n -> Hash (cost=10.40..10.40 rows=40 width=516) (actual\ntime=0.021..0.021 rows=1 loops=1)\n Output: l2.url\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on public.link l2 (cost=0.00..10.40 rows=40\nwidth=516) (actual time=0.010..0.011 rows=1 loops=1)\n Output: l2.url\n Buffers: shared hit=1\n Planning Time: 0.564 ms\n Execution Time: 0.142 ms\n\n3. Notice that I'm using only the column \"url\" for \"JOIN\" and \"SELECT\"\nsection,\nbut the \"Output\" section is returning all columns.\n\nIs there a manner to avoid returning all columns in order to get a better\nperformance?\n\nThank you in advance\n\n* PostgreSQL version:\n\n psql postgres -c \"SELECT version()\"\n PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n\nChanges made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.\n without changes\n\nOperating system and version:\n cat /etc/os-release\n PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"\n\n-- \nMoisés López\n@moylop260\n\nWe have production database that has slow queries because of the query get all columns even if I'm using only one column.The result is slow for tables that there are too much columnsThe weird part is that there is environment that I can't reproduce it even if they are using the same postgresql.confI didn't find what is the variant/configuration to avoid itI could reproduce it using the official docker image of postgresql* Steps to reproduce it1. 
Run the following script:    docker run --name psql1 -d -e POSTGRES_PASSWORD=pwd postgres    docker exec -it --user=postgres psql1 psql    # Into docker container    CREATE DATABASE db;    \\connect db;    CREATE TABLE link (    ID serial PRIMARY KEY,    url VARCHAR (255) NOT NULL,    name VARCHAR (255) NOT NULL,    description VARCHAR (255),    rel VARCHAR (50)    );    EXPLAIN (ANALYZE, VERBOSE, BUFFERS)         SELECT l1.url        FROM link l1        JOIN link l2          ON l1.url=l2.url;2. See result of the Query Plan:    QUERY PLAN    -------------------------------------------------------------------------------------------    Hash Join  (cost=10.90..21.85 rows=40 width=516) (actual time=0.080..0.081 rows=1 loops=1)    Output: l1.url    Hash Cond: ((l1.url)::text = (l2.url)::text)    Buffers: shared hit=5    ->  Seq Scan on public.link l1  (cost=0.00..10.40 rows=40 width=516) (actual time=0.010..0.011 rows=1 loops=1)            Output: l1.id, l1.url, l1.name, l1.description, l1.rel            Buffers: shared hit=1    ->  Hash  (cost=10.40..10.40 rows=40 width=516) (actual time=0.021..0.021 rows=1 loops=1)            Output: l2.url            Buckets: 1024  Batches: 1  Memory Usage: 9kB            Buffers: shared hit=1            ->  Seq Scan on public.link l2  (cost=0.00..10.40 rows=40 width=516) (actual time=0.010..0.011 rows=1 loops=1)                Output: l2.url                Buffers: shared hit=1    Planning Time: 0.564 ms    Execution Time: 0.142 ms3. Notice that I'm using only the column \"url\" for \"JOIN\" and \"SELECT\" section,but the \"Output\" section is returning all columns.Is there a manner to avoid returning all columns in order to get a better performance?Thank you in advance* PostgreSQL version:    psql postgres -c \"SELECT version()\"        PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bitChanges made to the settings in the postgresql.conf file:  see Server Configuration for a quick way to list them all.    without changesOperating system and version:    cat /etc/os-release        PRETTY_NAME=\"Debian GNU/Linux 10 (buster)\"-- Moisés López@moylop260", "msg_date": "Fri, 24 Apr 2020 16:11:12 -0500", "msg_from": "Moises Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "The query plan get all columns but I'm using only one column." }, { "msg_contents": "Moises Lopez <[email protected]> writes:\n> -> Seq Scan on public.link l1 (cost=0.00..10.40 rows=40 width=516)\n> (actual time=0.010..0.011 rows=1 loops=1)\n> * Output: l1.id <http://l1.id>, l1.url, l1.name\n> <http://l1.name>, l1.description, l1.rel*\n\nThis is normal; it is not a bug, and it is not a source of performance\nissues either. The planner is choosing to do that to avoid a projection\nstep in this plan node, because there's no need for one. 
On the other\nscan, where it *is* important to project out just the required columns to\nminimize the size of the hash table above the scan, it does do so:\n\n> -> Seq Scan on public.link l2 (cost=0.00..10.40 rows=40\n> width=516) (actual time=0.010..0.011 rows=1 loops=1)\n> Output: l2.url\n\n> Is there a manner to avoid returning all columns in order to get a better\n> performance?\n\nYou have not shown us anything about what your actual performance\nissue is, but this isn't it.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 Apr 2020 10:24:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The query plan get all columns but I'm using only one column." }, { "msg_contents": "The example is nonsensical so I expect it is too contrived to be useful for\nanalyzing the actual problem.\n\nAdditionally, the total query time is under 1ms and most of it is planning\ntime. Use a prepared statement or do something else to reduce planning time\nlike reducing statistics target if that actually makes sense for your use\ncase.\n\nElse, show us something much closer to the real problem.\n\nThe example is nonsensical so I expect it is too contrived to be useful for analyzing the actual problem.Additionally, the total query time is under 1ms and most of it is planning time. Use a prepared statement or do something else to reduce planning time like reducing statistics target if that actually makes sense for your use case.Else, show us something much closer to the real problem.", "msg_date": "Sat, 25 Apr 2020 14:40:15 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The query plan get all columns but I'm using only one column." 
}, { "msg_contents": "Hello,\nThanks for reply\n\nI have 2 environments production and staging.\n\nThe pg_settings result for both environment is the same:\nname setting min_val max_val context\neffective_cache_size 3,145,728 1 2,147,483,647 user\nshared_buffers 1,048,576 16 1,073,741,823 postmaster\nwork_mem 5,592 64 2,147,483,647 user\nI have created a backup from production and restored it in staging.\n\nFor staging I have configured the following extra parameters:\n# config for testing environment only\nfsync=off\nfull_page_write=off\ncheckpoint_timeout=45min\nsynchronous_commit=off\nautovacuum=off\n\n\nFor staging the query plans was:\n[{'QUERY PLAN': 'Aggregate (cost=7433.20..7433.21 rows=1 width=8) (actual\ntime=56.372..56.372 rows=1 loops=1)'},\n {'QUERY PLAN': ' Output: count(product_template.id)'},\n {'QUERY PLAN': ' Buffers: shared hit=3580'},\n {'QUERY PLAN': ' -> Hash Right Join (cost=3695.08..7349.06 rows=33656\nwidth=4) (actual time=32.039..54.076 rows=33709 loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id'},\n {'QUERY PLAN': ' Hash Cond: (ir_translation.res_id = product_template.id\n)'},\n {'QUERY PLAN': ' Buffers: shared hit=3580'},\n {'QUERY PLAN': ' -> Bitmap Heap Scan on public.ir_translation\n(cost=1128.80..4459.54 rows=24187 width=4) (actual time=6.143..18.122\nrows=33293 loops=1)'},\n {'QUERY PLAN': ' Output: ir_translation.id, ir_translation.lang,\nir_translation.src, ir_translation.type, ir_translation.res_id,\nir_translation.value, ir_translation.name, ir_translation.module,\nir_translation.state, ir_translation.comments'},\n {'QUERY PLAN': \" Recheck Cond: (((ir_translation.name)::text =\n'product.template,name'::text) AND ((ir_translation.lang)::text =\n'es_CR'::text) AND ((ir_translation.type)::text = 'model'::text))\"},\n {'QUERY PLAN': \" Filter: (ir_translation.value <> ''::text)\"},\n {'QUERY PLAN': ' Heap Blocks: exact=1632'},\n {'QUERY PLAN': ' Buffers: shared hit=1872'},\n {'QUERY PLAN': ' -> Bitmap Index Scan on ir_translation_ltn\n(cost=0.00..1122.76 rows=24187 width=0) (actual time=5.960..5.960\nrows=33293 loops=1)'},\n {'QUERY PLAN': \" Index Cond: (((ir_translation.name)::text =\n'product.template,name'::text) AND ((ir_translation.lang)::text =\n'es_CR'::text) AND ((ir_translation.type)::text = 'model'::text))\"},\n {'QUERY PLAN': ' Buffers: shared hit=240'},\n {'QUERY PLAN': ' -> Hash (cost=2145.57..2145.57 rows=33656 width=4)\n(actual time=25.724..25.724 rows=33709 loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id'},\n {'QUERY PLAN': ' Buckets: 65536 Batches: 1 Memory Usage: 1698kB'},\n {'QUERY PLAN': ' Buffers: shared hit=1708'},\n {'QUERY PLAN': ' -> Seq Scan on public.product_template\n(cost=0.00..2145.57 rows=33656 width=4) (actual time=0.015..19.301\nrows=33709 loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id'},\n {'QUERY PLAN': \" Filter: (product_template.active AND\n((product_template.type)::text = ANY ('{consu,product}'::text[])))\"},\n {'QUERY PLAN': ' Rows Removed by Filter: 1297'},\n {'QUERY PLAN': ' Buffers: shared hit=1708'},\n {'QUERY PLAN': 'Planning time: 0.782 ms'},\n {'QUERY PLAN': 'Execution time: 56.441 ms'}]\n\nFor production the query plan was:\n[{'QUERY PLAN': 'Aggregate (cost=2157.08..2157.09 rows=1 width=8) (actual\ntime=53219.763..53219.763 rows=1 loops=1)'},\n {'QUERY PLAN': ' Output: count(product_template.id)'},\n {'QUERY PLAN': ' Buffers: shared hit=27280'},\n {'QUERY PLAN': ' -> Nested Loop Left Join (cost=0.42..2156.64 rows=175\nwidth=4) (actual time=16.755..53215.383 rows=33709 
loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id'},\n {'QUERY PLAN': ' Inner Unique: true'},\n {'QUERY PLAN': ' Join Filter: (product_template.id =\nir_translation.res_id)'},\n {'QUERY PLAN': ' Rows Removed by Join Filter: 576388512'},\n {'QUERY PLAN': ' Buffers: shared hit=27280'},\n {'QUERY PLAN': ' -> Seq Scan on public.product_template\n(cost=0.00..2145.57 rows=175 width=4) (actual time=0.016..30.750 rows=33709\nloops=1)'},\n {'QUERY PLAN': ' Output: product_template.id, product_template.create_uid,\nproduct_template.create_date, product_template.write_date,\nproduct_template.write_uid, product_template.supply_method,\nproduct_template.uos_id, product_template.list_price,\nproduct_template.weight, product_template.mes_type,\nproduct_template.uom_id, product_template.description_purchase,\nproduct_template.uos_coeff, product_template.purchase_ok,\nproduct_template.company_id, product_template.name, product_template.state,\nproduct_template.loc_rack, product_template.uom_po_id,\nproduct_template.type, product_template.description,\nproduct_template.loc_row, product_template.description_sale,\nproduct_template.procure_method, product_template.rental,\nproduct_template.sale_ok, product_template.sale_delay,\nproduct_template.loc_case, product_template.produce_delay,\nproduct_template.categ_id, product_template.volume,\nproduct_template.active, product_template.color,\nproduct_template.track_incoming, product_template.track_outgoing,\nproduct_template.track_all, product_template.track_production,\nproduct_template.sale_line_warn, product_template.sale_line_warn_msg,\nproduct_template.purchase_line_warn,\nproduct_template.purchase_line_warn_msg, product_template.sequence,\nproduct_template.invoice_policy, product_template.service_type,\nproduct_template.description_picking, product_template.tracking,\nproduct_template.recurring_invoice, product_template.purchase_method,\nproduct_template.purchase_requisition, product_template.default_code,\nproduct_template.expense_policy, product_template.location_id,\nproduct_template.warehouse_id, product_template.hs_code,\nproduct_template.responsible_id, product_template.description_pickingout,\nproduct_template.description_pickingin,\nproduct_template.subscription_template_id,\nproduct_template.service_tracking,\nproduct_template.message_main_attachment_id,\nproduct_template.service_to_purchase, product_template.l10n_cr_uom_id,\nproduct_template.l10n_cr_tariff_heading'},\n {'QUERY PLAN': \" Filter: (product_template.active AND\n((product_template.type)::text = ANY ('{consu,product}'::text[])))\"},\n {'QUERY PLAN': ' Rows Removed by Filter: 1297'},\n {'QUERY PLAN': ' Buffers: shared hit=1708'},\n {'QUERY PLAN': ' -> Materialize (cost=0.42..8.45 rows=1 width=4) (actual\ntime=0.000..0.650 rows=17100 loops=33709)'},\n {'QUERY PLAN': ' Output: ir_translation.res_id'},\n {'QUERY PLAN': ' Buffers: shared hit=25572'},\n {'QUERY PLAN': ' -> Index Scan using ir_translation_unique on\npublic.ir_translation (cost=0.42..8.44 rows=1 width=4) (actual\ntime=0.039..21.429 rows=33293 loops=1)'},\n {'QUERY PLAN': ' Output: ir_translation.res_id'},\n {'QUERY PLAN': \" Index Cond: (((ir_translation.type)::text =\n'model'::text) AND ((ir_translation.name)::text =\n'product.template,name'::text) AND ((ir_translation.lang)::text =\n'es_CR'::text))\"},\n {'QUERY PLAN': \" Filter: (ir_translation.value <> ''::text)\"},\n {'QUERY PLAN': ' Buffers: shared hit=25572'},\n {'QUERY PLAN': 'Planning time: 0.615 ms'},\n {'QUERY PLAN': 'Execution time: 53219.965 ms'}]\n\n\nI 
have ran a manual \"vacuum (VERBOSE, ANALYZE) product_template;\"\nand \"vacuum (VERBOSE, ANALYZE) ir_translation;\" for production.\n\nSee the attachments production_vacuum_product_template.png and\nproduction_vacuum_ir_translation.png\n\nAfter, the query plans result for production was:\n {'QUERY PLAN': 'Aggregate (cost=7063.88..7063.89 rows=1 width=8)\n(actual time=36.513..36.514 rows=1 loops=1)'},\n {'QUERY PLAN': ' Output: count(product_template.id)'},\n {'QUERY PLAN': ' Buffers: shared hit=3580'},\n {'QUERY PLAN': ' -> Hash Left Join (cost=4745.62..6979.65 rows=33693\nwidth=4) (actual time=18.165..34.420 rows=33709 loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id'},\n {'QUERY PLAN': ' Inner Unique: true'},\n {'QUERY PLAN': ' Hash Cond: (product_template.id =\nir_translation.res_id)'},\n {'QUERY PLAN': ' Buffers: shared hit=3580'},\n {'QUERY PLAN': ' -> Seq Scan on public.product_template\n(cost=0.00..2145.57 rows=33693 width=4) (actual time=0.006..10.797\nrows=33709 loops=1)'},\n {'QUERY PLAN': ' Output: product_template.id,\nproduct_template.create_uid, product_template.create_date,\nproduct_template.write_date, product_template.write_uid,\nproduct_template.supply_method, product_template.uos_id,\nproduct_template.list_price, product_template.weight,\nproduct_template.mes_type, product_template.uom_id,\nproduct_template.description_purchase, product_template.uos_coeff,\nproduct_template.purchase_ok, product_template.company_id,\nproduct_template.name, product_template.state, product_template.loc_rack,\nproduct_template.uom_po_id, product_template.type,\nproduct_template.description, product_template.loc_row,\nproduct_template.description_sale, product_template.procure_method,\nproduct_template.rental, product_template.sale_ok,\nproduct_template.sale_delay, product_template.loc_case,\nproduct_template.produce_delay, product_template.categ_id,\nproduct_template.volume, product_template.active, product_template.color,\nproduct_template.track_incoming, product_template.track_outgoing,\nproduct_template.track_all, product_template.track_production,\nproduct_template.sale_line_warn, product_template.sale_line_warn_msg,\nproduct_template.purchase_line_warn,\nproduct_template.purchase_line_warn_msg, product_template.sequence,\nproduct_template.invoice_policy, product_template.service_type,\nproduct_template.description_picking, product_template.tracking,\nproduct_template.recurring_invoice, product_template.purchase_method,\nproduct_template.purchase_requisition, product_template.default_code,\nproduct_template.expense_policy, product_template.location_id,\nproduct_template.warehouse_id, product_template.hs_code,\nproduct_template.responsible_id, product_template.description_pickingout,\nproduct_template.description_pickingin,\nproduct_template.subscription_template_id,\nproduct_template.service_tracking,\nproduct_template.message_main_attachment_id,\nproduct_template.service_to_purchase, product_template.l10n_cr_uom_id,\nproduct_template.l10n_cr_tariff_heading'},\n {'QUERY PLAN': \" Filter: (product_template.active AND\n((product_template.type)::text = ANY ('{consu,product}'::text[])))\"},\n {'QUERY PLAN': ' Rows Removed by Filter: 1297'},\n {'QUERY PLAN': ' Buffers: shared hit=1708'},\n {'QUERY PLAN': ' -> Hash (cost=4447.50..4447.50 rows=23849 width=4)\n(actual time=18.138..18.138 rows=33293 loops=1)'},\n {'QUERY PLAN': ' Output: ir_translation.res_id'},\n {'QUERY PLAN': ' Buckets: 65536 (originally 32768) Batches: 1\n(originally 1) Memory Usage: 1683kB'},\n {'QUERY PLAN': ' 
Buffers: shared hit=1872'},\n {'QUERY PLAN': ' -> Bitmap Heap Scan on public.ir_translation\n(cost=1124.50..4447.50 rows=23849 width=4) (actual time=5.120..13.517\nrows=33293 loops=1)'},\n {'QUERY PLAN': ' Output: ir_translation.res_id'},\n {'QUERY PLAN': \" Recheck Cond: (((ir_translation.name)::text =\n'product.template,name'::text) AND ((ir_translation.lang)::text =\n'es_CR'::text) AND ((ir_translation.type)::text = 'model'::text))\"},\n {'QUERY PLAN': \" Filter: (ir_translation.value <> ''::text)\"},\n {'QUERY PLAN': ' Heap Blocks: exact=1632'},\n {'QUERY PLAN': ' Buffers: shared hit=1872'},\n {'QUERY PLAN': ' -> Bitmap Index Scan on ir_translation_ltn\n(cost=0.00..1118.54 rows=23850 width=0) (actual time=4.908..4.908\nrows=33293 loops=1)'},\n {'QUERY PLAN': \" Index Cond: (((ir_translation.name)::text =\n'product.template,name'::text) AND ((ir_translation.lang)::text =\n'es_CR'::text) AND ((ir_translation.type)::text = 'model'::text))\"},\n {'QUERY PLAN': ' Buffers: shared hit=240'},\n {'QUERY PLAN': 'Planning time: 0.363 ms'},\n {'QUERY PLAN': 'Execution time: 36.666 ms'},\n ]\n\nSo, the my problem was fixed with VACUUM in production.\nThank you!\n\n\nEl sáb., 25 abr. 2020 a las 15:40, Michael Lewis (<[email protected]>)\nescribió:\n\n> The example is nonsensical so I expect it is too contrived to be useful\n> for analyzing the actual problem.\n>\n> Additionally, the total query time is under 1ms and most of it is planning\n> time. Use a prepared statement or do something else to reduce planning time\n> like reducing statistics target if that actually makes sense for your use\n> case.\n>\n> Else, show us something much closer to the real problem.\n>\n\n\n-- \nMoisés López Calderón\nMobile: (+521) 477-752-22-30\nTwitter: @moylop260\nhangout: [email protected]\nhttp://www.vauxoo.com - Odoo Gold Partner\nTwitter: @vauxoo", "msg_date": "Wed, 29 Apr 2020 14:21:37 -0500", "msg_from": "Moises Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The query plan get all columns but I'm using only one column." }, { "msg_contents": "It is generally a very bad idea to turn off autovacuum. When it is causing\nproblems, it is likely that it needs to run more often to keep up with the\nwork, rather than not run at all. Certainly if it is turned off, it would\nbe critical to have a regularly scheduled process to vacuum analyze all\ntables.\n\n>\n\nIt is generally a very bad idea to turn off autovacuum. When it is causing problems, it is likely that it needs to run more often to keep up with the work, rather than not run at all. Certainly if it is turned off, it would be critical to have a regularly scheduled process to vacuum analyze all tables.", "msg_date": "Wed, 29 Apr 2020 13:36:35 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The query plan get all columns but I'm using only one column." }, { "msg_contents": ">\n> In staging environment we have disabled autovacuum since that it is a\n> testing environment and the database are restored very often.\n> But in production environment it is enabled autovacuum=on\n>\n> The weird case is that production was slow and staging environment was\n> faster.\n>\n\nYou haven't specified how you are doing backup and restore, but unless it\nis a byte-for-byte file copy method, then there would be no bloat on the\nrestored staging environment so no need to vacuum. 
You would want to ensure\nyou take a new statistics sample with analyze database after restore if you\naren't.\n\nIn your production system, if your configs for autovacuum settings have not\nbeen changed from the default parameters, it probably is not keeping up at\nall if the system is moderately high in terms of update/delete\ntransactions. You can check pg_stat_activity for active vacuums, change the\nparameter to log autovacuums longer than X to 0 and review the logs, or\ncheck pg_stat_user_tables to see how many autovacuums/analyze have been\ndone since you last reset those stats.\n\nIf you have tables that are in the millions or hundreds or millions of\nrows, then I would recommend decreasing autovacuum_vacuum_scale_factor from\n20% down to 1% or perhaps less and similar\nfor autovacuum_analyze_scale_factor. You can do this on individual tables\nif you have mostly small tables and just a few large ones. Else, increase\nthe threshold settings as well. The default value\nfor autovacuum_vacuum_cost_delay changed from 20ms to 2ms in PG12 so that\nmay also be prudent to do likewise if you upgraded to PG12 and kept your\nold settings, assuming your I/O system can handle it.\n\nOtherwise, if you have a period of time when the activity is low for your\ndatabase(s), then a last resort can be a daily scheduled vacuum analyze on\nall tables. Note- do not do vacuum FULL which requires an exclusive lock on\nthe table to re-write it entirely. You are just looking to mark space\nre-usable for future transactions, not recover the disk space back to the\nOS to be consumed again if autovacuum still can't keep up. pg_repack\nextension would be an option if you need to recover disk space while online.\n\nIn staging environment we have disabled autovacuum since that it is a testing environment and the database are restored very often.But in production environment it is enabled autovacuum=onThe weird case is that production was slow and staging environment was faster.You haven't specified how you are doing backup and restore, but unless it is a byte-for-byte file copy method, then there would be no bloat on the restored staging environment so no need to vacuum. You would want to ensure you take a new statistics sample with analyze database after restore if you aren't.In your production system, if your configs for autovacuum settings have not been changed from the default parameters, it probably is not keeping up at all if the system is moderately high in terms of update/delete transactions. You can check pg_stat_activity for active vacuums, change the parameter to log autovacuums longer than X to 0 and review the logs, or check pg_stat_user_tables to see how many autovacuums/analyze have been done since you last reset those stats.If you have tables that are in the millions or hundreds or millions of rows, then I would recommend decreasing autovacuum_vacuum_scale_factor from 20% down to 1% or perhaps less and similar for autovacuum_analyze_scale_factor. You can do this on individual tables if you have mostly small tables and just a few large ones. Else, increase the threshold settings as well. The default value for autovacuum_vacuum_cost_delay changed from 20ms to 2ms in PG12 so that may also be prudent to do likewise if you upgraded to PG12 and kept your old settings, assuming your I/O system can handle it.Otherwise, if you have a period of time when the activity is low for your database(s), then a last resort can be a daily scheduled vacuum analyze on all tables. 
Note- do not do vacuum FULL which requires an exclusive lock on the table to re-write it entirely. You are just looking to mark space re-usable for future transactions, not recover the disk space back to the OS to be consumed again if autovacuum still can't keep up. pg_repack extension would be an option if you need to recover disk space while online.", "msg_date": "Thu, 30 Apr 2020 09:52:06 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The query plan get all columns but I'm using only one column." }, { "msg_contents": "Michael,\n\nYour complete explanation is very helpful!\nI appreciate it\nThank you so much!\n\nRegards!\n\n\nEl jue., 30 abr. 2020 a las 10:52, Michael Lewis (<[email protected]>)\nescribió:\n\n> In staging environment we have disabled autovacuum since that it is a\n>> testing environment and the database are restored very often.\n>> But in production environment it is enabled autovacuum=on\n>>\n>> The weird case is that production was slow and staging environment was\n>> faster.\n>>\n>\n> You haven't specified how you are doing backup and restore, but unless it\n> is a byte-for-byte file copy method, then there would be no bloat on the\n> restored staging environment so no need to vacuum. You would want to ensure\n> you take a new statistics sample with analyze database after restore if you\n> aren't.\n>\n> In your production system, if your configs for autovacuum settings have\n> not been changed from the default parameters, it probably is not keeping up\n> at all if the system is moderately high in terms of update/delete\n> transactions. You can check pg_stat_activity for active vacuums, change the\n> parameter to log autovacuums longer than X to 0 and review the logs, or\n> check pg_stat_user_tables to see how many autovacuums/analyze have been\n> done since you last reset those stats.\n>\n> If you have tables that are in the millions or hundreds or millions of\n> rows, then I would recommend decreasing autovacuum_vacuum_scale_factor from\n> 20% down to 1% or perhaps less and similar\n> for autovacuum_analyze_scale_factor. You can do this on individual tables\n> if you have mostly small tables and just a few large ones. Else, increase\n> the threshold settings as well. The default value\n> for autovacuum_vacuum_cost_delay changed from 20ms to 2ms in PG12 so that\n> may also be prudent to do likewise if you upgraded to PG12 and kept your\n> old settings, assuming your I/O system can handle it.\n>\n> Otherwise, if you have a period of time when the activity is low for your\n> database(s), then a last resort can be a daily scheduled vacuum analyze on\n> all tables. Note- do not do vacuum FULL which requires an exclusive lock on\n> the table to re-write it entirely. You are just looking to mark space\n> re-usable for future transactions, not recover the disk space back to the\n> OS to be consumed again if autovacuum still can't keep up. pg_repack\n> extension would be an option if you need to recover disk space while online.\n>\n\n\n-- \nMoisés López Calderón\nMobile: (+521) 477-752-22-30\nTwitter: @moylop260\nhangout: [email protected]\nhttp://www.vauxoo.com - Odoo Gold Partner\nTwitter: @vauxoo\n\nMichael,Your complete explanation is very helpful!I appreciate itThank you so much!Regards!El jue., 30 abr. 
2020 a las 10:52, Michael Lewis (<[email protected]>) escribió:In staging environment we have disabled autovacuum since that it is a testing environment and the database are restored very often.But in production environment it is enabled autovacuum=onThe weird case is that production was slow and staging environment was faster.You haven't specified how you are doing backup and restore, but unless it is a byte-for-byte file copy method, then there would be no bloat on the restored staging environment so no need to vacuum. You would want to ensure you take a new statistics sample with analyze database after restore if you aren't.In your production system, if your configs for autovacuum settings have not been changed from the default parameters, it probably is not keeping up at all if the system is moderately high in terms of update/delete transactions. You can check pg_stat_activity for active vacuums, change the parameter to log autovacuums longer than X to 0 and review the logs, or check pg_stat_user_tables to see how many autovacuums/analyze have been done since you last reset those stats.If you have tables that are in the millions or hundreds or millions of rows, then I would recommend decreasing autovacuum_vacuum_scale_factor from 20% down to 1% or perhaps less and similar for autovacuum_analyze_scale_factor. You can do this on individual tables if you have mostly small tables and just a few large ones. Else, increase the threshold settings as well. The default value for autovacuum_vacuum_cost_delay changed from 20ms to 2ms in PG12 so that may also be prudent to do likewise if you upgraded to PG12 and kept your old settings, assuming your I/O system can handle it.Otherwise, if you have a period of time when the activity is low for your database(s), then a last resort can be a daily scheduled vacuum analyze on all tables. Note- do not do vacuum FULL which requires an exclusive lock on the table to re-write it entirely. You are just looking to mark space re-usable for future transactions, not recover the disk space back to the OS to be consumed again if autovacuum still can't keep up. pg_repack extension would be an option if you need to recover disk space while online.\n-- Moisés López CalderónMobile: (+521) 477-752-22-30Twitter: @moylop260hangout: [email protected]://www.vauxoo.com - Odoo Gold PartnerTwitter: @vauxoo", "msg_date": "Thu, 30 Apr 2020 11:26:27 -0500", "msg_from": "Moises Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The query plan get all columns but I'm using only one column." } ]
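The per-table autovacuum tuning and the monitoring checks suggested in the thread above can be sketched concretely. This is only a minimal sketch: product_template is a table name taken from the thread, and the 0.01 scale factors stand in for the "about 1%" figure mentioned in the advice (the global defaults being overridden are 0.2 and 0.1).

    -- How far behind is autovacuum on each table, and when did it last run?
    SELECT relname, n_dead_tup, last_autovacuum, last_autoanalyze,
           autovacuum_count, autoanalyze_count
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

    -- Per-table override: trigger vacuum/analyze at roughly 1% of changed
    -- rows instead of the global 20% / 10% defaults.
    ALTER TABLE product_template SET (
        autovacuum_vacuum_scale_factor  = 0.01,
        autovacuum_analyze_scale_factor = 0.01
    );

    -- Make every autovacuum run visible in the server log.
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    SELECT pg_reload_conf();

The logging change needs only a configuration reload, not a restart, so it can be applied on a running production system.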
[ { "msg_contents": "Hello,\n\nI have a performance/regression problem on a complicated query (placed \ninto a function) when some tables are empty.\n\nOn Pg 11.6 the query takes 121ms\nOn Pg 12.2 it takes 11450ms\n\nI first sent a message to the pgsql-bugs mailing list :\n\nhttps://www.postgresql.org/message-id/16390-e9866af103d5a03a%40postgresql.org\n\nBut was redirected here. I was also told to post the actual problem, not \na simplified version (called \"toy tables\" by Tom Lane).\n\n\nThis is the first line of the plan :\n\nSort \n(cost=812647915435832343755929914826593174290432.00..812652524250886044745873982078186103504896.00 \nrows=1843526021485360431505148111877616697344 width=1362) (actual \ntime=1.443..1.443 rows=0 loops=1)\n\nThe database is (full) vacuumed and analyzed.\n\nSince the query plan is more than 560 lines and the query itself ~400 \nlines, I'm not sure it's efficient to post everything in an email.\n\nI have rather prepared a .backup of the database in custom format (made \nwith PG 11.6), dropping all big unused tables so that it's ~500Kb. It is \navailable here :\n\nhttp://freesofts.thefreecat.org/sage11demo_simple.backup\n\n\nIn order to test the problem, you can just call :\n\nselect * from findcontracts('{13}',7,true);\n\n\n\nIf it is more convenient to post everything in an email, just let me know.\n\nThanks for your help.\n\n\n", "msg_date": "Mon, 27 Apr 2020 19:49:50 +0200", "msg_from": "Jean-Christophe Boggio <[email protected]>", "msg_from_op": true, "msg_subject": "Recursive query slow on strange conditions" }, { "msg_contents": "On Mon, Apr 27, 2020 at 07:49:50PM +0200, Jean-Christophe Boggio wrote:\n> I have a performance/regression problem on a complicated query (placed into\n> a function) when some tables are empty.\n\n> I first sent a message to the pgsql-bugs mailing list :\n> https://www.postgresql.org/message-id/16390-e9866af103d5a03a%40postgresql.org\n=> BUG #16390: Regression between 12.2 and 11.6 on a recursive query : very slow and overestimation of rows\n\nThe most obvious explanation is due to this change:\nhttps://www.postgresql.org/docs/12/release-12.html\n|Allow common table expressions (CTEs) to be inlined into the outer query (Andreas Karlsson, Andrew Gierth, David Fetter, Tom Lane)\n|Specifically, CTEs are automatically inlined if they have no side-effects, are not recursive, and are referenced only once in the query. Inlining can be prevented by specifying MATERIALIZED, or forced for multiply-referenced CTEs by specifying NOT MATERIALIZED. Previously, CTEs were never inlined and were always evaluated before the rest of the query.\n\nSo you could try the query with \".. AS MATERIALIZED\".\n\n> On Pg 11.6 the query takes 121ms\n> On Pg 12.2 it takes 11450ms\n> \n> Since the query plan is more than 560 lines and the query itself ~400 lines,\n> I'm not sure it's efficient to post everything in an email.\n\nYou can also send a link to the plan on https://explain.depesz.com/\nWhich maybe more people will look at than if it requires downloading and\nrestoring a DB.\n\nFYI, I had a similar issue:\nhttps://www.postgresql.org/message-id/flat/20171110204043.GS8563%40telsasoft.com\n\nAnd my solution was to 1) create an child table: CREATE TABLE x_child() INHERITS(x)\nand, 2) change the query to use select from ONLY. 
(1) allows the planner to\nbelieve that the table really is empty, a conclusion it otherwise avoids and\n(2) avoids decending into the child (for which the planner would likewise avoid\nthe conclusion that it's actually empty).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Apr 2020 13:10:34 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query slow on strange conditions" }, { "msg_contents": "På mandag 27. april 2020 kl. 20:10:34, skrev Justin Pryzby <\[email protected] <mailto:[email protected]>>: \nOn Mon, Apr 27, 2020 at 07:49:50PM +0200, Jean-Christophe Boggio wrote:\n > I have a performance/regression problem on a complicated query (placed into\n > a function) when some tables are empty.\n\n > I first sent a message to the pgsql-bugs mailing list :\n > \nhttps://www.postgresql.org/message-id/16390-e9866af103d5a03a%40postgresql.org\n => BUG #16390: Regression between 12.2 and 11.6 on a recursive query : very \nslow and overestimation of rows\n\n The most obvious explanation is due to this change:\n https://www.postgresql.org/docs/12/release-12.html\n |Allow common table expressions (CTEs) to be inlined into the outer query \n(Andreas Karlsson, Andrew Gierth, David Fetter, Tom Lane)\n |Specifically, CTEs are automatically inlined if they have no side-effects, \nare not recursive, and are referenced only once in the query. Inlining can be \nprevented by specifying MATERIALIZED, or forced for multiply-referenced CTEs by \nspecifying NOT MATERIALIZED. Previously, CTEs were never inlined and were \nalways evaluated before the rest of the query. \n\nThe OP's query is recursive, sow no inlining will take place... \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Mon, 27 Apr 2020 21:37:41 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query slow on strange conditions" }, { "msg_contents": "> You can also send a link to the plan on https://explain.depesz.com/\n> Which maybe more people will look at than if it requires downloading and\n> restoring a DB.\nThanks for the advice.\n\nHere is the plan for PG 11.6 : https://explain.depesz.com/s/Ewt8\n\nAnd the one for PG 12.2 : https://explain.depesz.com/s/oPAu\n\n\nNow for the schemas.\n\nCREATE OR REPLACE FUNCTION tisnofcountrygroup(p_idcountrygroup INT) \nRETURNS INT[] AS ...\n\n simple function that does a SELECT ARRAY_AGG(INT) on table countrygroups\n\n\\d countrygroups (table has 0 row)\n Table « \npublic.countrygroups »\n Colonne | Type | Collationnement | NULL-able \n| Par défaut\n----------------+------------------------+-----------------+-----------+-------------------------------------------------------\n idcountrygroup | integer | | not null \n| nextval('countrygroups_idcountrygroup_seq'::regclass)\n name | character varying(150) | | |\nIndex :\n \"countrygroups_pkey\" PRIMARY KEY, btree (idcountrygroup)\nRéférencé par :\n TABLE \"contrats\" CONSTRAINT \"contrats_idcountrygroup_fkey\" FOREIGN \nKEY (idcountrygroup) REFERENCES countrygroups(idcountrygroup)\n TABLE \"thirdparty\" CONSTRAINT \"thirdparty_idcountrygroup_fkey\" \nFOREIGN KEY (idcountrygroup) REFERENCES countrygroups(idcountrygroup)\n TABLE \"tisningroups\" CONSTRAINT \"tisningroups_idcountrygroup_fkey\" \nFOREIGN KEY (idcountrygroup) REFERENCES countrygroups(idcountrygroup) ON \nDELETE CASCADE\n\n\n\n\\d thirdparty (7 rows)\n Table « public.thirdparty »\n Colonne | Type | Collationnement | \nNULL-able | Par 
défaut\n-------------------------+------------------------+-----------------+-----------+---------------------------------------\n idthirdparty | integer | | \nnot null | nextval('providers_id_seq'::regclass)\n nom | character varying(50) | | \nnot null |\n idterritoire | integer | | \n |\n pcttokeep | double precision | | \n | 100.0\n devise | character varying(3) | | \n |\n variante | character varying(100) | | \n |\n canreceivecatalogues | boolean | | \n | false\n idcountrygroup | integer | | \n |\n viewsubpublishers | boolean | | \n | false\n catexpchrono | boolean | | \n | false\n catexpcwr | boolean | | \n | false\n catexpcwr_receiver | character varying(5) | | \n |\n catexpcs | boolean | | \n | false\n catexptsul | boolean | | \n | false\n catexpboem | boolean | | \n | false\n categories | character varying(100) | | \n |\n catexpignoreterritories | boolean | | \n | false\nIndex :\n \"providers_pkey\" PRIMARY KEY, btree (idthirdparty)\n\n\n\n\\d territoires (268 rows)\n Table « public.territoires »\n Colonne | Type | Collationnement | \nNULL-able | Par défaut\n-----------------------+------------------------+-----------------+-----------+-----------------------------------------\n idterritoire | integer | | not \nnull | nextval('territoires_id_seq'::regclass)\n tisn | integer | | \n |\n nom | character varying(50) | | \n |\n smallcode | character varying(3) | | \n |\n longcode | character varying(8) | | \n |\n nom_en | character varying(100) | | \n |\n frenchsocialsecurity | boolean | | \n | false\n frenchvat | boolean | | \n | false\n frenchbroadcastagessa | boolean | | \n | false\n withtaxdep | double precision | | \n | 0.0\n withtaxdrm | double precision | | \n | 0.0\n stmtinenglish | boolean | | \n | true\nIndex :\n \"territoires_pkey\" PRIMARY KEY, btree (idterritoire)\n \"ix_tisn\" UNIQUE, btree (tisn)\n\n\n\n\\d copyrightad (280 rows)\n Table « public.copyrightad »\n Colonne | Type | Collationnement | \nNULL-able | Par défaut\n--------------------+-----------------------------+-----------------+-----------+-----------------------------------------\n idcopyright | integer | | \nnot null | nextval('copyrightad_id_seq'::regclass)\n idoeu | integer | | \nnot null |\n idad | integer | | \n |\n parent | integer | | \n |\n idimport | integer | | \n |\n role | character varying(3) | | \n |\n qpdepsacem | double precision | | \n |\n qpdrmsacem | double precision | | \n |\n qpphonosacem | double precision | | \n |\n mechowned | double precision | | \n |\n perfowned | double precision | | \n |\n syncowned | double precision | | \n |\n mechcoll | double precision | | \n |\n perfcoll | double precision | | \n |\n synccoll | double precision | | \n |\n idterritoire | integer | | \n |\n lettrage | character varying(1) | | \n |\n droitsreserves | boolean | | \n |\n avanceinitiale | double precision | | \n |\n ediacompteauteur | boolean | | \n |\n iscontrolled | boolean | | \n | false\n idcg | integer | | \n |\n idthirdparty | integer | | \n |\n qpspecialsplitrate | double precision | | \n |\n tisn | integer | | \n |\n tmpmatchparent | character varying(50) | | \n |\n creator | text | | \n | SESSION_USER\n created | timestamp without time zone | | \n | now()\n iscoedmanager | boolean | | \n | false\nIndex :\n \"copyrightad_pkey\" PRIMARY KEY, btree (idcopyright)\n \"copyrightad_idad\" btree (idad)\n \"copyrightad_idimport\" btree (idimport)\n \"copyrightad_idoeu\" btree (idoeu)\n \"copyrightad_parent\" btree (parent)\n \"ix_copyright_idad\" btree (idad)\n \"ix_copyright_idoeu\" btree 
(idoeu)\n\n\n\\d contrats (2 rows, none satisfying the condition in the query)\n Table « public.contrats »\n Colonne | Type | Collationnement | NULL-able \n| Par défaut\n----------------+------------------------+-----------------+-----------+---------------------------------------------\n idcontrat | integer | | not null \n| nextval('contrats_idcontrat_seq'::regclass)\n idsociete | integer | | |\n libelle | character varying(100) | | |\n territoire | character varying(255) | | |\n notes | text | | |\n datedebut | date | | |\n datefin | date | | |\n codeclegest | character varying(10) | | |\n idadgest | integer | | |\n codezp | character varying(20) | | |\n nivdec | integer | | |\n etage | integer | | not null | 1\n idtypecontrat | integer | | not null |\n idcountrygroup | integer | | |\n alsoglobal | boolean | | \n| false\nIndex :\n \"contrats_pkey\" PRIMARY KEY, btree (idcontrat)\n\n\n\\d ctract (0 row)\n Table « public.ctract »\n Colonne | Type | Collationnement | NULL-able | \n Par défaut\n------------+------------------+-----------------+-----------+------------------------------------------\n idctract | integer | | not null | \nnextval('ctract_idctract_seq'::regclass)\n idcontrat | integer | | not null |\n idad | integer | | |\n isassignor | boolean | | not null |\n copubshare | double precision | | |\n idclient | integer | | |\nIndex :\n \"ctract_pkey\" PRIMARY KEY, btree (idctract)\n\n\n\\d roles (19 rows)\n Table « public.roles »\n Colonne | Type | Collationnement | NULL-able | Par \ndéfaut\n------------+-----------------------+-----------------+-----------+------------\n role | character varying(3) | | not null |\n libelle | character varying(50) | | |\n type | character varying(1) | | not null |\n libelle_en | character varying(50) | | |\nIndex :\n \"roles_pkey\" PRIMARY KEY, btree (role)\n\n\n\n\\d ad (55 rows, many fields removed for readability)\n Table « public.ad »\n Colonne | Type | \nCollationnement | NULL-able | Par défaut\n------------------------------+-----------------------------+-----------------+-----------+--------------------------------\n idad | integer | \n | not null | nextval('ad_id_seq'::regclass)\n codecle | character varying(20) | \n | |\n nom | character varying(100) | \n | |\n idclient | integer | \n | |\nIndex :\n \"ad_pkey\" PRIMARY KEY, btree (idad)\n \"i_ad_codecle\" btree (codecle)\nContraintes de clés étrangères :\n \"ad_idclient_fkey\" FOREIGN KEY (idclient) REFERENCES \nclients(idclient) ON DELETE SET NULL\n\n\n\n\\d clients (0 row)\n Table « public.clients »\n Colonne | Type | Collationnement | NULL-able | \n Par défaut\n-----------+------------------------+-----------------+-----------+-------------------------------------------\n idclient | integer | | not null | \nnextval('clients_idclient_seq'::regclass)\n name | character varying(200) | | not null |\n idsociete | integer | | |\n is_us | boolean | | | false\nIndex :\n \"clients_pkey\" PRIMARY KEY, btree (idclient)\n\n\n\n\\d sprd (249 rows)\n Table « public.sprd »\n Colonne | Type | Collationnement | \nNULL-able | Par défaut\n------------------+------------------------+-----------------+-----------+------------\n idsprd | integer | | not null |\n name | character varying(30) | | not null |\n doesperf | boolean | | not null |\n doesmech | boolean | | not null |\n country | character varying(100) | | |\n perflocalclaim | double precision | | |\n mechlocalclaim | double precision | | |\n perfforeignclaim | double precision | | |\n mechforeignclaim | double precision | | |\n tisn | integer | | 
|\n wantsagreement | boolean | | \n | false\nIndex :\n \"sprd_pkey\" PRIMARY KEY, btree (idsprd)\n\n\n\n\nJC\n\n\n\n", "msg_date": "Mon, 27 Apr 2020 22:22:33 +0200", "msg_from": "Jean-Christophe Boggio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query slow on strange conditions" }, { "msg_contents": "Hello,\n\nI have rewritten the function/query to make it a PLPGSQL function and \nsplit the query in ~20 smaller queries.\n\nNow the problem of the JIT compiler kicking in also happens on PG 11.6\nAlthough the 2 seconds induced delay is not a serious problem when I \nexecute the query for thousands of items, it really becomes one when \nquerying ONE item.\n\nIs there a way to disable JIT (I use the apt.postgresql.org repository) \nin both 11.6 and 12.2 ? I would have liked to disable it on this \nparticular query but maybe I could live with disabling JIT everywhere.\n\nThanks for your help,\n\nJC\n\n\n", "msg_date": "Mon, 4 May 2020 18:12:34 +0200", "msg_from": "Jean-Christophe Boggio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query slow on strange conditions" }, { "msg_contents": "On Mon, May 4, 2020 at 9:12 AM Jean-Christophe Boggio <\[email protected]> wrote:\n\n> Is there a way to disable JIT (I use the apt.postgresql.org repository)\n> in both 11.6 and 12.2 ? I would have liked to disable it on this\n> particular query but maybe I could live with disabling JIT everywhere.\n>\n>\nhttps://www.postgresql.org/docs/12/jit-decision.html\n\nDavid J.\n\nOn Mon, May 4, 2020 at 9:12 AM Jean-Christophe Boggio <[email protected]> wrote:Is there a way to disable JIT (I use the apt.postgresql.org repository) \nin both 11.6 and 12.2 ? I would have liked to disable it on this \nparticular query but maybe I could live with disabling JIT everywhere.https://www.postgresql.org/docs/12/jit-decision.htmlDavid J.", "msg_date": "Mon, 4 May 2020 09:20:19 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recursive query slow on strange conditions" }, { "msg_contents": "> https://www.postgresql.org/docs/12/jit-decision.html\n\nThanks a lot David, I missed that part of the doc.\n\nJC\n\n\n\n", "msg_date": "Mon, 4 May 2020 18:25:15 +0200", "msg_from": "Jean-Christophe Boggio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recursive query slow on strange conditions" } ]
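Since the slowdown in the thread above was eventually traced to JIT compilation, a minimal sketch of the different scopes at which it can be switched off may be useful. The findcontracts argument types (integer[], integer, boolean) are only inferred from the call shown earlier and may need adjusting to the real signature; the raised cost thresholds at the end are purely illustrative.

    -- For the current session only:
    SET jit = off;

    -- Only for the problematic function, leaving JIT enabled elsewhere
    -- (adjust the argument types to the function's real signature):
    ALTER FUNCTION findcontracts(integer[], integer, boolean) SET jit = off;

    -- Cluster-wide; jit is reloadable, so no restart is needed:
    ALTER SYSTEM SET jit = off;
    SELECT pg_reload_conf();

    -- Alternative: keep JIT but reserve it for genuinely expensive plans
    -- (illustrative values; the defaults are 100000 and 500000):
    ALTER SYSTEM SET jit_above_cost = 1000000;
    ALTER SYSTEM SET jit_inline_above_cost = 5000000;
    SELECT pg_reload_conf();

The per-function form is attractive here because the expensive query lives inside a PL/pgSQL function, so the rest of the workload keeps whatever JIT policy the cluster has.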
[ { "msg_contents": "Hello,\n\nI am trying to figure out the recommended settings for a PG dedicated \nmachine regarding NUMA.\n\nI assume that the shared buffers are using Huge Phages only. Please \ncorrect if I am wrong:\n\n1) postgres is started with numactl --interleave=all, in order to spread \nmemory pages evenly on nodes.\n2) wm.swappiness is left to the default 60 value, because Huge Pages \nnever swap, and we wish the idle backend to be swapped out if necessary.\n3) vm.zone_reclaim_mode = 0. I am not sure it is the right choice.\n4) kernel.numa_balancing = 1. Only if it is confirmed that it will not \naffect postgres, because started with the interleave policy.\n\nThanks\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 08:54:08 +0200", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "NUMA settings" }, { "msg_contents": "On Wed, 2020-04-29 at 08:54 +0200, Marc Rechté wrote:\n> I am trying to figure out the recommended settings for a PG dedicated \n> machine regarding NUMA.\n> \n> I assume that the shared buffers are using Huge Phages only. Please \n> correct if I am wrong:\n> \n> 1) postgres is started with numactl --interleave=all, in order to spread \n> memory pages evenly on nodes.\n> 2) wm.swappiness is left to the default 60 value, because Huge Pages \n> never swap, and we wish the idle backend to be swapped out if necessary.\n> 3) vm.zone_reclaim_mode = 0. I am not sure it is the right choice.\n> 4) kernel.numa_balancing = 1. Only if it is confirmed that it will not \n> affect postgres, because started with the interleave policy.\n\nI am not the top expert on this, but as far as I can tell:\n\n- Disabling NUMA is good if you want to run a single database cluster\n on the machine that should use all resources.\n\n If you want to run several clusters that share the resources, leaving\n NUMA support enabled might be the better thing to do.\n\n- If you can, disable NUMA in the BIOS, on as low a level as possible.\n\n- I think \"kernel.numa_balancing\" should be 0.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 29 Apr 2020 10:50:54 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NUMA settings" }, { "msg_contents": "Hi,\n\nOn 2020-04-29 10:50:54 +0200, Laurenz Albe wrote:\n> On Wed, 2020-04-29 at 08:54 +0200, Marc Recht� wrote:\n> > I am trying to figure out the recommended settings for a PG dedicated \n> > machine regarding NUMA.\n> > \n> > I assume that the shared buffers are using Huge Phages only. Please \n> > correct if I am wrong:\n> > \n> > 1) postgres is started with numactl --interleave=all, in order to spread \n> > memory pages evenly on nodes.\n> > 2) wm.swappiness is left to the default 60 value, because Huge Pages \n> > never swap, and we wish the idle backend to be swapped out if necessary.\n> > 3) vm.zone_reclaim_mode = 0. I am not sure it is the right choice.\n> > 4) kernel.numa_balancing = 1. 
Only if it is confirmed that it will not \n> > affect postgres, because started with the interleave policy.\n> \n> I am not the top expert on this, but as far as I can tell:\n> \n> - Disabling NUMA is good if you want to run a single database cluster\n> on the machine that should use all resources.\n> \n> If you want to run several clusters that share the resources, leaving\n> NUMA support enabled might be the better thing to do.\n> \n> - If you can, disable NUMA in the BIOS, on as low a level as possible.\n\nI am doubtful that that's generally going to be beneficial. I think the\nstrategy of starting postgres with interleave is probably a better\nanswer.\n\n- Andres\n\n\n", "msg_date": "Mon, 4 May 2020 09:20:42 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NUMA settings" }, { "msg_contents": "> Hi,\n> \n> On 2020-04-29 10:50:54 +0200, Laurenz Albe wrote:\n>> On Wed, 2020-04-29 at 08:54 +0200, Marc Rechtï¿œ wrote:\n>>> I am trying to figure out the recommended settings for a PG dedicated\n>>> machine regarding NUMA.\n>>>\n>>> I assume that the shared buffers are using Huge Phages only. Please\n>>> correct if I am wrong:\n>>>\n>>> 1) postgres is started with numactl --interleave=all, in order to spread\n>>> memory pages evenly on nodes.\n>>> 2) wm.swappiness is left to the default 60 value, because Huge Pages\n>>> never swap, and we wish the idle backend to be swapped out if necessary.\n>>> 3) vm.zone_reclaim_mode = 0. I am not sure it is the right choice.\n>>> 4) kernel.numa_balancing = 1. Only if it is confirmed that it will not\n>>> affect postgres, because started with the interleave policy.\n>>\n>> I am not the top expert on this, but as far as I can tell:\n>>\n>> - Disabling NUMA is good if you want to run a single database cluster\n>> on the machine that should use all resources.\n>>\n>> If you want to run several clusters that share the resources, leaving\n>> NUMA support enabled might be the better thing to do.\n>>\n>> - If you can, disable NUMA in the BIOS, on as low a level as possible.\n> \n> I am doubtful that that's generally going to be beneficial. I think the\n> strategy of starting postgres with interleave is probably a better\n> answer.\n> \n> - Andres\n> \n> \n\nThanks for answers. Further readings make me think that we should *not* \nstart postgres with numactl --interleave=all: this may have counter \nproductive effect on backends anon memory (heap, stack). IMHO, what is \nimportant is to use Huge Pages for shared buffers: they are allocated \n(reserved) by the kernel at boot time and spread evenly on all nodes. On \ntop of that they never swap.\n\nMy (temp) conclusions are following:\n\tvm.zone_reclaim_mode = 0\n\tkernel.numa_balancing = 0 (still not sure with that choice)\n\twm.swappiness = 60 (default)\n\tstart postgres as usual (no numactl)\n\n\n", "msg_date": "Tue, 5 May 2020 07:56:54 +0200", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NUMA settings" }, { "msg_contents": "On Tue, 2020-05-05 at 07:56 +0200, Marc Rechté wrote:\n> Thanks for answers. Further readings make me think that we should *not* \n> start postgres with numactl --interleave=all: this may have counter \n> productive effect on backends anon memory (heap, stack). IMHO, what is \n> important is to use Huge Pages for shared buffers: they are allocated \n> (reserved) by the kernel at boot time and spread evenly on all nodes. 
On \n> top of that they never swap.\n> \n> My (temp) conclusions are following:\n> vm.zone_reclaim_mode = 0\n> kernel.numa_balancing = 0 (still not sure with that choice)\n> wm.swappiness = 60 (default)\n> start postgres as usual (no numactl)\n\nThanks for sharing your insights.\n\nI think that \"vm.swappiness\" should be 0.\nPostgreSQL does its own memory management, any swapping by the kernel\nwould go against that.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 05 May 2020 10:00:02 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NUMA settings" }, { "msg_contents": "> On Tue, 2020-05-05 at 07:56 +0200, Marc Rechté wrote:\n>> Thanks for answers. Further readings make me think that we should *not*\n>> start postgres with numactl --interleave=all: this may have counter\n>> productive effect on backends anon memory (heap, stack). IMHO, what is\n>> important is to use Huge Pages for shared buffers: they are allocated\n>> (reserved) by the kernel at boot time and spread evenly on all nodes. On\n>> top of that they never swap.\n>>\n>> My (temp) conclusions are following:\n>> vm.zone_reclaim_mode = 0\n>> kernel.numa_balancing = 0 (still not sure with that choice)\n>> wm.swappiness = 60 (default)\n>> start postgres as usual (no numactl)\n> \n> Thanks for sharing your insights.\n> \n> I think that \"vm.swappiness\" should be 0.\n> PostgreSQL does its own memory management, any swapping by the kernel\n> would go against that.\n> \n> Yours,\n> Laurenz Albe\n> \nAs said in the post, we wish the idle backends to be swapped out if \nnecessary. Therefore lowering swappiness would produce the opposite \neffect: swapping out Linux file cache rather than backends memory.\n\n\n", "msg_date": "Tue, 5 May 2020 10:11:20 +0200", "msg_from": "=?UTF-8?Q?Marc_Recht=c3=a9?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NUMA settings" }, { "msg_contents": "On Tue, 2020-05-05 at 10:11 +0200, Marc Rechté wrote:\n> > I think that \"vm.swappiness\" should be 0.\n> > PostgreSQL does its own memory management, any swapping by the kernel\n> > would go against that.\n> > \n> > Yours,\n> > Laurenz Albe\n> > \n> As said in the post, we wish the idle backends to be swapped out if \n> necessary. Therefore lowering swappiness would produce the opposite \n> effect: swapping out Linux file cache rather than backends memory.\n\nI see. Sorry for not paying attention.\n\nAn idle backend consumes only a few MB of RAM, though.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 05 May 2020 10:25:57 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NUMA settings" } ]
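The kernel-side knobs discussed above (vm.nr_hugepages, vm.zone_reclaim_mode, kernel.numa_balancing, numactl) are set at the operating-system level, but the huge-pages half of the conclusion can at least be verified and enforced from SQL. A minimal sketch; note that huge_pages defaults to 'try', which silently falls back to ordinary pages when the kernel cannot supply enough of them.

    -- Is the server configured to use huge pages, and how large is the
    -- shared memory segment they have to cover?
    SHOW huge_pages;        -- try | on | off
    SHOW shared_buffers;

    -- Fail at startup instead of silently falling back to regular pages:
    ALTER SYSTEM SET huge_pages = 'on';
    -- huge_pages is a postmaster-level setting, so this only takes effect
    -- after a full server restart.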
[ { "msg_contents": "I need to store about 600 million rows of property addresses across\nmultiple counties. I need to have partitioning setup on the table as\nthere will be updates and inserts performed to the table frequently\nand I want the queries to have good performance.\n\n From what I understand hash partitioning would not be the right\napproach in this case, since for each query PostgreSQL has to check\nthe indexes of all partitions?\n\nWould list partitioning be suitable? if I want PostgreSQL to know\nwhich partition the row is it can directly load the relevant index\nwithout having to check other partitions. Should I be including the\npartition key in the where clause?\n\nI'd like to hear some recommendations on the best way to approach\nthis. I'm using PostgreSQL 12\n\n\n", "msg_date": "Sat, 2 May 2020 09:20:06 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Best partition type for billions of addresses" }, { "msg_contents": "On Sat, May 02, 2020 at 09:20:06AM -0400, Arya F wrote:\n> I need to store about 600 million rows of property addresses across\n> multiple counties. I need to have partitioning setup on the table as\n> there will be updates and inserts performed to the table frequently\n> and I want the queries to have good performance.\n\nI dug up the last messages about this:\nhttps://www.postgresql.org/message-id/flat/CAFoK1aztep-079Fxmaos6umR8X6m3x1K_aZLGtQGpYxfENh9%3DA%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CAFoK1azkv1Z%3DRr7ZWrJjk0RQSoF6ah%2BQMpLSSyBs1KsEiQ3%3Dvw%40mail.gmail.com\nhttps://www.postgresql.org/message-id/CAFoK1axr_T6nB8ZAq8g2QBcqv_pE%3DdsZsxyjatz8Q67k1VKAnw%40mail.gmail.com\n\n\n> From what I understand hash partitioning would not be the right\n> approach in this case, since for each query PostgreSQL has to check\n> the indexes of all partitions?\n\nIndexes are separate from partitioning. Typically, the partitioned columns are\nindexed, but it's not required.\n\nIf the partition key isn't used in your typical query, then partitioning didn't\nhelp you, and you chose the wrong partition strategy/key.\n\n> Would list partitioning be suitable? if I want PostgreSQL to know\n> which partition the row is it can directly load the relevant index\n> without having to check other partitions. Should I be including the\n> partition key in the where clause?\n\nIt sounds like you're thinking about this backwards.\n\nWhat are your typical queries ? That should determines the partition strategy\nand key, not the other way around. You should maybe think about whether there\nare views/functions/joins of the partitioned column.\n\nFor example, at telsasoft, our report queries *always* say \"tbl.start_time >=\nt1 AND tbl.start_time < t2\", so I partitioned our tables BY RANGE(start_time),\nso a typical report hits only a single table. And, start_time has an index on\nit, so a typical query over 1-2 days will only hit a fraction of that table.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 2 May 2020 09:00:32 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best partition type for billions of addresses" }, { "msg_contents": "Greetings,\n\n* Arya F ([email protected]) wrote:\n> I need to store about 600 million rows of property addresses across\n> multiple counties. 
I need to have partitioning setup on the table as\n> there will be updates and inserts performed to the table frequently\n> and I want the queries to have good performance.\n\nThat's not what partitioning is for, and 600m rows isn't all *that*\nmany.\n\n> >From what I understand hash partitioning would not be the right\n> approach in this case, since for each query PostgreSQL has to check\n> the indexes of all partitions?\n> \n> Would list partitioning be suitable? if I want PostgreSQL to know\n> which partition the row is it can directly load the relevant index\n> without having to check other partitions. Should I be including the\n> partition key in the where clause?\n> \n> I'd like to hear some recommendations on the best way to approach\n> this. I'm using PostgreSQL 12\n\nIn this case, it sounds like \"don't\" is probably the best option.\n\nPartitioning is good for data management, particularly when you have\ndata that \"ages out\" or should be removed/dropped at some point,\nprovided your queries use the partition key. Partitioning doesn't speed\nup routine inserts and updates that are using a proper index and only\nupdating a small set of rows at a time.\n\nThanks,\n\nStephen", "msg_date": "Sat, 2 May 2020 10:01:09 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best partition type for billions of addresses" }, { "msg_contents": "> * Arya F ([email protected]) wrote:\n> > I need to store about 600 million rows of property addresses across\n> > multiple counties. I need to have partitioning setup on the table as\n> > there will be updates and inserts performed to the table frequently\n> > and I want the queries to have good performance.\n>\n> That's not what partitioning is for, and 600m rows isn't all *that*\n> many.\n>\n\nBut I have noticed that my updates and inserts have slowed down\ndramatically when I started going over about 20 million rows and the\nreason was because every time it has to update the index. When I\nremoved the index, my insert performance stayed good no matter the\nsize of the table.\n\nSo I should be able to achieve good performance with just one\npartition? Maybe I just need to get hardware with more memory?\n\n\n", "msg_date": "Sat, 2 May 2020 10:33:47 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best partition type for billions of addresses" }, { "msg_contents": "Greetings,\n\n* Arya F ([email protected]) wrote:\n> > * Arya F ([email protected]) wrote:\n> > > I need to store about 600 million rows of property addresses across\n> > > multiple counties. I need to have partitioning setup on the table as\n> > > there will be updates and inserts performed to the table frequently\n> > > and I want the queries to have good performance.\n> >\n> > That's not what partitioning is for, and 600m rows isn't all *that*\n> > many.\n> \n> But I have noticed that my updates and inserts have slowed down\n> dramatically when I started going over about 20 million rows and the\n> reason was because every time it has to update the index. When I\n> removed the index, my insert performance stayed good no matter the\n> size of the table.\n\nSure it does.\n\n> So I should be able to achieve good performance with just one\n> partition? 
Maybe I just need to get hardware with more memory?\n\nInstead of jumping to partitioning, I'd suggest you post your actual\ntable structures, queries, and explain results here and ask for help.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nThanks,\n\nStephen", "msg_date": "Sat, 2 May 2020 10:39:24 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best partition type for billions of addresses" } ]
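For reference, a minimal sketch of what declarative list partitioning would look like in PostgreSQL 12 for the county-based layout discussed above. All table and column names here are hypothetical, and, as the replies point out, this only pays off when almost every query filters on the partition key; it does not by itself make index maintenance on inserts and updates cheaper.

    -- Hypothetical names; the real point is that county_id (the partition
    -- key) has to appear in the typical WHERE clause.
    CREATE TABLE property_address (
        id        bigint  NOT NULL,
        county_id integer NOT NULL,
        line1     text    NOT NULL,
        city      text,
        PRIMARY KEY (id, county_id)   -- a PK here must include the partition key
    ) PARTITION BY LIST (county_id);

    CREATE TABLE property_address_county_1 PARTITION OF property_address
        FOR VALUES IN (1);
    CREATE TABLE property_address_county_2 PARTITION OF property_address
        FOR VALUES IN (2);

    -- With a constant county_id the planner prunes to a single partition,
    -- so only that partition and its indexes are touched:
    SELECT id, line1
    FROM property_address
    WHERE county_id = 1 AND line1 = '123 Main St';

As the replies note, for around 600 million rows the more usual first step is to look at the actual queries, indexes and hardware; partitioning mainly helps with data management (for example dropping old data) and with queries that always filter on the key.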
[ { "msg_contents": "Hi,\n\nHoping someone can help with this performance issue that's been driving a\nfew of us crazy :-) Any guidance greatly appreciated.\n\nA description of what you are trying to achieve and what results you\nexpect.:\n - I'd like to get an understanding of why the following query (presented\nin full, but there are specific parts that are confusing me) starts off\ntaking ~second in duration but 'switches' to taking over 4 minutes.\n - we initially saw this behaviour for the exact same sql with a different\nindex that resulted in an index scan. To try and fix the issue we've\ncreated an additional index with additional included fields so we now have\nIndex Only Scans, but are still seeing the same problem.\n - execution plan is from auto_explain output when it took just over 4\nminutes to execute. The time is shared ~equally between these two\nindex-only scans.\n - There are no checkpoints occurring concurrently with this (based on\n\"checkpoint starting\" and \"checkpoint complete\" in logs)\n - bloat on the index is about 30%\n\n Segments of interest:\n 1. -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\ntable1 table1alias1 (cost=0.56..17.25 rows=1 width=60) (actual\ntime=110.539..123828.134 rows=67000 loops=1)\n Index Cond: (col20 = $2005)\n Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY\n((ARRAY[$1004, ..., $2003])::text[])))\n Rows Removed by Filter: 2662652\n Heap Fetches: 6940\n Buffers: shared hit=46619 read=42784 written=52\n\n 2. -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\ntable1 table1alias2 (cost=0.56..17.23 rows=1 width=36) (actual\ntime=142.855..122034.039 rows=67000 loops=1)\n Index Cond: (col20 = $1001)\n Filter: ((col8)::text = ANY ((ARRAY[$1, ..., $1000])::text[]))\n Rows Removed by Filter: 2662652\n Heap Fetches: 6891\n Buffers: shared hit=47062 read=42331 written=37\n\nIf I run the same queries now:\nIndex Only Scan using table1_typea_include_uniqueid_col16_idx on table1\ntable1alias1 (cost=0.56..2549.69 rows=69 width=36)\n(actual time=1.017..1221.375 rows=67000 loops=1)\nHeap Fetches: 24\nBuffers: shared hit=2849 read=2483\n\nbuffers do look different - but still, reading 42k doesn't seem like it\nwould cause a delay of 4m?\n\nActually, here's another example of segment 2 from logs.\n\n Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\ntable1alias2 (cost=0.56..17.23 rows=1 width=36) (actual\ntime=36.559..120649.742 rows=65000 loops=1)\n Index Cond: (col20 = $1001)\n Filter: ((col8)::text = ANY ((ARRAY[$1, $1000]::text[]))\n Rows Removed by Filter: 2664256\n Heap Fetches: 6306\n Buffers: shared hit=87712 read=1507\n\nOne note: I've replaced table/column names (sorry, a requirement).\n\nFull subquery execution plan (i've stripped out the view materialization\nfrom row 14 onwards but left the header in):\nhttps://explain.depesz.com/s/vsdH\n\nFull Sql:\nSELECT\n subquery.id\nFROM (\n SELECT\n table1alias1.id,\n table1alias1.uniqueid,\n table1alias1.col16 AS order_by\n FROM\n table1 AS table1alias1\n LEFT OUTER JOIN (\n SELECT\n inlinealias1.id,\n inlinealias1.uniqueid,\n inlinealias1.col4,\n inlinealias1.col5,\n inlinealias1.col6,\n inlinealias1.col7\n FROM (\n SELECT\n table2alias.id,\n table2alias.uniqueid,\n table2alias.col3,\n table2alias.col4,\n table2alias.col5,\n table2alias.col6,\n row_number() OVER (PARTITION BY table2alias.uniqueid ORDER BY\ntable2alias.col13 DESC, table2alias.col3 DESC, table2alias.id DESC) AS rn\n FROM\n table2 AS table2alias\n JOIN ( 
SELECT DISTINCT\n table1alias2.uniqueid\n FROM\n table1 AS table1alias2\n WHERE (table1alias2.col8 IN ($1, $2, $3, $4, $5, $6, $7,\n$8, $9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22,\n$23, $24, $25, $26, $27, $28, $29, $30, $31, $32, $33, $34, $35, $36, $37,\n$38, $39, $40, $41, $42, $43, $44, $45, $46, $47, $48, $49, $50, $51, $52,\n$53, $54, $55, $56, $57, $58, $59, $60, $61, $62, $63, $64, $65, $66, $67,\n$68, $69, $70, $71, $72, $73, $74, $75, $76, $77, $78, $79, $80, $81, $82,\n$83, $84, $85, $86, $87, $88, $89, $90, $91, $92, $93, $94, $95, $96, $97,\n$98, $99, $100, $101, $102, $103, $104, $105, $106, $107, $108, $109, $110,\n$111, $112, $113, $114, $115, $116, $117, $118, $119, $120, $121, $122,\n$123, $124, $125, $126, $127, $128, $129, $130, $131, $132, $133, $134,\n$135, $136, $137, $138, $139, $140, $141, $142, $143, $144, $145, $146,\n$147, $148, $149, $150, $151, $152, $153, $154, $155, $156, $157, $158,\n$159, $160, $161, $162, $163, $164, $165, $166, $167, $168, $169, $170,\n$171, $172, $173, $174, $175, $176, $177, $178, $179, $180, $181, $182,\n$183, $184, $185, $186, $187, $188, $189, $190, $191, $192, $193, $194,\n$195, $196, $197, $198, $199, $200, $201, $202, $203, $204, $205, $206,\n$207, $208, $209, $210, $211, $212, $213, $214, $215, $216, $217, $218,\n$219, $220, $221, $222, $223, $224, $225, $226, $227, $228, $229, $230,\n$231, $232, $233, $234, $235, $236, $237, $238, $239, $240, $241, $242,\n$243, $244, $245, $246, $247, $248, $249, $250, $251, $252, $253, $254,\n$255, $256, $257, $258, $259, $260, $261, $262, $263, $264, $265, $266,\n$267, $268, $269, $270, $271, $272, $273, $274, $275, $276, $277, $278,\n$279, $280, $281, $282, $283, $284, $285, $286, $287, $288, $289, $290,\n$291, $292, $293, $294, $295, $296, $297, $298, $299, $300, $301, $302,\n$303, $304, $305, $306, $307, $308, $309, $310, $311, $312, $313, $314,\n$315, $316, $317, $318, $319, $320, $321, $322, $323, $324, $325, $326,\n$327, $328, $329, $330, $331, $332, $333, $334, $335, $336, $337, $338,\n$339, $340, $341, $342, $343, $344, $345, $346, $347, $348, $349, $350,\n$351, $352, $353, $354, $355, $356, $357, $358, $359, $360, $361, $362,\n$363, $364, $365, $366, $367, $368, $369, $370, $371, $372, $373, $374,\n$375, $376, $377, $378, $379, $380, $381, $382, $383, $384, $385, $386,\n$387, $388, $389, $390, $391, $392, $393, $394, $395, $396, $397, $398,\n$399, $400, $401, $402, $403, $404, $405, $406, $407, $408, $409, $410,\n$411, $412, $413, $414, $415, $416, $417, $418, $419, $420, $421, $422,\n$423, $424, $425, $426, $427, $428, $429, $430, $431, $432, $433, $434,\n$435, $436, $437, $438, $439, $440, $441, $442, $443, $444, $445, $446,\n$447, $448, $449, $450, $451, $452, $453, $454, $455, $456, $457, $458,\n$459, $460, $461, $462, $463, $464, $465, $466, $467, $468, $469, $470,\n$471, $472, $473, $474, $475, $476, $477, $478, $479, $480, $481, $482,\n$483, $484, $485, $486, $487, $488, $489, $490, $491, $492, $493, $494,\n$495, $496, $497, $498, $499, $500, $501, $502, $503, $504, $505, $506,\n$507, $508, $509, $510, $511, $512, $513, $514, $515, $516, $517, $518,\n$519, $520, $521, $522, $523, $524, $525, $526, $527, $528, $529, $530,\n$531, $532, $533, $534, $535, $536, $537, $538, $539, $540, $541, $542,\n$543, $544, $545, $546, $547, $548, $549, $550, $551, $552, $553, $554,\n$555, $556, $557, $558, $559, $560, $561, $562, $563, $564, $565, $566,\n$567, $568, $569, $570, $571, $572, $573, $574, $575, $576, $577, $578,\n$579, $580, $581, $582, $583, $584, $585, 
$586, $587, $588, $589, $590,\n$591, $592, $593, $594, $595, $596, $597, $598, $599, $600, $601, $602,\n$603, $604, $605, $606, $607, $608, $609, $610, $611, $612, $613, $614,\n$615, $616, $617, $618, $619, $620, $621, $622, $623, $624, $625, $626,\n$627, $628, $629, $630, $631, $632, $633, $634, $635, $636, $637, $638,\n$639, $640, $641, $642, $643, $644, $645, $646, $647, $648, $649, $650,\n$651, $652, $653, $654, $655, $656, $657, $658, $659, $660, $661, $662,\n$663, $664, $665, $666, $667, $668, $669, $670, $671, $672, $673, $674,\n$675, $676, $677, $678, $679, $680, $681, $682, $683, $684, $685, $686,\n$687, $688, $689, $690, $691, $692, $693, $694, $695, $696, $697, $698,\n$699, $700, $701, $702, $703, $704, $705, $706, $707, $708, $709, $710,\n$711, $712, $713, $714, $715, $716, $717, $718, $719, $720, $721, $722,\n$723, $724, $725, $726, $727, $728, $729, $730, $731, $732, $733, $734,\n$735, $736, $737, $738, $739, $740, $741, $742, $743, $744, $745, $746,\n$747, $748, $749, $750, $751, $752, $753, $754, $755, $756, $757, $758,\n$759, $760, $761, $762, $763, $764, $765, $766, $767, $768, $769, $770,\n$771, $772, $773, $774, $775, $776, $777, $778, $779, $780, $781, $782,\n$783, $784, $785, $786, $787, $788, $789, $790, $791, $792, $793, $794,\n$795, $796, $797, $798, $799, $800, $801, $802, $803, $804, $805, $806,\n$807, $808, $809, $810, $811, $812, $813, $814, $815, $816, $817, $818,\n$819, $820, $821, $822, $823, $824, $825, $826, $827, $828, $829, $830,\n$831, $832, $833, $834, $835, $836, $837, $838, $839, $840, $841, $842,\n$843, $844, $845, $846, $847, $848, $849, $850, $851, $852, $853, $854,\n$855, $856, $857, $858, $859, $860, $861, $862, $863, $864, $865, $866,\n$867, $868, $869, $870, $871, $872, $873, $874, $875, $876, $877, $878,\n$879, $880, $881, $882, $883, $884, $885, $886, $887, $888, $889, $890,\n$891, $892, $893, $894, $895, $896, $897, $898, $899, $900, $901, $902,\n$903, $904, $905, $906, $907, $908, $909, $910, $911, $912, $913, $914,\n$915, $916, $917, $918, $919, $920, $921, $922, $923, $924, $925, $926,\n$927, $928, $929, $930, $931, $932, $933, $934, $935, $936, $937, $938,\n$939, $940, $941, $942, $943, $944, $945, $946, $947, $948, $949, $950,\n$951, $952, $953, $954, $955, $956, $957, $958, $959, $960, $961, $962,\n$963, $964, $965, $966, $967, $968, $969, $970, $971, $972, $973, $974,\n$975, $976, $977, $978, $979, $980, $981, $982, $983, $984, $985, $986,\n$987, $988, $989, $990, $991, $992, $993, $994, $995, $996, $997, $998,\n$999, $1000)\n AND (table1alias2.datatype IN (CAST('TypeA' AS\ndatatype_enum)))\n AND table1alias2.col20 IN ($1001))) AS\ncandidateUniqueId ON table2alias.uniqueid = candidateUniqueId.uniqueid) AS\ninlinealias1\n WHERE\n inlinealias1.rn = $1002) AS inlinealias2 ON table1alias1.uniqueid =\ninlinealias2.uniqueid\nWHERE (EXISTS (\n SELECT\n 1 AS one\n FROM\n view1\n WHERE (col8 = $1003\n AND table1alias1.col20 = col2))\n AND table1alias1.col8 IN ($1004, $1005, $1006, $1007, $1008, $1009,\n$1010, $1011, $1012, $1013, $1014, $1015, $1016, $1017, $1018, $1019,\n$1020, $1021, $1022, $1023, $1024, $1025, $1026, $1027, $1028, $1029,\n$1030, $1031, $1032, $1033, $1034, $1035, $1036, $1037, $1038, $1039,\n$1040, $1041, $1042, $1043, $1044, $1045, $1046, $1047, $1048, $1049,\n$1050, $1051, $1052, $1053, $1054, $1055, $1056, $1057, $1058, $1059,\n$1060, $1061, $1062, $1063, $1064, $1065, $1066, $1067, $1068, $1069,\n$1070, $1071, $1072, $1073, $1074, $1075, $1076, $1077, $1078, $1079,\n$1080, $1081, $1082, $1083, $1084, $1085, $1086, 
$1087, $1088, $1089,\n$1090, $1091, $1092, $1093, $1094, $1095, $1096, $1097, $1098, $1099,\n$1100, $1101, $1102, $1103, $1104, $1105, $1106, $1107, $1108, $1109,\n$1110, $1111, $1112, $1113, $1114, $1115, $1116, $1117, $1118, $1119,\n$1120, $1121, $1122, $1123, $1124, $1125, $1126, $1127, $1128, $1129,\n$1130, $1131, $1132, $1133, $1134, $1135, $1136, $1137, $1138, $1139,\n$1140, $1141, $1142, $1143, $1144, $1145, $1146, $1147, $1148, $1149,\n$1150, $1151, $1152, $1153, $1154, $1155, $1156, $1157, $1158, $1159,\n$1160, $1161, $1162, $1163, $1164, $1165, $1166, $1167, $1168, $1169,\n$1170, $1171, $1172, $1173, $1174, $1175, $1176, $1177, $1178, $1179,\n$1180, $1181, $1182, $1183, $1184, $1185, $1186, $1187, $1188, $1189,\n$1190, $1191, $1192, $1193, $1194, $1195, $1196, $1197, $1198, $1199,\n$1200, $1201, $1202, $1203, $1204, $1205, $1206, $1207, $1208, $1209,\n$1210, $1211, $1212, $1213, $1214, $1215, $1216, $1217, $1218, $1219,\n$1220, $1221, $1222, $1223, $1224, $1225, $1226, $1227, $1228, $1229,\n$1230, $1231, $1232, $1233, $1234, $1235, $1236, $1237, $1238, $1239,\n$1240, $1241, $1242, $1243, $1244, $1245, $1246, $1247, $1248, $1249,\n$1250, $1251, $1252, $1253, $1254, $1255, $1256, $1257, $1258, $1259,\n$1260, $1261, $1262, $1263, $1264, $1265, $1266, $1267, $1268, $1269,\n$1270, $1271, $1272, $1273, $1274, $1275, $1276, $1277, $1278, $1279,\n$1280, $1281, $1282, $1283, $1284, $1285, $1286, $1287, $1288, $1289,\n$1290, $1291, $1292, $1293, $1294, $1295, $1296, $1297, $1298, $1299,\n$1300, $1301, $1302, $1303, $1304, $1305, $1306, $1307, $1308, $1309,\n$1310, $1311, $1312, $1313, $1314, $1315, $1316, $1317, $1318, $1319,\n$1320, $1321, $1322, $1323, $1324, $1325, $1326, $1327, $1328, $1329,\n$1330, $1331, $1332, $1333, $1334, $1335, $1336, $1337, $1338, $1339,\n$1340, $1341, $1342, $1343, $1344, $1345, $1346, $1347, $1348, $1349,\n$1350, $1351, $1352, $1353, $1354, $1355, $1356, $1357, $1358, $1359,\n$1360, $1361, $1362, $1363, $1364, $1365, $1366, $1367, $1368, $1369,\n$1370, $1371, $1372, $1373, $1374, $1375, $1376, $1377, $1378, $1379,\n$1380, $1381, $1382, $1383, $1384, $1385, $1386, $1387, $1388, $1389,\n$1390, $1391, $1392, $1393, $1394, $1395, $1396, $1397, $1398, $1399,\n$1400, $1401, $1402, $1403, $1404, $1405, $1406, $1407, $1408, $1409,\n$1410, $1411, $1412, $1413, $1414, $1415, $1416, $1417, $1418, $1419,\n$1420, $1421, $1422, $1423, $1424, $1425, $1426, $1427, $1428, $1429,\n$1430, $1431, $1432, $1433, $1434, $1435, $1436, $1437, $1438, $1439,\n$1440, $1441, $1442, $1443, $1444, $1445, $1446, $1447, $1448, $1449,\n$1450, $1451, $1452, $1453, $1454, $1455, $1456, $1457, $1458, $1459,\n$1460, $1461, $1462, $1463, $1464, $1465, $1466, $1467, $1468, $1469,\n$1470, $1471, $1472, $1473, $1474, $1475, $1476, $1477, $1478, $1479,\n$1480, $1481, $1482, $1483, $1484, $1485, $1486, $1487, $1488, $1489,\n$1490, $1491, $1492, $1493, $1494, $1495, $1496, $1497, $1498, $1499,\n$1500, $1501, $1502, $1503, $1504, $1505, $1506, $1507, $1508, $1509,\n$1510, $1511, $1512, $1513, $1514, $1515, $1516, $1517, $1518, $1519,\n$1520, $1521, $1522, $1523, $1524, $1525, $1526, $1527, $1528, $1529,\n$1530, $1531, $1532, $1533, $1534, $1535, $1536, $1537, $1538, $1539,\n$1540, $1541, $1542, $1543, $1544, $1545, $1546, $1547, $1548, $1549,\n$1550, $1551, $1552, $1553, $1554, $1555, $1556, $1557, $1558, $1559,\n$1560, $1561, $1562, $1563, $1564, $1565, $1566, $1567, $1568, $1569,\n$1570, $1571, $1572, $1573, $1574, $1575, $1576, $1577, $1578, $1579,\n$1580, $1581, $1582, $1583, $1584, $1585, $1586, 
$1587, $1588, $1589,\n$1590, $1591, $1592, $1593, $1594, $1595, $1596, $1597, $1598, $1599,\n$1600, $1601, $1602, $1603, $1604, $1605, $1606, $1607, $1608, $1609,\n$1610, $1611, $1612, $1613, $1614, $1615, $1616, $1617, $1618, $1619,\n$1620, $1621, $1622, $1623, $1624, $1625, $1626, $1627, $1628, $1629,\n$1630, $1631, $1632, $1633, $1634, $1635, $1636, $1637, $1638, $1639,\n$1640, $1641, $1642, $1643, $1644, $1645, $1646, $1647, $1648, $1649,\n$1650, $1651, $1652, $1653, $1654, $1655, $1656, $1657, $1658, $1659,\n$1660, $1661, $1662, $1663, $1664, $1665, $1666, $1667, $1668, $1669,\n$1670, $1671, $1672, $1673, $1674, $1675, $1676, $1677, $1678, $1679,\n$1680, $1681, $1682, $1683, $1684, $1685, $1686, $1687, $1688, $1689,\n$1690, $1691, $1692, $1693, $1694, $1695, $1696, $1697, $1698, $1699,\n$1700, $1701, $1702, $1703, $1704, $1705, $1706, $1707, $1708, $1709,\n$1710, $1711, $1712, $1713, $1714, $1715, $1716, $1717, $1718, $1719,\n$1720, $1721, $1722, $1723, $1724, $1725, $1726, $1727, $1728, $1729,\n$1730, $1731, $1732, $1733, $1734, $1735, $1736, $1737, $1738, $1739,\n$1740, $1741, $1742, $1743, $1744, $1745, $1746, $1747, $1748, $1749,\n$1750, $1751, $1752, $1753, $1754, $1755, $1756, $1757, $1758, $1759,\n$1760, $1761, $1762, $1763, $1764, $1765, $1766, $1767, $1768, $1769,\n$1770, $1771, $1772, $1773, $1774, $1775, $1776, $1777, $1778, $1779,\n$1780, $1781, $1782, $1783, $1784, $1785, $1786, $1787, $1788, $1789,\n$1790, $1791, $1792, $1793, $1794, $1795, $1796, $1797, $1798, $1799,\n$1800, $1801, $1802, $1803, $1804, $1805, $1806, $1807, $1808, $1809,\n$1810, $1811, $1812, $1813, $1814, $1815, $1816, $1817, $1818, $1819,\n$1820, $1821, $1822, $1823, $1824, $1825, $1826, $1827, $1828, $1829,\n$1830, $1831, $1832, $1833, $1834, $1835, $1836, $1837, $1838, $1839,\n$1840, $1841, $1842, $1843, $1844, $1845, $1846, $1847, $1848, $1849,\n$1850, $1851, $1852, $1853, $1854, $1855, $1856, $1857, $1858, $1859,\n$1860, $1861, $1862, $1863, $1864, $1865, $1866, $1867, $1868, $1869,\n$1870, $1871, $1872, $1873, $1874, $1875, $1876, $1877, $1878, $1879,\n$1880, $1881, $1882, $1883, $1884, $1885, $1886, $1887, $1888, $1889,\n$1890, $1891, $1892, $1893, $1894, $1895, $1896, $1897, $1898, $1899,\n$1900, $1901, $1902, $1903, $1904, $1905, $1906, $1907, $1908, $1909,\n$1910, $1911, $1912, $1913, $1914, $1915, $1916, $1917, $1918, $1919,\n$1920, $1921, $1922, $1923, $1924, $1925, $1926, $1927, $1928, $1929,\n$1930, $1931, $1932, $1933, $1934, $1935, $1936, $1937, $1938, $1939,\n$1940, $1941, $1942, $1943, $1944, $1945, $1946, $1947, $1948, $1949,\n$1950, $1951, $1952, $1953, $1954, $1955, $1956, $1957, $1958, $1959,\n$1960, $1961, $1962, $1963, $1964, $1965, $1966, $1967, $1968, $1969,\n$1970, $1971, $1972, $1973, $1974, $1975, $1976, $1977, $1978, $1979,\n$1980, $1981, $1982, $1983, $1984, $1985, $1986, $1987, $1988, $1989,\n$1990, $1991, $1992, $1993, $1994, $1995, $1996, $1997, $1998, $1999,\n$2000, $2001, $2002, $2003)\n AND (table1alias1.col3 = $2004\n OR table1alias1.col3 IS NULL)\n AND (table1alias1.datatype IN (CAST('TypeA' AS datatype_enum)))\n AND table1alias1.col20 IN ($2005))\nORDER BY\n table1alias1.col16 ASC) AS subquery\nORDER BY\n subquery.order_by ASC\n\n\nTables:\n\\d table1\n Table \"table1\"\n Column | Type |\nCollation | Nullable | Default\n-----------------------------------+-----------------------------+-----------+----------+---------------------------------\n id | bigint |\n | not null |\n col2 | bigint |\n | |\n col3 | boolean |\n | |\n col4 | timestamp without time zone |\n | 
|\n col5 | timestamp without time zone |\n | |\n col6 | timestamp without time zone |\n | |\n col7 | timestamp without time zone |\n | |\n col8 | character varying(1000) |\n | |\n col9 | character varying(1000) |\n | |\n col10 | character varying(1000) |\n | |\n col11 | character varying(1000) |\n | |\n col12 | character varying(1000) |\n | |\n col13 | character varying(1000) |\n | |\n col14 | character varying(1000) |\n | |\n col15 | character varying(1000) |\n | |\n col16 | bigint |\n | |\n col17 | bigint |\n | |\n col18 | bigint |\n | |\n col19 | bigint |\n | |\n col20 | bigint |\n | |\n col21 | character varying(255) |\n | |\n uniqueid | character varying(255) |\n | |\n col23 | timestamp without time zone |\n | |\n col24 | boolean |\n | |\n col25 | boolean |\n | |\n col26 | character varying(255) |\n | |\n col27 | timestamp without time zone |\n | |\n col28 | timestamp without time zone |\n | |\n col29 | bigint |\n | |\n col30 | integer |\n | not null | 0\n col31 | character varying(255) |\n | |\n col32 | boolean |\n | |\n col33 | boolean |\n | |\n col34 | boolean |\n | |\n col35 | boolean |\n | |\n col36 | character varying(1000) |\n | |\n col37 | character varying(1000) |\n | |\n col38 | boolean |\n | |\n col39 | character varying(1000) |\n | |\n col40 | character varying(1000) |\n | |\n col41 | character varying(1000) |\n | |\n col42 | character varying(255) |\n | |\n col43 | character varying(1000) |\n | |\n col44 | other_enum |\n | not null |\n datatype | datatype_enum |\n | not null |\n col46 | bigint |\n | |\n col47 | text |\n | |\n col48 | bytea |\n | not null |\n col49 | bytea |\n | |\n col50 | bigint |\n | |\n col51 | bigint |\n | |\n col52 | bigint |\n | |\n col53 | bigint |\n | |\n col54 | bigint |\n | |\nIndexes:\n \"table1_pkey\" PRIMARY KEY, btree (id)\n \"table1_unique_col47_for_col46\" UNIQUE CONSTRAINT, btree (col47, col46)\n \"table1_col20_datatype_idx\" btree (col20, datatype)\n \"table1_col2_datatype_col20_idx\" btree (col2, datatype, col20)\n \"table1_col42_idx\" btree (col42)\n \"table1_col28_idx\" btree (col28)\n \"table1_col52_idx\" btree (col52)\n \"table1_col50_idx\" btree (col50)\n \"table1_col12_idx\" btree (col12)\n \"table1_col37_idx\" btree (col37) WHERE col37 IS NOT NULL\n \"table1_col40_notnull_idx\" btree (col40) WHERE col40 IS NOT NULL\n \"table1_typea_idx\" btree (col20, (col8::text)) WHERE datatype =\n'TypeA'::table1_type\n \"table1_typea_include_uniqueid_col16_idx\" btree (col20, col8, deleted)\nINCLUDE (uniqueid, col16, id) WHERE datatype = 'TypeA'::table1_type\n \"table1_typea_idx\" btree (col20, (col8::text)) WHERE datatype =\n'TypeA'::table1_type INVALID\n \"table1_uniqueid_idx\" btree (uniqueid)\nCheck constraints:\n \"table1_col32_not_null\" CHECK (col32 IS NOT NULL)\n \"table1_col34_not_null\" CHECK (col34 IS NOT NULL)\n \"table1_col33_not_null\" CHECK (col33 IS NOT NULL)\n \"table1_col35_not_null\" CHECK (col35 IS NOT NULL) NOT VALID\n \"col49_or_col46\" CHECK (col49 IS NOT NULL OR col46 IS NOT NULL) NOT\nVALID\nForeign-key constraints:\n \"fk_table1_col20\" FOREIGN KEY (col20) REFERENCES constainttable4(id)\n \"fk_table1_col29\" FOREIGN KEY (col29) REFERENCES constainttable5(id)\n \"table1_col46_fkey\" FOREIGN KEY (col46) REFERENCES constainttable6(id)\nReferenced by:\n TABLE \"referencetable2\" CONSTRAINT \"fk_table1_id\" FOREIGN KEY\n(table1_id) REFERENCES table1(id)\n TABLE \"referencetable3\" CONSTRAINT \"referencetable3_table1_id_fkey\"\nFOREIGN KEY (table1_id) REFERENCES table1(id)\n TABLE \"referencetable4\" CONSTRAINT 
\"referencetable4_table1_fkey\"\nFOREIGN KEY (table1_id) REFERENCES table1(id)\nPublications:\n \"puba\"\n\n\n\\d view1\n View \"view1\"\n Column | Type |\nCollation | Nullable | Default\n-----------------------------------------+------------------------+-----------+----------+---------\n col1 | text |\n | |\n col2 | bigint |\n | |\n col3 | bytea |\n | |\n col4 | integer |\n | |\n col5 | character varying(255) |\n | |\n col6 | bytea |\n | |\n col7 | character varying(255) |\n | |\n col8 | bigint |\n | |\n col9 | bigint |\n | |\n col10 | bytea |\n | |\n col11 | integer |\n | |\n col12 | character varying |\n | |\n col13 | bytea |\n | |\n col14 | character varying |\n | |\n\n\n\n \\d table2\n Table \"table2\"\n Column | Type | Collation | Nullable\n| Default\n-----------------------+-----------------------------+-----------+----------+-------------------------------------------\n id | bigint | | not null\n| nextval('table2_id_seq'::regclass)\n uniqueid | character varying(255) | | not null\n|\n col3 | timestamp without time zone | | not null\n|\n col4 | boolean | | not null\n|\n col5 | boolean | | not null\n|\n col6 | boolean | | not null\n|\n col7 | boolean | | not null\n|\n col8 | bigint | |\n |\n col9 | bigint | |\n |\n col10 | bigint | |\n |\n col11 | character varying(255) | |\n |\n col12 | character varying(255) | | not null\n|\n col13 | boolean | | not null\n| true\nIndexes:\n \"id\" PRIMARY KEY, btree (id)\n \"idx_table2_uniqueid\" btree (uniqueid)\nForeign-key constraints:\n \"fk_table2_source_constainttable1_id\" FOREIGN KEY\n(source_constainttable1_id) REFERENCES constainttable1(id)\n \"fk_table2_source_constainttable2_id\" FOREIGN KEY\n(source_constainttable2_id) REFERENCES constainttable2(id)\n \"fk_table2_source_constainttable3_id\" FOREIGN KEY\n(source_constainttable3_id) REFERENCES constainttable3(id)\nReferenced by:\n TABLE \"referencetable1\" CONSTRAINT \"fk_referencetable1_table2_id\"\nFOREIGN KEY (table2_id) REFERENCES table2(id)\n TABLE \"referencetable1\" CONSTRAINT \"fk_referencetable1_table2_id_old\"\nFOREIGN KEY (table2_id_old) REFERENCES table2(id)\n\n\n\\d app.table1_typea_include_uniqueid_col16_idx\nIndex \"table1_typea_include_uniqueid_col16_idx\"\n Column | Type | Key? 
| Definition\n------------+-------------------------+------+------------\n col20 | bigint | yes | account_id\n col8 | character varying(1000) | yes | string01\n col3 | boolean | yes | deleted\n col26 | character varying(255) | no | uniqueid\n col16 | bigint | no | long01\n id | bigint | no | id\nbtree, for table \"table1\", predicate (datatype = 'TypeA'::table1_type)\n\n\n\nPostgreSQL version number you are running:\nSELECT version();\n version\n\n-----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 11.6 (Ubuntu 11.6-1.pgdg18.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit\n(1 row)\n\n\nHow you installed PostgreSQL:\nOfficial apt repo\n\nChanges made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.\nSELECT name, current_setting(name), SOURCE\n FROM pg_settings\n WHERE SOURCE NOT IN ('default', 'override');\n name | current_setting |\n source\n---------------------------------+---------------------------------------+----------------------\n application_name | psql |\nclient\n archive_command | true |\nconfiguration file\n archive_mode | on |\nconfiguration file\n auto_explain.log_analyze | on |\nconfiguration file\n auto_explain.log_buffers | on |\nconfiguration file\n auto_explain.log_min_duration | 10s |\nconfiguration file\n auto_explain.log_timing | on |\nconfiguration file\n autovacuum | on |\nconfiguration file\n checkpoint_timeout | 30min |\nconfiguration file\n client_encoding | UTF8 |\nclient\n DateStyle | ISO, MDY |\nconfiguration file\n default_text_search_config | pg_catalog.english |\nconfiguration file\n dynamic_shared_memory_type | posix |\nconfiguration file\n effective_cache_size | 8GB |\nconfiguration file\n effective_io_concurrency | 100 |\nconfiguration file\n external_pid_file | /var/run/postgresql/11-main.pid |\ncommand line\n hot_standby | on |\nconfiguration file\n lc_messages | en_US.utf8 |\nconfiguration file\n lc_monetary | en_US.utf8 |\nconfiguration file\n lc_numeric | en_US.utf8 |\nconfiguration file\n lc_time | en_US.utf8 |\nconfiguration file\n listen_addresses | * |\nconfiguration file\n log_autovacuum_min_duration | 0 |\nconfiguration file\n log_checkpoints | on |\nconfiguration file\n log_connections | on |\nconfiguration file\n log_destination | syslog |\nconfiguration file\n log_directory | pg_log |\nconfiguration file\n log_disconnections | on |\nconfiguration file\n log_filename | postgresql-%a.log |\nconfiguration file\n\nOperating system and version:\n5.0.0-1029-gcp #30~18.04.1-Ubuntu SMP Mon Jan 13 05:40:56 UTC 2020 x86_64\nx86_64 x86_64 GNU/Linux\n\nWhat program you're using to connect to PostgreSQL:\nJava JDBC 4.2 (JRE 8+) driver for PostgreSQL database\n\nIs there anything relevant or unusual in the PostgreSQL server logs?:\nnothing in the logs, but I've done a bunch of pg_locks/stat_activity dumps\nwhilst the query is running. 
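A sketch of that kind of check - not necessarily the exact query used for the dumps, so treat the filter as a placeholder:\nSELECT a.pid, a.state, a.wait_event_type, a.wait_event, l.locktype, l.mode, l.granted\nFROM pg_stat_activity a\nLEFT JOIN pg_locks l ON l.pid = a.pid\nWHERE a.state = 'active';\n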
pg_locks/activity show:\n - wait_event_type/wait_event NULL each time for this query\n - granted = true for all modes for this query\n - an autovacuum was running, but for an unrelated table.\n\nExtra perf details:\nCPU: 16 vCPUs on GCP (Intel Xeon E5 v4)\nRAM: 60GB\nDisk: 3TB persistent SSD on GCP\n\nsudo time dd if=/dev/sdc of=/dev/null bs=1M count=1k\nskip=$((128*RANDOM/32)):\n1024+0 records in\n1024+0 records out\n1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.855067 s, 1.3 GB/s\n0.00user 0.45system 0:00.85elapsed 53%CPU (0avgtext+0avgdata\n3236maxresident)k\n2097816inputs+0outputs (1major+348minor)pagefaults 0swaps\n\nEstimated performance (GCP quoted)\nOperation type Read Write\nSustained random IOPS limit 25,000.00 25,000.00\nSustained throughput limit (MB/s) 1,200.00 800.00\n", "msg_date": "Sun, 3 May 2020 09:58:27 +0100", "msg_from": "James Thompson <[email protected]>", "msg_from_op": true, "msg_subject": "Please help! 
Query jumps from 1s -> 4m" }, { "msg_contents": "On Sun, May 03, 2020 at 09:58:27AM +0100, James Thompson wrote:\n> Hi,\n> \n> Hoping someone can help with this performance issue that's been driving a\n> few of us crazy :-) Any guidance greatly appreciated.\n> \n> A description of what you are trying to achieve and what results you\n> expect.:\n> - I'd like to get an understanding of why the following query (presented\n> in full, but there are specific parts that are confusing me) starts off\n> taking ~second in duration but 'switches' to taking over 4 minutes.\n\nDoes it \"switch\" abruptly or do you get progressively slower queries ?\nIf it's abrupt following the 5th execution, I guess you're hitting this:\n\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B50FB8D5E@ntex2010i.host.magwien.gv.at\n\n> - we initially saw this behaviour for the exact same sql with a different\n> index that resulted in an index scan. To try and fix the issue we've\n> created an additional index with additional included fields so we now have\n> Index Only Scans, but are still seeing the same problem.\n\n> Segments of interest:\n> 1. -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\n> table1 table1alias1 (cost=0.56..17.25 rows=1 width=60) (actual\n> time=110.539..123828.134 rows=67000 loops=1)\n> Index Cond: (col20 = $2005)\n> Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY\n> ((ARRAY[$1004, ..., $2003])::text[])))\n> Rows Removed by Filter: 2662652\n> Heap Fetches: 6940\n> Buffers: shared hit=46619 read=42784 written=52\n\n> If I run the same queries now:\n> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias1 (cost=0.56..2549.69 rows=69 width=36)\n> (actual time=1.017..1221.375 rows=67000 loops=1)\n> Heap Fetches: 24\n> Buffers: shared hit=2849 read=2483\n\nIt looks to me like you're getting good performance following a vacuum, when\nHeap Fetches is low. So you'd want to run vacuum more often, like:\n| ALTER TABLE table1 SET (autovacuum_vacuum_scale_factor=0.005).\n\nBut maybe I've missed something - you showed the bad query plan, but not the\ngood one, and I wonder if they may be subtly different, and that's maybe masked\nby the replaced identifiers.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 3 May 2020 10:38:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "On Mon, May 04, 2020 at 08:07:07PM +0100, Jamie Thompson wrote:\n> Additionally, the execution plans for the 10th + following queries look\n> fine, they have the same structure as if I run the query manually. It's not\n> that the query plan switches, it seems as though the same query plan is\n> just > 200X slower than usual.\n\nAre you able to reproduce the problem manually ?\n\nWith/without PREPARE ?\nhttps://www.postgresql.org/docs/current/sql-prepare.html\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 4 May 2020 14:12:01 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "The change is abrupt, on the 10th execution (but I hadn't spotted it was\nalways after the same number of executions until your suggestion - thanks\nfor pointing me in that direction).\n\nI don't see any custom configuration on our end that changes the threshold\nfor this from 5->10. 
Debugging the query call I also see that PgConnection\nhas the prepareThreshold set to 5.\n\nAdditionally, the execution plans for the 10th + following queries look\nfine, they have the same structure as if I run the query manually. It's not\nthat the query plan switches, it seems as though the same query plan is\njust > 200X slower than usual.\n\nAs for the heap fetches -> as far as I can tell, on both occasions the\nfetches are relatively low and shouldn't account for minutes of execution\n(even if one is lower than the other). Looking through one days logs I do\nfind cases with lower heap fetches too, for example as below which has 1977\nfetches instead of the previous 6940 but took approx the same time:\n-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\ntable1alias1 (cost=0.56..17.25 rows=1 width=60) (actual\ntime=56.858..120893.874 rows=67000 loops=1)\n Index Cond: (col20 = $2005)\n Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY\n((ARRAY[$1004, ..., $2003])::text[])))\n Rows Removed by Filter: 2662793\n Heap Fetches: 1977\n Buffers: shared hit=84574 read=3522\n\nWould you agree the statement threshold / heap fetches seems unlikely to be\ncausing this? Any other thoughts?\n\nThanks!\n\nOn Sun, 3 May 2020 at 16:38, Justin Pryzby <[email protected]> wrote:\n\n> On Sun, May 03, 2020 at 09:58:27AM +0100, James Thompson wrote:\n> > Hi,\n> >\n> > Hoping someone can help with this performance issue that's been driving a\n> > few of us crazy :-) Any guidance greatly appreciated.\n> >\n> > A description of what you are trying to achieve and what results you\n> > expect.:\n> > - I'd like to get an understanding of why the following query (presented\n> > in full, but there are specific parts that are confusing me) starts off\n> > taking ~second in duration but 'switches' to taking over 4 minutes.\n>\n> Does it \"switch\" abruptly or do you get progressively slower queries ?\n> If it's abrupt following the 5th execution, I guess you're hitting this:\n>\n>\n> https://www.postgresql.org/message-id/[email protected]\n>\n> https://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B50FB8D5E@ntex2010i.host.magwien.gv.at\n>\n> > - we initially saw this behaviour for the exact same sql with a\n> different\n> > index that resulted in an index scan. To try and fix the issue we've\n> > created an additional index with additional included fields so we now\n> have\n> > Index Only Scans, but are still seeing the same problem.\n>\n> > Segments of interest:\n> > 1. -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\n> > table1 table1alias1 (cost=0.56..17.25 rows=1 width=60) (actual\n> > time=110.539..123828.134 rows=67000 loops=1)\n> > Index Cond: (col20 = $2005)\n> > Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text =\n> ANY\n> > ((ARRAY[$1004, ..., $2003])::text[])))\n> > Rows Removed by Filter: 2662652\n> > Heap Fetches: 6940\n> > Buffers: shared hit=46619 read=42784 written=52\n>\n> > If I run the same queries now:\n> > Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> > table1alias1 (cost=0.56..2549.69 rows=69 width=36)\n> > (actual time=1.017..1221.375 rows=67000 loops=1)\n> > Heap Fetches: 24\n> > Buffers: shared hit=2849 read=2483\n>\n> It looks to me like you're getting good performance following a vacuum,\n> when\n> Heap Fetches is low. 
So you'd want to run vacuum more often, like:\n> | ALTER TABLE table1 SET (autovacuum_vacuum_scale_factor=0.005).\n>\n> But maybe I've missed something - you showed the bad query plan, but not\n> the\n> good one, and I wonder if they may be subtly different, and that's maybe\n> masked\n> by the replaced identifiers.\n>\n> --\n> Justin\n>\n\nThe change is abrupt, on the 10th execution (but I hadn't spotted it was always after the same number of executions until your suggestion - thanks for pointing me in that direction).I don't see any custom configuration on our end that changes the threshold for this from 5->10. Debugging the query call I also see that PgConnection has the prepareThreshold set to 5.Additionally, the execution plans for the 10th + following queries look fine, they have the same structure as if I run the query manually. It's not that the query plan switches, it seems as though the same query plan is just > 200X slower than usual.As for the heap fetches -> as far as I can tell, on both occasions the fetches are relatively low and shouldn't account for minutes of execution (even if one is lower than the other). Looking through one days logs I do find cases with lower heap fetches too, for example as below which has 1977 fetches instead of the previous 6940 but took approx the same time:->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias1  (cost=0.56..17.25 rows=1 width=60) (actual time=56.858..120893.874 rows=67000 loops=1)        Index Cond: (col20 = $2005)        Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY ((ARRAY[$1004, ..., $2003])::text[])))        Rows Removed by Filter: 2662793        Heap Fetches: 1977        Buffers: shared hit=84574 read=3522Would you agree the statement threshold / heap fetches seems unlikely to be causing this? Any other thoughts?Thanks!On Sun, 3 May 2020 at 16:38, Justin Pryzby <[email protected]> wrote:On Sun, May 03, 2020 at 09:58:27AM +0100, James Thompson wrote:\n> Hi,\n> \n> Hoping someone can help with this performance issue that's been driving a\n> few of us crazy :-) Any guidance greatly appreciated.\n> \n> A description of what you are trying to achieve and what results you\n> expect.:\n>  - I'd like to get an understanding of why the following query (presented\n> in full, but there are specific parts that are confusing me) starts off\n> taking ~second in duration but 'switches' to taking over 4 minutes.\n\nDoes it \"switch\" abruptly or do you get progressively slower queries ?\nIf it's abrupt following the 5th execution, I guess you're hitting this:\n\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/A737B7A37273E048B164557ADEF4A58B50FB8D5E@ntex2010i.host.magwien.gv.at\n\n>  - we initially saw this behaviour for the exact same sql with a different\n> index that resulted in an index scan. To try and fix the issue we've\n> created an additional index with additional included fields so we now have\n> Index Only Scans, but are still seeing the same problem.\n\n>  Segments of interest:\n>  1. 
->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on\n> table1 table1alias1  (cost=0.56..17.25 rows=1 width=60) (actual\n> time=110.539..123828.134 rows=67000 loops=1)\n>         Index Cond: (col20 = $2005)\n>         Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY\n> ((ARRAY[$1004, ..., $2003])::text[])))\n>         Rows Removed by Filter: 2662652\n>         Heap Fetches: 6940\n>         Buffers: shared hit=46619 read=42784 written=52\n\n> If I run the same queries now:\n> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias1  (cost=0.56..2549.69 rows=69 width=36)\n> (actual time=1.017..1221.375 rows=67000 loops=1)\n> Heap Fetches: 24\n> Buffers: shared hit=2849 read=2483\n\nIt looks to me like you're getting good performance following a vacuum, when\nHeap Fetches is low.  So you'd want to run vacuum more often, like:\n| ALTER TABLE table1 SET (autovacuum_vacuum_scale_factor=0.005).\n\nBut maybe I've missed something - you showed the bad query plan, but not the\ngood one, and I wonder if they may be subtly different, and that's maybe masked\nby the replaced identifiers.\n\n-- \nJustin", "msg_date": "Mon, 4 May 2020 20:12:06 +0100", "msg_from": "James Thompson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "On Mon, 4 May 2020 at 02:35, James Thompson <[email protected]> wrote:\n> buffers do look different - but still, reading 42k doesn't seem like it would cause a delay of 4m?\n\nYou could do: SET track_io_timing TO on;\n\nthen: EXPLAIN (ANALYZE, BUFFERS) your query and see if the time is\nspent doing IO.\n\nDavid\n\n\n", "msg_date": "Tue, 5 May 2020 08:12:51 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "On Mon, 2020-05-04 at 20:12 +0100, James Thompson wrote:\n> The change is abrupt, on the 10th execution (but I hadn't spotted it was always after the\n> same number of executions until your suggestion - thanks for pointing me in that direction).\n> \n> I don't see any custom configuration on our end that changes the threshold for this from 5->10.\n> Debugging the query call I also see that PgConnection has the prepareThreshold set to 5.\n> \n> Additionally, the execution plans for the 10th + following queries look fine, they have the\n> same structure as if I run the query manually. It's not that the query plan switches,\n> it seems as though the same query plan is just > 200X slower than usual.\n> \n> As for the heap fetches -> as far as I can tell, on both occasions the fetches are relatively\n> low and shouldn't account for minutes of execution (even if one is lower than the other).\n> Looking through one days logs I do find cases with lower heap fetches too, for example as\n> below which has 1977 fetches instead of the previous 6940 but took approx the same time:\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias1 (cost=0.56..17.25 rows=1 width=60) (actual time=56.858..120893.874 rows=67000 loops=1)\n> Index Cond: (col20 = $2005)\n> Filter: (((col3 = $2004) OR (col3 IS NULL)) AND ((col8)::text = ANY ((ARRAY[$1004, ..., $2003])::text[])))\n> Rows Removed by Filter: 2662793\n> Heap Fetches: 1977\n> Buffers: shared hit=84574 read=3522\n> \n> Would you agree the statement threshold / heap fetches seems unlikely to be causing this? 
Any other thoughts?\n\nIt does sound suspiciously like custom plans vs. generic plan.\n\nIf you are using JDBC, then the cut-off of 10 would make sense:\nthe JDBC driver uses (server) prepared statements only after the\nfifth execution, and the prepared statement will use a generic plan\nonly after the fifth execution.\n\nIt would be good to see the execution plan from the third, seventh\nand thirteenth execution. You could use \"auto_explain\" for that.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 05 May 2020 10:08:20 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "On Mon, May 04, 2020 at 02:12:01PM -0500, Justin Pryzby wrote:\n> On Mon, May 04, 2020 at 08:07:07PM +0100, Jamie Thompson wrote:\n> > Additionally, the execution plans for the 10th + following queries look\n> > fine, they have the same structure as if I run the query manually. It's not\n> > that the query plan switches, it seems as though the same query plan is\n> > just > 200X slower than usual.\n> \n> Are you able to reproduce the problem manually ?\n> \n> With/without PREPARE ?\n> https://www.postgresql.org/docs/current/sql-prepare.html\n\nAlso, you should be able to check if that's the problem by doing either:\nplan_cache_mode = force_generic_plan;\nOr (I would think) DISCARD PLANS;\n\nhttps://www.postgresql.org/docs/12/runtime-config-query.html\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 14:28:36 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "I've managed to replicate this now with prepared statements. Thanks for all\nthe guidance so far.\n\nThe slowness occurs when the prepared statement changes to a generic plan.\n\nInitial plan:\n-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\ntable1alias2 (cost=0.56..2549.70 rows=70 width=36) (actual\ntime=1.901..45.256 rows=65000 loops=1)\n Output: table1alias2.uniqueid\n Index Cond: ((table1alias2.col20 = '12345'::bigint) AND (table1alias2.\ncol8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,...\n98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))\n Heap Fetches: 10\n Buffers: shared hit=5048\n\nafter 5 executions of the statement:\n-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\ntable1alias2 (cost=0.56..17.23 rows=1 width=36) (actual\ntime=125.344..126877.822 rows=65000 loops=1)\n Output: table1alias2.uniqueid\n Index Cond: (table1alias2.col20 = $1001)\n Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ...,\n$1000])::text[]))\n Rows Removed by Filter: 2670023\n Heap Fetches: 428\n Buffers: shared hit=45933 read=42060 dirtied=4\n\nThe second plan looks worse to me as it's applying a filter rather than\nusing an index condition? I don't understand why it's not part of the\ncondition and also why this is so much slower though.\nIf I force a retrieval of all index rows for col20 = '12345' using an\nad-hoc query (below, which in my mind is what the 'bad' plan is doing),\nthat only takes 2s (2.7 mil rows). 
Where's the difference?\n\nEXPLAIN (ANALYZE, BUFFERS, TIMING) SELECT COUNT(DISTINCT id) FROM table1\nWHERE datatype='TypeA' AND col20 = 12345;\n-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n(cost=0.56..2762.95 rows=55337 width=8) (actual time=3.976..1655.645\nrows=2735023 loops=1)\n Index Cond: (col20 = 12345)\n Heap Fetches: 417\n Buffers: shared hit=43843 read=44147 dirtied=8\n\n>You could do: SET track_io_timing TO on;\nI've not tried this yet, and haven't used it before - sounds like there's\nsome risks associated with running it on a production server / clocks going\nbackwards?\n\n>Also, you should be able to check if that's the problem by doing either:\n>plan_cache_mode = force_generic_plan;\n>Or (I would think) DISCARD PLANS;\nI think plan_cache_mode is just for pg12+ unfortunately? We're on 11\ncurrently.\nJust tested DISCARD PLANS locally, it didn't switch back from the generic\nplan. Was that your expectation?\n\nI've managed to replicate this now with prepared statements. Thanks for all the guidance so far.The slowness occurs when the prepared statement changes to a generic plan.Initial plan:->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias2  (cost=0.56..2549.70 rows=70 width=36) (actual time=1.901..45.256 rows=65000 loops=1)    Output: table1alias2.uniqueid    Index Cond: ((table1alias2.col20 = '12345'::bigint) AND (table1alias2.col8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,... 98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))    Heap Fetches: 10    Buffers: shared hit=5048after 5 executions of the statement:->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias2  (cost=0.56..17.23 rows=1 width=36) (actual time=125.344..126877.822 rows=65000 loops=1)    Output: table1alias2.uniqueid    Index Cond: (table1alias2.col20 = $1001)    Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ..., $1000])::text[]))    Rows Removed by Filter: 2670023    Heap Fetches: 428    Buffers: shared hit=45933 read=42060 dirtied=4The second plan looks worse to me as it's applying a filter rather than using an index condition? I don't understand why it's not part of the condition and also why this is so much slower though. If I force a retrieval of all index rows for col20 = '12345' using an ad-hoc query (below, which in my mind is what the 'bad' plan is doing), that only takes 2s (2.7 mil rows). Where's the difference?EXPLAIN (ANALYZE, BUFFERS, TIMING) SELECT COUNT(DISTINCT id) FROM table1 WHERE datatype='TypeA' AND col20 = 12345;-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1  (cost=0.56..2762.95 rows=55337 width=8) (actual time=3.976..1655.645 rows=2735023 loops=1)         Index Cond: (col20 = 12345)         Heap Fetches: 417         Buffers: shared hit=43843 read=44147 dirtied=8>You could do: SET track_io_timing TO on;I've not tried this yet, and haven't used it before - sounds like there's some risks associated with running it on a production server / clocks going backwards?>Also, you should be able to check if that's the problem by doing either:\n>plan_cache_mode = force_generic_plan;\n>Or (I would think) DISCARD PLANS;I think plan_cache_mode is just for pg12+ unfortunately? We're on 11 currently.Just tested DISCARD PLANS locally, it didn't switch back from the generic plan. Was that your expectation?", "msg_date": "Tue, 5 May 2020 22:10:18 +0100", "msg_from": "James Thompson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please help! 
Query jumps from 1s -> 4m" }, { "msg_contents": "On Tue, May 05, 2020 at 10:10:18PM +0100, James Thompson wrote:\n> I've managed to replicate this now with prepared statements. Thanks for all\n> the guidance so far.\n> \n> The slowness occurs when the prepared statement changes to a generic plan.\n> \n> Initial plan:\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias2 (cost=0.56..2549.70 rows=70 width=36) (actual time=1.901..45.256 rows=65000 loops=1)\n> Output: table1alias2.uniqueid\n> Index Cond: ((table1alias2.col20 = '12345'::bigint) AND (table1alias2.col8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,...98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))\n\nThe rowcount is off by a factor of 1000x.\n\n> after 5 executions of the statement:\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1 table1alias2 (cost=0.56..17.23 rows=1 width=36) (actual time=125.344..126877.822 rows=65000 loops=1)\n> Output: table1alias2.uniqueid\n> Index Cond: (table1alias2.col20 = $1001)\n> Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ...,$1000])::text[]))\n> Rows Removed by Filter: 2670023\n\nAnd the generic plan is cheaper than the previous, custom plan; but slower,\nundoubtedly due to rowcount mis-estimate.\n\n> The second plan looks worse to me as it's applying a filter rather than\n> using an index condition? I don't understand why it's not part of the\n> condition and also why this is so much slower though.\n> If I force a retrieval of all index rows for col20 = '12345' using an\n> ad-hoc query (below, which in my mind is what the 'bad' plan is doing),\n> that only takes 2s (2.7 mil rows). Where's the difference?\n> \n> EXPLAIN (ANALYZE, BUFFERS, TIMING) SELECT COUNT(DISTINCT id) FROM table1\n> WHERE datatype='TypeA' AND col20 = 12345;\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> (cost=0.56..2762.95 rows=55337 width=8) (actual time=3.976..1655.645 rows=2735023 loops=1)\n\nI see you're querying on datetype, which I think isn't in the original query\n(but I'm not sure if it's just renamed?).\n\nUnderestimate usually means that the conditions are redundant or correlated.\nYou could mitigate that either by creating an index on both columns (or add\ndatatype to the existing index), or CREATE STATISTICS ndistinct on those\ncolumns. Or maybe you just need to increase the stats target for col20 (?).\nThen re-analyze.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 16:35:54 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! 
Query jumps from 1s -> 4m" }, { "msg_contents": "James Thompson <[email protected]> writes:\n> The slowness occurs when the prepared statement changes to a generic plan.\n\n> Initial plan:\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias2 (cost=0.56..2549.70 rows=70 width=36) (actual\n> time=1.901..45.256 rows=65000 loops=1)\n> Output: table1alias2.uniqueid\n> Index Cond: ((table1alias2.col20 = '12345'::bigint) AND (table1alias2.\n> col8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,...\n> 98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))\n> Heap Fetches: 10\n> Buffers: shared hit=5048\n\n> after 5 executions of the statement:\n> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias2 (cost=0.56..17.23 rows=1 width=36) (actual\n> time=125.344..126877.822 rows=65000 loops=1)\n> Output: table1alias2.uniqueid\n> Index Cond: (table1alias2.col20 = $1001)\n> Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ...,\n> $1000])::text[]))\n> Rows Removed by Filter: 2670023\n> Heap Fetches: 428\n> Buffers: shared hit=45933 read=42060 dirtied=4\n\nYeah, this is a dynamic we've seen before. The rowcount estimate, and\nhence the cost estimate, for the plan with explicit parameter values is\nway off; but the estimate for the generic plan is even more way off,\ncausing the system to falsely decide that the latter is cheaper.\n\nI've speculated about refusing to believe generic cost estimates if they are\nmore than epsilon less than the concrete cost estimate, but it's not quite\nclear how that should work or whether it'd have its own failure modes.\n\nThe one thing that is totally clear is that these rowcount estimates are\ncrappy. Can you improve them by increasing the stats target for that\ntable? Maybe with less-garbage-y inputs, the system would make the right\nplan choice here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 05 May 2020 17:42:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" }, { "msg_contents": "Just to follow up on this...\nTried increasing stats targets last week + re-analyzing but the query was\njust as bad.\nEnded up increasing the prepareThreshold to prevent server-side prepares\nfor now (and thus later generic statements). This 'fixed' the issue and had\nno noticeable negative effect for our workloads.\n\nI still don't understand why the plan being off makes the query so much\nslower in this case (the plans I shared in the last email don't look too\ndifferent, I don't understand how the filter can add on 2mins of execution\ntime to an index-only scan). 
If anyone does have thoughts on what could be\nhappening I would be very interested to hear, but the main performance\nproblem is effectively solved.\n\nThanks all for the valuable help getting to the bottom of what was\nhappening.\n\nOn Tue, 5 May 2020 at 22:42, Tom Lane <[email protected]> wrote:\n\n> James Thompson <[email protected]> writes:\n> > The slowness occurs when the prepared statement changes to a generic\n> plan.\n>\n> > Initial plan:\n> > -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\n> table1\n> > table1alias2 (cost=0.56..2549.70 rows=70 width=36) (actual\n> > time=1.901..45.256 rows=65000 loops=1)\n> > Output: table1alias2.uniqueid\n> > Index Cond: ((table1alias2.col20 = '12345'::bigint) AND\n> (table1alias2.\n> > col8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,...\n> > 98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))\n> > Heap Fetches: 10\n> > Buffers: shared hit=5048\n>\n> > after 5 executions of the statement:\n> > -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on\n> table1\n> > table1alias2 (cost=0.56..17.23 rows=1 width=36) (actual\n> > time=125.344..126877.822 rows=65000 loops=1)\n> > Output: table1alias2.uniqueid\n> > Index Cond: (table1alias2.col20 = $1001)\n> > Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ...,\n> > $1000])::text[]))\n> > Rows Removed by Filter: 2670023\n> > Heap Fetches: 428\n> > Buffers: shared hit=45933 read=42060 dirtied=4\n>\n> Yeah, this is a dynamic we've seen before. The rowcount estimate, and\n> hence the cost estimate, for the plan with explicit parameter values is\n> way off; but the estimate for the generic plan is even more way off,\n> causing the system to falsely decide that the latter is cheaper.\n>\n> I've speculated about refusing to believe generic cost estimates if they\n> are\n> more than epsilon less than the concrete cost estimate, but it's not quite\n> clear how that should work or whether it'd have its own failure modes.\n>\n> The one thing that is totally clear is that these rowcount estimates are\n> crappy. Can you improve them by increasing the stats target for that\n> table? Maybe with less-garbage-y inputs, the system would make the right\n> plan choice here.\n>\n> regards, tom lane\n>\n\nJust to follow up on this...Tried increasing stats targets last week + re-analyzing but the query was just as bad. Ended up increasing the prepareThreshold to prevent server-side prepares for now (and thus later generic statements). This 'fixed' the issue and had no noticeable negative effect for our workloads.I still don't understand why the plan being off makes the query so much slower in this case (the plans I shared in the last email don't look too different, I don't understand how the filter can add on 2mins of execution time to an index-only scan). 
If anyone does have thoughts on what could be happening I would be very interested to hear, but the main performance problem is effectively solved.Thanks all for the valuable help getting to the bottom of what was happening.On Tue, 5 May 2020 at 22:42, Tom Lane <[email protected]> wrote:James Thompson <[email protected]> writes:\n> The slowness occurs when the prepared statement changes to a generic plan.\n\n> Initial plan:\n> ->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias2  (cost=0.56..2549.70 rows=70 width=36) (actual\n> time=1.901..45.256 rows=65000 loops=1)\n>     Output: table1alias2.uniqueid\n>     Index Cond: ((table1alias2.col20 = '12345'::bigint) AND (table1alias2.\n> col8 = ANY ('{c5986b02-3a02-4639-8147-f286972413ba,...\n> 98ed24b1-76f5-4b0e-bb94-86cf13a4809c}'::text[])))\n>     Heap Fetches: 10\n>     Buffers: shared hit=5048\n\n> after 5 executions of the statement:\n> ->  Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1\n> table1alias2  (cost=0.56..17.23 rows=1 width=36) (actual\n> time=125.344..126877.822 rows=65000 loops=1)\n>     Output: table1alias2.uniqueid\n>     Index Cond: (table1alias2.col20 = $1001)\n>     Filter: ((table1alias2.col8)::text = ANY ((ARRAY[$1, ...,\n> $1000])::text[]))\n>     Rows Removed by Filter: 2670023\n>     Heap Fetches: 428\n>     Buffers: shared hit=45933 read=42060 dirtied=4\n\nYeah, this is a dynamic we've seen before.  The rowcount estimate, and\nhence the cost estimate, for the plan with explicit parameter values is\nway off; but the estimate for the generic plan is even more way off,\ncausing the system to falsely decide that the latter is cheaper.\n\nI've speculated about refusing to believe generic cost estimates if they are\nmore than epsilon less than the concrete cost estimate, but it's not quite\nclear how that should work or whether it'd have its own failure modes.\n\nThe one thing that is totally clear is that these rowcount estimates are\ncrappy.  Can you improve them by increasing the stats target for that\ntable?  Maybe with less-garbage-y inputs, the system would make the right\nplan choice here.\n\n                        regards, tom lane", "msg_date": "Wed, 13 May 2020 15:17:03 +0100", "msg_from": "James Thompson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please help! Query jumps from 1s -> 4m" } ]
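[Editor's worked example] For reference, the custom-plan/generic-plan switch discussed in this thread can be reproduced outside of JDBC with a server-side prepared statement. The sketch below is illustrative only — the table and column names follow the obfuscated ones used in the thread, and the parameter values are made up:

PREPARE q(bigint, text) AS
    SELECT count(*) FROM table1 WHERE col20 = $1 AND col8 = $2;

-- Executions 1-5 are planned with the actual parameter values (custom plans).
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(12345, 'some-value');

-- Repeat the EXECUTE a few more times; from the sixth execution onwards the
-- planner may switch to a cached generic plan, which shows $1/$2 in the plan
-- instead of the literal values -- the same flip seen once the JDBC
-- prepareThreshold was reached.

-- On PostgreSQL 12 and later the choice can be forced while testing:
-- SET plan_cache_mode = force_custom_plan;   -- or force_generic_plan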
[ { "msg_contents": "I have created the following table to duplicate my performance\nnumbers, but I have simplified the table for this question.\n\nI'm running PostgreSQL 12 on the following hardware.\n\nDual Xeon Quad-Core E5320 1.86GHz\n4GB of RAM\n\nThe table structure is\n\n id uuid\n address_api_url text\n check_timestamp timestamp with time zone\n address text\nIndexes:\n \"new_table_pkey\" PRIMARY KEY, btree (id)\n \"test_table_check_timestamp_idx\" btree (check_timestamp)\n\n\nRight now the table has 100 Million rows, but I expect it to reach\nabout 600-700 Million. I am faced with slow updates/inserts and the\nissue is caused by the indices as it gets updates on each\ninsert/update, If I remove the indexes the insert performance remains\nexcellent with millions of rows.\n\nTo demonstrate the update performance I have constructed the following\nquery which updates the timestamp of 10000 rows\n\nUPDATE test_table set check_timestamp = now() FROM(select id from\ntest_table limit 10000) AS subquery where test_table.id = subquery.id;\n\nThat update took about 1 minute and 44 seconds\nTime: 104254.392 ms (01:44.254)\n\nBelow is the EXPLAIN ANALYZE\n\n\nEXPLAIN ANALYZE UPDATE test_table set check_timestamp = now()\nFROM(select id from test_table limit 10000) AS subquery where\ntest_table.id = subquery.id;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on test_table (cost=0.57..28234.86 rows=10000 width=160)\n(actual time=102081.905..102081.905 rows=0 loops=1)\n -> Nested Loop (cost=0.57..28234.86 rows=10000 width=160) (actual\ntime=32.286..101678.652 rows=10000 loops=1)\n -> Subquery Scan on subquery (cost=0.00..514.96 rows=10000\nwidth=56) (actual time=0.048..45.127 rows=10000 loops=1)\n -> Limit (cost=0.00..414.96 rows=10000 width=16)\n(actual time=0.042..26.319 rows=10000 loops=1)\n -> Seq Scan on test_table test_table_1\n(cost=0.00..4199520.04 rows=101204004 width=16) (actual\ntime=0.040..21.542 rows=10000 loops=1)\n -> Index Scan using new_table_pkey on test_table\n(cost=0.57..2.77 rows=1 width=92) (actual time=10.160..10.160 rows=1\nloops=10000)\n Index Cond: (id = subquery.id)\n Planning Time: 0.319 ms\n Execution Time: 102081.967 ms\n(9 rows)\n\nTime: 102122.421 ms (01:42.122)\n\n\n\nwith the right hardware can one partition handle 600 millions of rows\nwith good insert/update performance? if so what kind of hardware\nshould I be looking at? Or would I need to create partitions? I'd like\nto hear some recommendations.\n\n\n", "msg_date": "Sun, 3 May 2020 23:27:54 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "What kinds of storage (ssd or old 5400 rpm)? What else is this machine\nrunning?\n\nWhat configs have been customized such as work_mem or random_page_cost?\n\nWhat kinds of storage (ssd or old 5400 rpm)? What else is this machine running?What configs have been customized such as work_mem or random_page_cost?", "msg_date": "Sun, 3 May 2020 21:45:48 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "On Sun, May 3, 2020 at 11:46 PM Michael Lewis <[email protected]> wrote:\n>\n> What kinds of storage (ssd or old 5400 rpm)? 
What else is this machine running?\n\nNot an SSD, but an old 1TB 7200 RPM HDD\n\n> What configs have been customized such as work_mem or random_page_cost?\n\nwork_mem = 2403kB\nrandom_page_cost = 1.1\n\n\n", "msg_date": "Sun, 3 May 2020 23:51:44 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "On Mon, 4 May 2020 at 15:52, Arya F <[email protected]> wrote:\n>\n> On Sun, May 3, 2020 at 11:46 PM Michael Lewis <[email protected]> wrote:\n> >\n> > What kinds of storage (ssd or old 5400 rpm)? What else is this machine running?\n>\n> Not an SSD, but an old 1TB 7200 RPM HDD\n>\n> > What configs have been customized such as work_mem or random_page_cost?\n>\n> work_mem = 2403kB\n> random_page_cost = 1.1\n\nHow long does it take if you first do:\n\nSET enable_nestloop TO off;\n\nIf you find it's faster then you most likely have random_page_cost set\nunrealistically low. In fact, I'd say it's very unlikely that a nested\nloop join will be a win in this case when random pages must be read\nfrom a mechanical disk, but by all means, try disabling it with the\nabove command and see for yourself.\n\nIf you set random_page_cost so low to solve some other performance\nproblem, then you may wish to look at the effective_cache_size\nsetting. Having that set to something realistic should allow indexes\nto be used more in situations where they're likely to not require as\nmuch random I/O from the disk.\n\nDavid\n\n\n", "msg_date": "Mon, 4 May 2020 16:44:03 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "On Sun, May 03, 2020 at 11:51:44PM -0400, Arya F wrote:\n> On Sun, May 3, 2020 at 11:46 PM Michael Lewis <[email protected]> wrote:\n> > What kinds of storage (ssd or old 5400 rpm)? What else is this machine running?\n> \n> Not an SSD, but an old 1TB 7200 RPM HDD\n> \n> > What configs have been customized such as work_mem or random_page_cost?\n> \n> work_mem = 2403kB\n> random_page_cost = 1.1\n\nI mentioned in February and March that you should plan to set shared_buffers\nto fit the indexes currently being updated.\n\nPartitioning can help with that *if* the writes mostly affect 1-2 partitions at\na time (otherwise not).\n\nOn Wed, Feb 05, 2020 at 11:15:48AM -0600, Justin Pryzby wrote:\n> > Would that work? Or any recommendations how I can achieve good performance\n> > for a lot of writes?\n> \n> Can you use partitioning so the updates are mostly affecting only one table at\n> once, and its indices are of reasonable size, such that they can fit easily in\n> shared_buffers.\n\nOn Sun, Mar 22, 2020 at 08:29:04PM -0500, Justin Pryzby wrote:\n> On Sun, Mar 22, 2020 at 09:22:50PM -0400, Arya F wrote:\n> > I have noticed that my write/update performance starts to dramatically\n> > reduce after about 10 million rows on my hardware. The reason for the\n> > slowdown is the index updates on every write/update.\n> \n> It's commonly true that the indexes need to fit entirely in shared_buffers for\n> good write performance. I gave some suggestions here:\n> https://www.postgresql.org/message-id/20200223101209.GU31889%40telsasoft.com\n\n\n", "msg_date": "Mon, 4 May 2020 04:21:30 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" 
}, { "msg_contents": "On Mon, May 4, 2020 at 12:44 AM David Rowley <[email protected]> wrote:\n> How long does it take if you first do:\n>\n> SET enable_nestloop TO off;\n\nI tried this, but it takes much longer\n\nTime: 318620.319 ms (05:18.620)\n\nBelow is the EXPLAIN ANALYZE\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on test_table (cost=639.96..4581378.80 rows=10000 width=160)\n(actual time=290593.159..290593.159 rows=0 loops=1)\n -> Hash Join (cost=639.96..4581378.80 rows=10000 width=160)\n(actual time=422.313..194430.318 rows=10000 loops=1)\n Hash Cond: (test_table.id = subquery.id)\n -> Seq Scan on test_table (cost=0.00..4200967.98\nrows=101238898 width=92) (actual time=296.970..177731.611\nrows=101189271 loops=1)\n -> Hash (cost=514.96..514.96 rows=10000 width=56) (actual\ntime=125.312..125.312 rows=10000 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 959kB\n -> Subquery Scan on subquery (cost=0.00..514.96\nrows=10000 width=56) (actual time=0.030..123.031 rows=10000 loops=1)\n -> Limit (cost=0.00..414.96 rows=10000\nwidth=16) (actual time=0.024..121.014 rows=10000 loops=1)\n -> Seq Scan on test_table test_table_1\n(cost=0.00..4200967.98 rows=101238898 width=16) (actual\ntime=0.021..120.106 rows=10000 loops=1)\n Planning Time: 0.304 ms\n JIT:\n Functions: 12\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 2.178 ms, Inlining 155.980 ms, Optimization\n100.611 ms, Emission 39.481 ms, Total 298.250 ms\n Execution Time: 290595.448 ms\n(15 rows)\n\n\n> If you find it's faster then you most likely have random_page_cost set\n> unrealistically low. In fact, I'd say it's very unlikely that a nested\n> loop join will be a win in this case when random pages must be read\n> from a mechanical disk, but by all means, try disabling it with the\n> above command and see for yourself.\n\nIt's much slower with SET enable_nestloop TO off. Any other suggestions?\n\n\n", "msg_date": "Tue, 5 May 2020 20:15:14 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <[email protected]> wrote:\n\n> I mentioned in February and March that you should plan to set shared_buffers\n> to fit the indexes currently being updated.\n>\n\nThe following command gives me\n\nselect pg_size_pretty (pg_indexes_size('test_table'));\n pg_size_pretty\n----------------\n 5216 MB\n(1 row)\n\n\nSo right now, the indexes on that table are taking about 5.2 GB, if a\nmachine has 512 GB of RAM and SSDs, is it safe to assume I can achieve\nthe same update that takes 1.5 minutes in less than 5 seconds while\nhaving 600 million rows of data without partitioning?\n\n\n", "msg_date": "Tue, 5 May 2020 20:31:29 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" 
}, { "msg_contents": "On Tue, May 05, 2020 at 08:31:29PM -0400, Arya F wrote:\n> On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <[email protected]> wrote:\n> \n> > I mentioned in February and March that you should plan to set shared_buffers\n> > to fit the indexes currently being updated.\n> \n> The following command gives me\n> \n> select pg_size_pretty (pg_indexes_size('test_table'));\n> pg_size_pretty > 5216 MB\n> \n> So right now, the indexes on that table are taking about 5.2 GB, if a\n> machine has 512 GB of RAM and SSDs, is it safe to assume I can achieve\n> the same update that takes 1.5 minutes in less than 5 seconds while\n> having 600 million rows of data without partitioning?\n\nI am not prepared to guarantee server performance..\n\nBut, to my knowledge, you haven't configured shared_buffers at all. Which I\nthink might be the single most important thing to configure for loading speed\n(with indexes).\n\nCouple months ago, you said your server had 4GB RAM, which isn't much, but if\nshared_buffers is ~100MB, I think that deserves attention.\n\nIf you get good performance with a million rows and 32MB buffers, then you\ncould reasonably hope to get good performance (at least initially) with\n100million rows and 320MB buffers. Scale that up to whatever you expect your\nindex size to be. Be conservative since you may need to add indexes later, and\nyou can expect they'll become bloated, so you may want to run a reindex job.\n\nshared_buffers is frequently set to ~25% of RAM, and if you need to efficiently\nuse indexes larger than what that supports, then you should add RAM, or\nimplement partitioning.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 5 May 2020 20:37:41 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" }, { "msg_contents": "On Tue, May 5, 2020 at 9:37 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Tue, May 05, 2020 at 08:31:29PM -0400, Arya F wrote:\n> > On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <[email protected]> wrote:\n> >\n> > > I mentioned in February and March that you should plan to set shared_buffers\n> > > to fit the indexes currently being updated.\n> >\n> > The following command gives me\n> >\n> > select pg_size_pretty (pg_indexes_size('test_table'));\n> > pg_size_pretty > 5216 MB\n> >\n> > So right now, the indexes on that table are taking about 5.2 GB, if a\n> > machine has 512 GB of RAM and SSDs, is it safe to assume I can achieve\n> > the same update that takes 1.5 minutes in less than 5 seconds while\n> > having 600 million rows of data without partitioning?\n>\n> I am not prepared to guarantee server performance..\n>\n> But, to my knowledge, you haven't configured shared_buffers at all. Which I\n> think might be the single most important thing to configure for loading speed\n> (with indexes).\n>\n\nJust wanted to give an update. I tried this on a VPS with 8GB ram and\nSSDs, the same query now takes 1.2 seconds! What a huge difference!\nthat's without making any changes to postgres.conf file. Very\nimpressive.\n\n\n", "msg_date": "Sun, 10 May 2020 00:10:20 -0400", "msg_from": "Arya F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 600 million rows of data. Bad hardware or need partitioning?" } ]
[ { "msg_contents": "Hi,\n\nI am Oracle DBA for 20+ years and well verse with Oracle internal and all\nrelated details, performance optimization , replication etc...\nSo I 'm looking for acquiring similar expertise for Postgresql.\n\nNow I am using Aurora Postgresql and looking for excellent technical book\nfor Posgresql internal, optimizer and debugging technique , replication\ninternal related book. Or any other resources as appropriate.\n\nPlease suggest. Thanks in advance.\n\nI saw few books:\n\n1. PostgreSQL for DBA volume 1: Structure and Administration\n ISBN-13: 978-1791794125\n ISBN-10: 1791794122\n2. PostgreSQL for DBA: PostgreSQL 12--\n ISBN-13: 978-1796506044\n ISBN-10: 1796506044\n3. *Title*: The Art of PostgreSQL\n *Author*: Dimitri Fontaine\n4. PostgreSQL 11 Administration Cookbook\n author: Simon Riggs, Gianni Ciolli, Et al\n\nhttps://www.packtpub.com/big-data-and-business-intelligence/postgresql-11-administration-cookbook\n\nThanks.\nBhupendra B Babu\n\nHi,I am Oracle DBA for 20+ years and well verse with Oracle internal and all related details, performance optimization , replication etc...So I 'm looking for acquiring similar expertise for Postgresql.Now I am using Aurora Postgresql and looking for excellent technical book for Posgresql internal, optimizer and debugging technique , replication internal related book. Or any other resources as appropriate.Please suggest. Thanks in advance.I saw few books:1. PostgreSQL for DBA volume 1: Structure and Administration   \n \n ISBN-13:\n 978-1791794125\n\n   \n \n ISBN-10:\n 1791794122\n2. PostgreSQL for DBA: PostgreSQL 12--    \n \n ISBN-13:\n 978-1796506044\n\n    ISBN-10:\n 1796506044\n3. Title: The Art of PostgreSQL   \n Author: Dimitri Fontaine4. PostgreSQL 11 Administration Cookbook    author: Simon Riggs, Gianni Ciolli, Et al     https://www.packtpub.com/big-data-and-business-intelligence/postgresql-11-administration-cookbookThanks.Bhupendra B Babu", "msg_date": "Mon, 4 May 2020 15:45:22 -0700", "msg_from": "Bhupendra Babu <[email protected]>", "msg_from_op": true, "msg_subject": "good book or any other resources for Postgresql" }, { "msg_contents": "I don't know the others, but have enjoyed and learned a great deal from The\nArt of PostgreSQL.\n\n>\n\nI don't know the others, but have enjoyed and learned a great deal from The Art of PostgreSQL.", "msg_date": "Mon, 4 May 2020 17:41:41 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good book or any other resources for Postgresql" }, { "msg_contents": "We are currently engaged in an Oracle to Postgres migration. Our DBA team\nhas been going through this book and we have learned a lot from it.\n\nPostgreSQL 12 High Availability Cookbook - Third Edition\nhttps://www.packtpub.com/data/postgresql-12-high-availability-cookbook-third-edition\n\nOn Mon, May 4, 2020 at 5:42 PM Michael Lewis <[email protected]> wrote:\n\n> I don't know the others, but have enjoyed and learned a great deal from\n> The Art of PostgreSQL.\n>\n>>\n\n-- \nCraig\n\nWe are currently engaged in an Oracle to Postgres migration. Our DBA team has been going through this book and we have learned a lot from it. 
PostgreSQL 12 High Availability Cookbook - Third Editionhttps://www.packtpub.com/data/postgresql-12-high-availability-cookbook-third-editionOn Mon, May 4, 2020 at 5:42 PM Michael Lewis <[email protected]> wrote:I don't know the others, but have enjoyed and learned a great deal from The Art of PostgreSQL.\n\n-- Craig", "msg_date": "Mon, 4 May 2020 17:45:57 -0600", "msg_from": "Craig Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: good book or any other resources for Postgresql" }, { "msg_contents": "Thanks all for suggestions.\n\nOn Mon, May 4, 2020 at 4:46 PM Craig Jackson <[email protected]>\nwrote:\n\n> We are currently engaged in an Oracle to Postgres migration. Our DBA team\n> has been going through this book and we have learned a lot from it.\n>\n> PostgreSQL 12 High Availability Cookbook - Third Edition\n>\n> https://www.packtpub.com/data/postgresql-12-high-availability-cookbook-third-edition\n>\n> On Mon, May 4, 2020 at 5:42 PM Michael Lewis <[email protected]> wrote:\n>\n>> I don't know the others, but have enjoyed and learned a great deal from\n>> The Art of PostgreSQL.\n>>\n>>>\n>\n> --\n> Craig\n>\n\n\n-- \nThanks.\nBhupendra B Babu\n\nThanks all for suggestions.On Mon, May 4, 2020 at 4:46 PM Craig Jackson <[email protected]> wrote:We are currently engaged in an Oracle to Postgres migration. Our DBA team has been going through this book and we have learned a lot from it. PostgreSQL 12 High Availability Cookbook - Third Editionhttps://www.packtpub.com/data/postgresql-12-high-availability-cookbook-third-editionOn Mon, May 4, 2020 at 5:42 PM Michael Lewis <[email protected]> wrote:I don't know the others, but have enjoyed and learned a great deal from The Art of PostgreSQL.\n\n-- Craig \n-- Thanks.Bhupendra B Babu", "msg_date": "Mon, 4 May 2020 20:37:49 -0700", "msg_from": "Bhupendra Babu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: good book or any other resources for Postgresql" } ]
[ { "msg_contents": "Version: Postgres 9.6.3 production system (but also tested on Postgres 12)\n\nFor my query the Planner is sometimes choosing an execution plan that uses\n\"Bitmap And\" (depending on the parameters):\n\n-> Bitmap Heap Scan on observation (cost=484.92..488.93 rows=1 width=203)\n(actual time=233.129..330.886 rows=15636 loops=1)\n Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text =\nANY ('{LOC12345678}'::text[])))\n Filter: ((taxa)::text = 'Birds'::text)\n Rows Removed by Filter: 3\n Heap Blocks: exact=1429\n Buffers: shared hit=721 read=944\n -> BitmapAnd (cost=484.92..484.92 rows=1 width=0) (actual\ntime=232.888..232.888 rows=0 loops=1)\n Buffers: shared hit=3 read=233\n -> Bitmap Index Scan on indx_observation_user_id (cost=0.00..81.14\nrows=3277 width=0) (actual time=169.003..169.003 rows=32788 loops=1)\n Index Cond: ((user_id)::text = 'USER123'::text)\n Buffers: shared hit=2 read=134\n -> Bitmap Index Scan on indx_observation_loc_id (cost=0.00..403.52\nrows=13194 width=0) (actual time=63.520..63.520 rows=15853 loops=1)\n Index Cond: ((loc_id)::text = ANY ('{LOC12345678}'::text[]))\n Buffers: shared hit=1 read=99\n\n(fragment of explain plan)\n\nHowever it is estimating the number of rows as 1, whereas in this case the\nactual number of rows is 15636 (it can be much higher).\n\nThe Planner then carries this estimate of \"1 row\" through the rest of the\nquery (which is quite complex), and then makes poor choices about joins.\ne.g. uses \"Nested Loop Left Join\" because it's only expecting one row,\nwhereas in practice it has to do 15636 loops which is very slow.\n\nNote that in cases where the Planner selects a single Index Scan for this\nquery (with different parameters), the Planner makes an accurate estimate\nof the number of rows and then makes sensible selections of joins (i.e.\nquick).\ni.e. the issue seems to be with the \"Bitmap And\".\n\nI don't have an index with both user_id & loc_id, as this is one of several\ndifferent combinations that can arise (it would require quite a few indexes\nto cover all the possible combinations). However if I did have such an\nindex, the planner would presumably be able to use the statistics for\nuser_id and loc_id to estimate the number of rows.\n\nSo why can't it make an accurate estimate of the rows with a \"Bitmap And\" &\n\" Bitmap Heap Scan\"? 
(as above)\n\nSteve Pritchard\n-- \nSteve Pritchard\nDatabase Developer\n\nBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK\nTel: +44 (0)1842 750050, fax: +44 (0)1842 750030\nRegistered Charity No 216652 (England & Wales) No SC039193 (Scotland)\nCompany Limited by Guarantee No 357284 (England & Wales)\n\nVersion: Postgres 9.6.3 production system (but also tested on Postgres 12)For my query the Planner is sometimes choosing an execution plan that uses \"Bitmap And\" (depending on the parameters):->  Bitmap Heap Scan on observation  (cost=484.92..488.93 rows=1 width=203) (actual time=233.129..330.886 rows=15636 loops=1)  Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text = ANY ('{LOC12345678}'::text[])))  Filter: ((taxa)::text = 'Birds'::text)  Rows Removed by Filter: 3  Heap Blocks: exact=1429  Buffers: shared hit=721 read=944  ->  BitmapAnd  (cost=484.92..484.92 rows=1 width=0) (actual time=232.888..232.888 rows=0 loops=1)    Buffers: shared hit=3 read=233    ->  Bitmap Index Scan on indx_observation_user_id  (cost=0.00..81.14 rows=3277 width=0) (actual time=169.003..169.003 rows=32788 loops=1)        Index Cond: ((user_id)::text = 'USER123'::text)        Buffers: shared hit=2 read=134    ->  Bitmap Index Scan on indx_observation_loc_id  (cost=0.00..403.52 rows=13194 width=0) (actual time=63.520..63.520 rows=15853 loops=1)        Index Cond: ((loc_id)::text = ANY ('{LOC12345678}'::text[]))        Buffers: shared hit=1 read=99(fragment of explain plan)However it is estimating the number of rows as 1, whereas in this case the actual number of rows is 15636 (it can be much higher).The Planner then carries this estimate of \"1 row\" through the rest of the query (which is quite complex), and then makes poor choices about joins.e.g. uses \"Nested Loop Left Join\" because it's only expecting one row, whereas in practice it has to do 15636 loops which is very slow.Note that in cases where the Planner selects a single Index Scan for this query (with different parameters), the Planner makes an accurate estimate of the number of rows and then makes sensible selections of joins (i.e. quick).i.e. the issue seems to be with the \"Bitmap And\".I don't have an index with both user_id & loc_id, as this is one of several different combinations that can arise (it would require quite a few indexes to cover all the possible combinations). However if I did have such an index, the planner would presumably be able to use the statistics for user_id and loc_id to estimate the number of rows.So why can't it make an accurate estimate of the rows with a \"Bitmap And\" & \"\n\nBitmap Heap Scan\"? 
(as above)Steve Pritchard-- Steve PritchardDatabase DeveloperBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK Tel: +44 (0)1842 750050, fax: +44 (0)1842 750030Registered Charity No 216652 (England & Wales) No SC039193 (Scotland)Company Limited by Guarantee No 357284 (England & Wales)", "msg_date": "Wed, 6 May 2020 17:19:48 +0100", "msg_from": "Steve Pritchard <[email protected]>", "msg_from_op": true, "msg_subject": "Inaccurate Rows estimate for \"Bitmap And\" causes Planner to choose\n wrong join" }, { "msg_contents": "On Wed, May 06, 2020 at 05:19:48PM +0100, Steve Pritchard wrote:\n> Version: Postgres 9.6.3 production system (but also tested on Postgres 12)\n> \n> For my query the Planner is sometimes choosing an execution plan that uses\n> \"Bitmap And\" (depending on the parameters):\n> \n> The Planner then carries this estimate of \"1 row\" through the rest of the\n> query (which is quite complex), and then makes poor choices about joins.\n> e.g. uses \"Nested Loop Left Join\" because it's only expecting one row,\n> whereas in practice it has to do 15636 loops which is very slow.\n\n> Note that in cases where the Planner selects a single Index Scan for this\n> query (with different parameters), the Planner makes an accurate estimate\n> of the number of rows and then makes sensible selections of joins (i.e.\n> quick).\n> i.e. the issue seems to be with the \"Bitmap And\".\n> \n> I don't have an index with both user_id & loc_id, as this is one of several\n> different combinations that can arise (it would require quite a few indexes\n> to cover all the possible combinations). However if I did have such an\n> index, the planner would presumably be able to use the statistics for\n> user_id and loc_id to estimate the number of rows.\n> \n> So why can't it make an accurate estimate of the rows with a \"Bitmap And\" &\n> \" Bitmap Heap Scan\"? (as above)\n\nIt probably *has* statistics for user_id and loc_id, but doesn't have stats for\n(user_id,loc_id).\n\nPresumbly the conditions are partially redundant, so loc_id => user_id\n(strictly implies or just correlated) or the other way around.\n\nIn pg10+ you can use \"CREATE STATISTICS (dependencies)\" to improve that.\nhttps://www.postgresql.org/docs/devel/sql-createstatistics.html\n\nOtherwise you can use the \"CREATE TYPE / CREATE INDEX\" trick Tomas described here:\nhttps://www.postgresql.org/message-id/20190424003633.ruvhbv5ro3fawo67%40development\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 6 May 2020 11:34:57 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inaccurate Rows estimate for \"Bitmap And\" causes Planner to\n choose wrong join" }, { "msg_contents": "On Wed, May 6, 2020 at 12:20 PM Steve Pritchard <[email protected]>\nwrote:\n\n> Version: Postgres 9.6.3 production system (but also tested on Postgres 12)\n>\n> For my query the Planner is sometimes choosing an execution plan that uses\n> \"Bitmap And\" (depending on the parameters):\n>\n> -> Bitmap Heap Scan on observation (cost=484.92..488.93 rows=1\n> width=203) (actual time=233.129..330.886 rows=15636 loops=1)\n> Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text =\n> ANY ('{LOC12345678}'::text[])))\n>\n\nIf you change \" = ANY(array_of_one)\" to \" = scalar\", does that change\nanything? 
You might be able to fix this (in v12) using CREATE STATISTICS,\nbut I don't know if that mechanism can see through the ANY(array_of_one)\nwrapper.\n\n\n> Note that in cases where the Planner selects a single Index Scan for this\n> query (with different parameters), the Planner makes an accurate estimate\n> of the number of rows and then makes sensible selections of joins (i.e.\n> quick).\n> i.e. the issue seems to be with the \"Bitmap And\".\n>\n\n\nI don't know if this nitpick matters, but I don't think that that is how\nthe planner works. The row estimates work from the top down, not the\nbottom up. The row estimate of 1 is based on what conditions the bitmap\nheap scan implements, it is not arrived at by combining the estimates from\nthe index scans below it. If it were to change to a different type of node\nbut implemented the same conditions, I think it would have the same row\nestimate.\n\n\n>\n> I don't have an index with both user_id & loc_id, as this is one of\n> several different combinations that can arise (it would require quite a few\n> indexes to cover all the possible combinations).\n>\n\nAre you actually experiencing problems with those other combinations as\nwell? If not, I wouldn't worry about solving hypothetical problems. If\nthose other combinations are actually problems and you go with CREATE\nSTATISTICS, then you would have to be creating a lot of different\nstatistics. That would still be ugly, but at least the overhead for\nstatistics is lower than for indexes.\n\n\n> However if I did have such an index, the planner would presumably be able\n> to use the statistics for user_id and loc_id to estimate the number of rows.\n>\n\nIndexes on physical columns do not have statistics, so making that index\nwould not help with the estimation. (Expressional indexes do have\nstatistics, but I don't see that helping you here). So while this node\nwould execute faster with that index, it would still be kicking the unshown\nnested loop left join 15,636 times when it thinks it will be doing it\nonce, and so would still be slow. The most robust solution might be to\nmake the outer part of that nested loop left join faster, so that your\nsystem would be more tolerant of statistics problems.\n\n\n>\n> So why can't it make an accurate estimate of the rows with a \"Bitmap And\"\n> & \" Bitmap Heap Scan\"? (as above)\n>\n\nIn the absence of custom statistics, it assumes the selectivities of user_id\n= 'USER123', of loc_id = ANY ('{LOC12345678}'::text[]), and of taxa =\n'Birds' are all independent of each other and can be multiplied to arrive\nat the overall selectivity. But clearly that is not the case. Bird\nwatchers mostly watch near where they live, not in random other places.\n\nCheers,\n\nJeff\n\nOn Wed, May 6, 2020 at 12:20 PM Steve Pritchard <[email protected]> wrote:Version: Postgres 9.6.3 production system (but also tested on Postgres 12)For my query the Planner is sometimes choosing an execution plan that uses \"Bitmap And\" (depending on the parameters):->  Bitmap Heap Scan on observation  (cost=484.92..488.93 rows=1 width=203) (actual time=233.129..330.886 rows=15636 loops=1)  Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text = ANY ('{LOC12345678}'::text[])))If you change \" = ANY(array_of_one)\" to \" = scalar\", does that change anything?  You might be able to fix this (in v12) using CREATE STATISTICS, but I don't know if that mechanism can see through the ANY(array_of_one) wrapper. 
Note that in cases where the Planner selects a single Index Scan for this query (with different parameters), the Planner makes an accurate estimate of the number of rows and then makes sensible selections of joins (i.e. quick).i.e. the issue seems to be with the \"Bitmap And\".I don't know if this nitpick matters, but I don't think that that is how the planner works.  The row estimates work from the top down, not the bottom up.  The row estimate of 1 is based on what conditions the bitmap heap scan implements, it is not arrived at by combining the estimates from the index scans below it.  If it were to change to a different type of node but implemented the same conditions, I think it would have the same row estimate.   I don't have an index with both user_id & loc_id, as this is one of several different combinations that can arise (it would require quite a few indexes to cover all the possible combinations). Are you actually experiencing problems with those other combinations as well?  If not, I wouldn't worry about solving hypothetical problems.  If those other combinations are actually problems and you go with CREATE STATISTICS, then you would have to be creating a lot of different statistics.  That would still be ugly, but at least the overhead for statistics is lower than for indexes. However if I did have such an index, the planner would presumably be able to use the statistics for user_id and loc_id to estimate the number of rows.Indexes on physical columns do not have statistics, so making that index would not help with the estimation.  (Expressional indexes do have statistics, but I don't see that helping you here).  \n\nSo while this node would execute faster with that index, it would still be kicking the unshown nested loop left join 15,636 times when it thinks it will be doing it once, and so would still be slow.  The most robust solution might be to make the outer part of that nested loop left join faster, so that your system would be more tolerant of statistics problems.   So why can't it make an accurate estimate of the rows with a \"Bitmap And\" & \"\n\nBitmap Heap Scan\"? (as above)In the absence of custom statistics, it assumes the selectivities of user_id = 'USER123', of loc_id = ANY ('{LOC12345678}'::text[]), and of taxa = 'Birds' are all independent of each other and can be multiplied to arrive at the overall selectivity.  But clearly that is not the case.  Bird watchers mostly watch near where they live, not in random other places.Cheers,Jeff", "msg_date": "Wed, 6 May 2020 14:24:55 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inaccurate Rows estimate for \"Bitmap And\" causes Planner to\n choose wrong join" }, { "msg_contents": "Many thanks Justin & Jeff for your replies.\n\nPresumbly the conditions are partially redundant, so loc_id => user_id\n\n\nYes you're right. I had overlooked this.\n\nI've done some further testing and this confirms what you say: if the WHERE\ncolumns are independent, then the Planner makes a reasonable estimate of\nthe number of rows. irrespective of whether it uses a single index or a\n\"Bitmap And\" of two indexes.\n\nI've also tested \"create statistics\" on Postgres 12:\n\n - gives good estimate with WHERE user_id = 'USER123' and loc_id =\n 'LOC12345678'\n - but Plan Rows = 5 with WHERE user_id = 'USER123' and loc_id = ANY('{\n LOC12345678 }'::text[])\n - Note: if I omit the user_id condition then it gives a good estimate,\n i.e. 
with WHERE loc_id = ANY('{ LOC12345678 }'::text[])\n\nSo statistics objects don't seem to be able to handle the combination of\ndependencies and arrays (at least in 12.2).\n\nSteve\n\nOn Wed, 6 May 2020 at 19:25, Jeff Janes <[email protected]> wrote:\n\n> On Wed, May 6, 2020 at 12:20 PM Steve Pritchard <[email protected]>\n> wrote:\n>\n>> Version: Postgres 9.6.3 production system (but also tested on Postgres 12)\n>>\n>> For my query the Planner is sometimes choosing an execution plan that\n>> uses \"Bitmap And\" (depending on the parameters):\n>>\n>> -> Bitmap Heap Scan on observation (cost=484.92..488.93 rows=1\n>> width=203) (actual time=233.129..330.886 rows=15636 loops=1)\n>> Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text\n>> = ANY ('{LOC12345678}'::text[])))\n>>\n>\n> If you change \" = ANY(array_of_one)\" to \" = scalar\", does that change\n> anything? You might be able to fix this (in v12) using CREATE STATISTICS,\n> but I don't know if that mechanism can see through the ANY(array_of_one)\n> wrapper.\n>\n>\n>> Note that in cases where the Planner selects a single Index Scan for this\n>> query (with different parameters), the Planner makes an accurate estimate\n>> of the number of rows and then makes sensible selections of joins (i.e.\n>> quick).\n>> i.e. the issue seems to be with the \"Bitmap And\".\n>>\n>\n>\n> I don't know if this nitpick matters, but I don't think that that is how\n> the planner works. The row estimates work from the top down, not the\n> bottom up. The row estimate of 1 is based on what conditions the bitmap\n> heap scan implements, it is not arrived at by combining the estimates from\n> the index scans below it. If it were to change to a different type of node\n> but implemented the same conditions, I think it would have the same row\n> estimate.\n>\n>\n>>\n>> I don't have an index with both user_id & loc_id, as this is one of\n>> several different combinations that can arise (it would require quite a few\n>> indexes to cover all the possible combinations).\n>>\n>\n> Are you actually experiencing problems with those other combinations as\n> well? If not, I wouldn't worry about solving hypothetical problems. If\n> those other combinations are actually problems and you go with CREATE\n> STATISTICS, then you would have to be creating a lot of different\n> statistics. That would still be ugly, but at least the overhead for\n> statistics is lower than for indexes.\n>\n>\n>> However if I did have such an index, the planner would presumably be able\n>> to use the statistics for user_id and loc_id to estimate the number of rows.\n>>\n>\n> Indexes on physical columns do not have statistics, so making that index\n> would not help with the estimation. (Expressional indexes do have\n> statistics, but I don't see that helping you here). So while this node\n> would execute faster with that index, it would still be kicking the unshown\n> nested loop left join 15,636 times when it thinks it will be doing it\n> once, and so would still be slow. The most robust solution might be to\n> make the outer part of that nested loop left join faster, so that your\n> system would be more tolerant of statistics problems.\n>\n>\n>>\n>> So why can't it make an accurate estimate of the rows with a \"Bitmap And\"\n>> & \" Bitmap Heap Scan\"? 
(as above)\n>>\n>\n> In the absence of custom statistics, it assumes the selectivities of user_id\n> = 'USER123', of loc_id = ANY ('{LOC12345678}'::text[]), and of taxa =\n> 'Birds' are all independent of each other and can be multiplied to arrive\n> at the overall selectivity. But clearly that is not the case. Bird\n> watchers mostly watch near where they live, not in random other places.\n>\n> Cheers,\n>\n> Jeff\n>\n\n\n-- \nSteve Pritchard\nDatabase Developer\n\nBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK\nTel: +44 (0)1842 750050, fax: +44 (0)1842 750030\nRegistered Charity No 216652 (England & Wales) No SC039193 (Scotland)\nCompany Limited by Guarantee No 357284 (England & Wales)\n\nMany thanks Justin & Jeff for your replies.Presumbly the conditions are partially redundant, so loc_id => user_idYes you're right. I had overlooked this.I've done some further testing and this confirms what you say: if the WHERE columns are independent, then the Planner makes a reasonable estimate of the number of rows. irrespective of whether it uses a single index or a \"Bitmap And\" of two indexes.I've also tested \"create statistics\" on Postgres 12:gives good estimate with WHERE user_id = 'USER123' and loc_id = 'LOC12345678'but \n\nPlan Rows = 5\n\nwith WHERE \n\nuser_id = 'USER123'  and loc_id = ANY('{\n\nLOC12345678\n\n}'::text[])Note: if I omit the user_id condition \n\nthen it gives a good estimate, i.e. with WHERE loc_id = ANY('{\n\nLOC12345678\n\n}'::text[])So statistics objects don't seem to be able to handle the combination of dependencies and arrays (at least in 12.2).SteveOn Wed, 6 May 2020 at 19:25, Jeff Janes <[email protected]> wrote:On Wed, May 6, 2020 at 12:20 PM Steve Pritchard <[email protected]> wrote:Version: Postgres 9.6.3 production system (but also tested on Postgres 12)For my query the Planner is sometimes choosing an execution plan that uses \"Bitmap And\" (depending on the parameters):->  Bitmap Heap Scan on observation  (cost=484.92..488.93 rows=1 width=203) (actual time=233.129..330.886 rows=15636 loops=1)  Recheck Cond: (((user_id)::text = 'USER123'::text) AND ((loc_id)::text = ANY ('{LOC12345678}'::text[])))If you change \" = ANY(array_of_one)\" to \" = scalar\", does that change anything?  You might be able to fix this (in v12) using CREATE STATISTICS, but I don't know if that mechanism can see through the ANY(array_of_one) wrapper. Note that in cases where the Planner selects a single Index Scan for this query (with different parameters), the Planner makes an accurate estimate of the number of rows and then makes sensible selections of joins (i.e. quick).i.e. the issue seems to be with the \"Bitmap And\".I don't know if this nitpick matters, but I don't think that that is how the planner works.  The row estimates work from the top down, not the bottom up.  The row estimate of 1 is based on what conditions the bitmap heap scan implements, it is not arrived at by combining the estimates from the index scans below it.  If it were to change to a different type of node but implemented the same conditions, I think it would have the same row estimate.   I don't have an index with both user_id & loc_id, as this is one of several different combinations that can arise (it would require quite a few indexes to cover all the possible combinations). Are you actually experiencing problems with those other combinations as well?  If not, I wouldn't worry about solving hypothetical problems.  
If those other combinations are actually problems and you go with CREATE STATISTICS, then you would have to be creating a lot of different statistics.  That would still be ugly, but at least the overhead for statistics is lower than for indexes. However if I did have such an index, the planner would presumably be able to use the statistics for user_id and loc_id to estimate the number of rows.Indexes on physical columns do not have statistics, so making that index would not help with the estimation.  (Expressional indexes do have statistics, but I don't see that helping you here).  \n\nSo while this node would execute faster with that index, it would still be kicking the unshown nested loop left join 15,636 times when it thinks it will be doing it once, and so would still be slow.  The most robust solution might be to make the outer part of that nested loop left join faster, so that your system would be more tolerant of statistics problems.   So why can't it make an accurate estimate of the rows with a \"Bitmap And\" & \"\n\nBitmap Heap Scan\"? (as above)In the absence of custom statistics, it assumes the selectivities of user_id = 'USER123', of loc_id = ANY ('{LOC12345678}'::text[]), and of taxa = 'Birds' are all independent of each other and can be multiplied to arrive at the overall selectivity.  But clearly that is not the case.  Bird watchers mostly watch near where they live, not in random other places.Cheers,Jeff\n-- Steve PritchardDatabase DeveloperBritish Trust for Ornithology, The Nunnery, Thetford, Norfolk IP24 2PU, UK Tel: +44 (0)1842 750050, fax: +44 (0)1842 750030Registered Charity No 216652 (England & Wales) No SC039193 (Scotland)Company Limited by Guarantee No 357284 (England & Wales)", "msg_date": "Wed, 13 May 2020 10:33:35 +0100", "msg_from": "Steve Pritchard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inaccurate Rows estimate for \"Bitmap And\" causes Planner to\n choose wrong join" } ]
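A minimal sketch of the CREATE STATISTICS approach discussed in the thread above, assuming the observation table and the user_id/loc_id columns mentioned there; the statistics name is invented for illustration and this has not been run against the poster's schema:

-- Declare a functional dependency between the correlated columns so the
-- planner stops multiplying their individual selectivities (PostgreSQL 10+).
CREATE STATISTICS observation_user_loc_stats (dependencies)
    ON user_id, loc_id
    FROM observation;

-- The statistics object is only populated when the table is next analyzed.
ANALYZE observation;

-- Per the follow-up above, the dependency helps when loc_id is compared with
-- plain equality, but not when it is wrapped in = ANY(array).  Rewriting a
-- single-element ANY() as an equality (as suggested earlier in the thread)
-- keeps the row estimate accurate:
--   WHERE user_id = 'USER123' AND loc_id = 'LOC12345678'
-- rather than
--   WHERE user_id = 'USER123' AND loc_id = ANY('{LOC12345678}'::text[])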
[ { "msg_contents": "Hi,\n\nPostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by\ngcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n\nWe have noticed huge difference interms of execution plan ( response time)\n, When we pass the direct values Vs inner query to IN clause.\n\nHigh level details of the use case are as follows\n\n - As part of the SQL there are 2 tables named Process_instance (master)\n and Process_activity ( child)\n - Wanted to fetch TOP 50 rows from Process_activity table for the given\n values of the Process_instance.\n - When we used Inner Join / Inner query ( query1) between parent table\n and child table , LIMIT is not really taking in to account. Instead it is\n fetching more rows and columns that required, and finally limiting the\n result\n -\n\n\n*Query1*\n\nweb_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT\npa.process_activity_id FROM process_activity pa WHERE pa.app_id =\n'427380312000560' AND pa.created > '1970-01-01 00:00:00' AND\npa.process_instance_id in *(SELECT pi.process_instance_id FROM\nprocess_instance pi WHERE pi.user_id = '317079413683604' AND pi.app_id =\n'427380312000560')* ORDER BY pa.process_instance_id,pa.created limit 50;\n\n\n QUERY PLAN\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1071.47..1071.55 rows=31 width=24) (actual\ntime=85.958..85.991 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=43065\n -> Sort (cost=1071.47..1071.55 rows=31 width=24) (actual\ntime=85.956..85.971 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Sort Key: pa.process_instance_id, pa.created\n Sort Method: top-N heapsort Memory: 28kB\n Buffers: shared hit=43065\n -> Nested Loop (cost=1.14..1070.70 rows=31 width=24) (actual\ntime=0.031..72.183 rows=46992 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id,\npa.created\n Buffers: shared hit=43065\n -> Index Scan using fki_conv_konotor_user_user_id on\npublic.process_instance pi (cost=0.43..2.66 rows=1 width=8) (actual\ntime=0.010..0.013 rows=2 loops=1)\n Output: pi.process_instance_id\n Index Cond: (pi.user_id = '317079413683604'::bigint)\n Filter: (pi.app_id = '427380312000560'::bigint)\n Buffers: shared hit=5\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..1053.80 rows=1425 width=24) (actual\ntime=0.015..20.702 rows=*23496* loops=2)\n\n* Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\npa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\npa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\npa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated,\npa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\npa.status_fragment, pa.internal_meta, pa.interaction_id,\npa.do_not_translate, pa.should_translate, pa.in_reply_to*\n Index Cond: ((pa.process_instance_id =\npi.process_instance_id) AND (pa.app_id = '427380312000560'::bigint) AND\n(pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n Buffers: shared hit=43060\n Planning time: 0.499 ms\n Execution time: 86.040 ms\n(22 rows)\n\n*Query 2*\n\nweb_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) 
SELECT\npa.process_activity_id AS m_process_activity_id FROM process_activity m\nWHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'\nAND pa.process_instance_id in (\n*240117466018927,325820556706970,433008275197305*) ORDER BY\npa.process_instance_id,pa.created limit 50;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.70..37.66 rows=50 width=24) (actual time=0.023..0.094\nrows=50 loops=1)\n Output: process_activity_id, process_instance_id, created\n Buffers: shared hit=50\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..3124.97 rows=4226 width=24) (actual\ntime=0.022..0.079 *rows=50* loops=1)\n Output: process_activity_id, process_instance_id, created\n Index Cond: ((pa.process_instance_id = ANY\n('{140117466018927,225820556706970,233008275197305}'::bigint[])) AND\n(pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01\n00:00:00'::timestamp without time zone))\n Buffers: shared hit=50\n Planning time: 0.167 ms\n Execution time: 0.137 ms\n(9 rows)\n\n\nCan someone explain\n\n - Why It is fetching more columns and more rows, incase of inner query ?\n - Is there any option to really limit values with INNER JOIN, INNER\n query ? If yes, can you please share information on this ?\n\nThanks in advance for your time and suggestions.\n\nRegards, Amar\n\nHi,PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bitWe have noticed huge difference interms of execution plan ( response time) , When we pass the direct values  Vs  inner query to IN clause.High level details of the use case are as followsAs part of the SQL there are 2 tables named Process_instance (master) and Process_activity ( child)Wanted to fetch TOP 50 rows from  Process_activity table for the given values of the Process_instance.When we used Inner Join / Inner query ( query1)  between parent table and child table , LIMIT is not really taking in to account. 
Instead it is fetching more rows and columns that required, and finally limiting the resultQuery1web_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00' AND pa.process_instance_id in (SELECT pi.process_instance_id FROM process_instance pi WHERE pi.user_id = '317079413683604' AND pi.app_id = '427380312000560') ORDER BY pa.process_instance_id,pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=1071.47..1071.55 rows=31 width=24) (actual time=85.958..85.991 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=43065   ->  Sort  (cost=1071.47..1071.55 rows=31 width=24) (actual time=85.956..85.971 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=43065         ->  Nested Loop  (cost=1.14..1070.70 rows=31 width=24) (actual time=0.031..72.183 rows=46992 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=43065               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.010..0.013 rows=2 loops=1)                     Output: pi.process_instance_id                     Index Cond: (pi.user_id = '317079413683604'::bigint)                     Filter: (pi.app_id = '427380312000560'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1053.80 rows=1425 width=24) (actual time=0.015..20.702 rows=23496 loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((pa.process_instance_id = pi.process_instance_id) AND (pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=43060 Planning time: 0.499 ms Execution time: 86.040 ms(22 rows)Query 2web_1=>  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS m_process_activity_id FROM process_activity m WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00' AND pa.process_instance_id in (240117466018927,325820556706970,433008275197305) ORDER BY pa.process_instance_id,pa.created 
limit 50;                                                                                                           QUERY PLAN                                                                                                            --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.70..37.66 rows=50 width=24) (actual time=0.023..0.094 rows=50 loops=1)   Output: process_activity_id, process_instance_id, created   Buffers: shared hit=50   ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..3124.97 rows=4226 width=24) (actual time=0.022..0.079 rows=50 loops=1)         Output: process_activity_id, process_instance_id, created         Index Cond: ((pa.process_instance_id = ANY ('{140117466018927,225820556706970,233008275197305}'::bigint[])) AND (pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))         Buffers: shared hit=50 Planning time: 0.167 ms Execution time: 0.137 ms(9 rows)Can someone explain  Why It is fetching more columns and more rows, incase of inner query ?Is there any option to really limit values with INNER JOIN, INNER query ? If yes, can you please share information on this ?Thanks in advance for your time and suggestions.Regards, Amar", "msg_date": "Thu, 7 May 2020 16:49:31 +0530", "msg_from": "Amarendra Konda <[email protected]>", "msg_from_op": true, "msg_subject": "Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n> * As part of the SQL there are 2 tables named Process_instance\n> (master) and Process_activity ( child)\n> * Wanted to fetch TOP 50 rows from  Process_activity table for the\n> given values of the Process_instance.\n> * When we used Inner Join / Inner query ( query1)  between parent\n> table and child table , LIMIT is not really taking in to account.\n> Instead it is fetching more rows and columns that required, and\n> finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. 
Something like below(obviously \nnot tested):\n\nSELECT\n pa.process_activity_id\nFROM\n process_activity pa\nJOIN\n process_instance pi\nON\n pa.process_instance_id = pi.process_instance_id\nWHERE\n pa.app_id = '427380312000560'\n AND\n pa.created > '1970-01-01 00:00:00'\n AND\n pi.user_id = '317079413683604'\nORDER BY\n pa.process_instance_id,\n pa.created\nLIMIT 50;\n\nThe second query is not equivalent as you are not filtering on user_id \nand you are filtering on only three process_instance_id's.\n\n\n> *\n> \n> \n> *Query1*\n> \n> web_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT \n> pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = \n> '427380312000560' AND pa.created > '1970-01-01 00:00:00' AND \n> pa.process_instance_id in *_(SELECT pi.process_instance_id FROM \n> process_instance pi WHERE pi.user_id = '317079413683604' AND pi.app_id = \n> '427380312000560')_* ORDER BY pa.process_instance_id,pa.created limit 50;\n> \n> \n>                                                                 QUERY PLAN\n> \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=1071.47..1071.55 rows=31 width=24) (actual \n> time=85.958..85.991 rows=50 loops=1)\n>    Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>    Buffers: shared hit=43065\n>    ->  Sort  (cost=1071.47..1071.55 rows=31 width=24) (actual \n> time=85.956..85.971 rows=50 loops=1)\n>          Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>          Sort Key: pa.process_instance_id, pa.created\n>          Sort Method: top-N heapsort  Memory: 28kB\n>          Buffers: shared hit=43065\n>          ->  Nested Loop  (cost=1.14..1070.70 rows=31 width=24) (actual \n> time=0.031..72.183 rows=46992 loops=1)\n>                Output: pa.process_activity_id, pa.process_instance_id, \n> pa.created\n>                Buffers: shared hit=43065\n>                ->  Index Scan using fki_conv_konotor_user_user_id on \n> public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual \n> time=0.010..0.013 rows=2 loops=1)\n>                      Output: pi.process_instance_id\n>                      Index Cond: (pi.user_id = '317079413683604'::bigint)\n>                      Filter: (pi.app_id = '427380312000560'::bigint)\n>                      Buffers: shared hit=5\n>                ->  Index Scan using \n> process_activity_process_instance_id_app_id_created_idx on \n> public.process_activity pa  (cost=0.70..1053.80 rows=1425 width=24) \n> (actual time=0.015..20.702 rows=*23496* loops=2)\n> * Output: pa.process_activity_id, pa.process_activity_type, \n> pa.voice_url, pa.process_activity_user_id, pa.app_id, \n> pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, \n> pa.label_category_id, pa.label_id, pa.csat_response_id, \n> pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.market\n> ing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, \n> pa.internal_meta, pa.interaction_id, pa.do_not_translate, \n> pa.should_translate, pa.in_reply_to*\n>                      Index Cond: ((pa.process_instance_id = \n> pi.process_instance_id) AND (pa.app_id = '427380312000560'::bigint) AND \n> (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n>                      Buffers: shared hit=43060\n>  Planning 
time: 0.499 ms\n>  Execution time: 86.040 ms\n> (22 rows)\n> \n> *_Query 2_*\n> \n> web_1=>  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT \n> pa.process_activity_id AS m_process_activity_id FROM process_activity m \n> WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 \n> 00:00:00' AND pa.process_instance_id in \n> (*240117466018927,325820556706970,433008275197305*) ORDER BY \n> pa.process_instance_id,pa.created limit 50;\n> \n>                                    QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=0.70..37.66 rows=50 width=24) (actual time=0.023..0.094 \n> rows=50 loops=1)\n>    Output: process_activity_id, process_instance_id, created\n>    Buffers: shared hit=50\n>    ->  Index Scan using \n> process_activity_process_instance_id_app_id_created_idx on \n> public.process_activity pa  (cost=0.70..3124.97 rows=4226 width=24) \n> (actual time=0.022..0.079 *rows=50* loops=1)\n>          Output: process_activity_id, process_instance_id, created\n>          Index Cond: ((pa.process_instance_id = ANY \n> ('{140117466018927,225820556706970,233008275197305}'::bigint[])) AND \n> (pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01 \n> 00:00:00'::timestamp without time zone))\n>          Buffers: shared hit=50\n>  Planning time: 0.167 ms\n>  Execution time: 0.137 ms\n> (9 rows)\n> \n> \n> Can someone explain\n> \n> * Why It is fetching more columns and more rows, incase of inner query ?\n> * Is there any option to really limit values with INNER JOIN, INNER\n> query ? If yes, can you please share information on this ?\n> \n> Thanks in advance for your time and suggestions.\n> \n> Regards, Amar\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Thu, 7 May 2020 07:40:11 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]>\nwrote:\n\n> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> > Hi,\n> >\n> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled\n> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> >\n> > We have noticed huge difference interms of execution plan ( response\n> > time) , When we pass the direct values Vs inner query to IN clause.\n> >\n> > High level details of the use case are as follows\n> >\n> > * As part of the SQL there are 2 tables named Process_instance\n> > (master) and Process_activity ( child)\n> > * Wanted to fetch TOP 50 rows from Process_activity table for the\n> > given values of the Process_instance.\n> > * When we used Inner Join / Inner query ( query1) between parent\n> > table and child table , LIMIT is not really taking in to account.\n> > Instead it is fetching more rows and columns that required, and\n> > finally limiting the result\n>\n> It is doing what you told it to do which is SELECT all\n> process_instance_i's for user_id='317079413683604' and app_id =\n> '427380312000560' and then filtering further. I am going to guess that\n> if you run the inner query alone you will find it returns ~23496 rows.\n> You might have better results if you an actual join between\n> process_activity and process_instance. 
Something like below(obviously\n> not tested):\n>\n\nWhat the OP seems to want is a semi-join:\n\n(not tested)\n\nSELECT pa.process_activity_id\nFROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created\n> '1970-01-01 00:00:00'\nAND EXISTS (\n SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND\npi.user_id = '317079413683604'\n)\nORDER BY\npa.process_instance_id,\npa.created limit 50;\n\nI'm unsure exactly how this will impact the plan choice but it should be an\nimprovement, and in any case more correctly defines what it is you are\nlooking for.\n\nDavid J.\n\nOn Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):What the OP seems to want is a semi-join:(not tested)SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')ORDER BY pa.process_instance_id,pa.created limit 50;I'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.David J.", "msg_date": "Thu, 7 May 2020 08:46:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "Hi Adrian,\n\nThanks for the reply. And i have kept latest execution plans, for various\nSQL statements ( inner join, sub queries and placing values instead of sub\nquery) .\nAs suggested, tried with INNER JOIN, however result was similar to\nsubquery.\n\nIs there any way we can tell the optimiser to process less number of rows\nbased on the LIMIT value ? ( i.e. 
may be SQL re-write) ?\n\n\n*INNER SQL*\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pi.process_instance_id AS\npi_process_instance_id FROM process_instance pi WHERE pi.user_id =\n'137074931866340' AND pi.app_id = '126502930200650';\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using fki_conv_konotor_user_user_id on public.process_instance\npi (cost=0.43..2.66 rows=1 width=8) *(actual time=0.018..0.019 rows=2\nloops=1)*\n Output: process_instance_id\n Index Cond: (pi.user_id = '137074931866340'::bigint)\n Filter: (pi.app_id = '126502930200650'::bigint)\n Buffers: shared hit=5\n Planning time: 0.119 ms\n Execution time: 0.041 ms\n\n\n*Full query - Sub query*\n\n EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\nAS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n'126502930200650' AND pa.created > '1970-01-01 00:00:00' AND\npa.process_instance_id in (SELECT pi.process_instance_id AS\npi_process_instance_id FROM process_instance pi WHERE pi.user_id =\n'137074931866340' AND pi.app_id = '126502930200650') ORDER BY\npa.process_instance_id, pa.created limit 50;\n\n\n\n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------\n Limit (cost=1072.91..1072.99 rows=31 width=24) (actual\ntime=744.386..744.415 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=3760 read=39316\n -> Sort (cost=1072.91..1072.99 rows=31 width=24) (actual\ntime=744.384..744.396 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Sort Key: pa.process_instance_id, pa.created\n Sort Method: top-N heapsort Memory: 28kB\n Buffers: shared hit=3760 read=39316\n -> Nested Loop (cost=1.14..1072.14 rows=31 width=24) (actual\ntime=0.044..727.297 rows=47011 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id,\npa.created\n Buffers: shared hit=3754 read=39316\n -> Index Scan using fki_conv_konotor_user_user_id on\npublic.process_instance pi (cost=0.43..2.66 rows=1 width=8) *(actual\ntime=0.009..0.015 rows=2 loops=1)*\n Output: pi.process_instance_id\n Index Cond: (pi.user_id = '137074931866340'::bigint)\n Filter: (pi.app_id = '126502930200650'::bigint)\n Buffers: shared hit=5\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..1055.22 rows=1427 width=24) *(actual\ntime=0.029..349.000 rows=23506 loops=2)*\n Output: pa.process_activity_id,\npa.process_activity_type, pa.voice_url, pa.process_activity_user_id,\npa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source,\npa.label_category_id, pa.label_id, pa.csat_respons\ne_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id,\npa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\npa.status_fragment, pa.internal_meta, pa.interaction_id,\npa.do_not_translate, pa.should_tr\nanslate, pa.in_reply_to\n Index Cond: ((pa.process_instance_id =\npi.process_instance_id) AND (pa.app_id = '126502930200650'::bigint) 
AND\n(pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n Buffers: shared hit=3749 read=39316\n Planning time: 2.547 ms\n Execution time: 744.499 ms\n(22 rows)\n\n*Full query - INNER JOIN*\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\nAS pa_process_activity_id FROM process_activity pa INNER JOIN\nprocess_instance pi ON pi.process_instance_id = pa.process_instance_id AND\npa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND\npi.user_id = '137074931866340' AND pi.app_id = '126502930200650' ORDER BY\npa.process_instance_id, pa.created limit 50;\n\n\n\n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------\n Limit (cost=1072.91..1072.99 rows=31 width=24) (actual\ntime=87.803..87.834 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=43070\n -> Sort (cost=1072.91..1072.99 rows=31 width=24) (actual\ntime=87.803..87.815 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Sort Key: pa.process_instance_id, pa.created\n Sort Method: top-N heapsort Memory: 28kB\n Buffers: shared hit=43070\n -> Nested Loop (cost=1.14..1072.14 rows=31 width=24) (actual\ntime=0.030..73.847 rows=47011 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id,\npa.created\n Buffers: shared hit=43070\n -> Index Scan using fki_conv_konotor_user_user_id on\npublic.process_instance pi (cost=0.43..2.66 rows=1 width=8) *(actual\ntime=0.015..0.018 rows=2 loops=1)*\n Output: pi.process_instance_id\n Index Cond: (pi.user_id = '137074931866340'::bigint)\n Filter: (pi.app_id = '126502930200650'::bigint)\n Buffers: shared hit=5\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..1055.22 rows=1427 width=24) *(actual\ntime=0.011..21.447 rows=23506 loops=2)*\n Output: pa.process_activity_id,\npa.process_activity_type, pa.voice_url, pa.process_activity_user_id,\npa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source,\npa.label_category_id, pa.label_id, pa.csat_respons\ne_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id,\npa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\npa.status_fragment, pa.internal_meta, pa.interaction_id,\npa.do_not_translate, pa.should_tr\nanslate, pa.in_reply_to\n Index Cond: ((pa.process_instance_id =\npi.process_instance_id) AND (pa.app_id = '126502930200650'::bigint) AND\n(pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n Buffers: shared hit=43065\n Planning time: 0.428 ms\n Execution time: 87.905 ms\n\n\n*FULL Query - INNER SQL replaced with result*\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS\nm_process_activity_id FROM process_activity pa WHERE pa.app_id =\n'126502930200650' AND pa.created > '1970-01-01 00:00:00' AND\npa.process_instance_id in (*137074941043913,164357609323111*) ORDER BY\npa.process_instance_id,pa.created limit 50;\n\n QUERY 
PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------\n Limit (cost=0.70..37.65 rows=50 width=24) (actual time=0.016..0.095\nrows=50 loops=1)\n Output: process_activity_id, process_instance_id, created\n Buffers: shared hit=55\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..2100.39 rows=2841 width=24) *(actual\ntime=0.015..0.077 rows=50 loops=1)*\n Output: process_activity_id, process_instance_id, created\n Index Cond: ((pa.process_instance_id = ANY\n('{137074941043913,164357609323111}'::bigint[])) AND (pa.app_id =\n'126502930200650'::bigint) AND (m.created > '1970-01-01\n00:00:00'::timestamp without time\nzone))\n Buffers: shared hit=55\n Planning time: 1.710 ms\n Execution time: 0.147 ms\n\n\nRegards, Amar\n\n\n\n\nOn Thu, May 7, 2020 at 8:10 PM Adrian Klaver <[email protected]>\nwrote:\n\n> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> > Hi,\n> >\n> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled\n> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> >\n> > We have noticed huge difference interms of execution plan ( response\n> > time) , When we pass the direct values Vs inner query to IN clause.\n> >\n> > High level details of the use case are as follows\n> >\n> > * As part of the SQL there are 2 tables named Process_instance\n> > (master) and Process_activity ( child)\n> > * Wanted to fetch TOP 50 rows from Process_activity table for the\n> > given values of the Process_instance.\n> > * When we used Inner Join / Inner query ( query1) between parent\n> > table and child table , LIMIT is not really taking in to account.\n> > Instead it is fetching more rows and columns that required, and\n> > finally limiting the result\n>\n> It is doing what you told it to do which is SELECT all\n> process_instance_i's for user_id='317079413683604' and app_id =\n> '427380312000560' and then filtering further. I am going to guess that\n> if you run the inner query alone you will find it returns ~23496 rows.\n> You might have better results if you an actual join between\n> process_activity and process_instance. 
Something like below(obviously\n> not tested):\n>\n> SELECT\n> pa.process_activity_id\n> FROM\n> process_activity pa\n> JOIN\n> process_instance pi\n> ON\n> pa.process_instance_id = pi.process_instance_id\n> WHERE\n> pa.app_id = '427380312000560'\n> AND\n> pa.created > '1970-01-01 00:00:00'\n> AND\n> pi.user_id = '317079413683604'\n> ORDER BY\n> pa.process_instance_id,\n> pa.created\n> LIMIT 50;\n>\n> The second query is not equivalent as you are not filtering on user_id\n> and you are filtering on only three process_instance_id's.\n>\n>\n> > *\n> >\n> >\n> > *Query1*\n> >\n> > web_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT\n> > pa.process_activity_id FROM process_activity pa WHERE pa.app_id =\n> > '427380312000560' AND pa.created > '1970-01-01 00:00:00' AND\n> > pa.process_instance_id in *_(SELECT pi.process_instance_id FROM\n> > process_instance pi WHERE pi.user_id = '317079413683604' AND pi.app_id =\n> > '427380312000560')_* ORDER BY pa.process_instance_id,pa.created limit 50;\n> >\n> >\n> > QUERY\n> PLAN\n> >\n> >\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=1071.47..1071.55 rows=31 width=24) (actual\n> > time=85.958..85.991 rows=50 loops=1)\n> > Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> > Buffers: shared hit=43065\n> > -> Sort (cost=1071.47..1071.55 rows=31 width=24) (actual\n> > time=85.956..85.971 rows=50 loops=1)\n> > Output: pa.process_activity_id, pa.process_instance_id,\n> pa.created\n> > Sort Key: pa.process_instance_id, pa.created\n> > Sort Method: top-N heapsort Memory: 28kB\n> > Buffers: shared hit=43065\n> > -> Nested Loop (cost=1.14..1070.70 rows=31 width=24) (actual\n> > time=0.031..72.183 rows=46992 loops=1)\n> > Output: pa.process_activity_id, pa.process_instance_id,\n> > pa.created\n> > Buffers: shared hit=43065\n> > -> Index Scan using fki_conv_konotor_user_user_id on\n> > public.process_instance pi (cost=0.43..2.66 rows=1 width=8) (actual\n> > time=0.010..0.013 rows=2 loops=1)\n> > Output: pi.process_instance_id\n> > Index Cond: (pi.user_id =\n> '317079413683604'::bigint)\n> > Filter: (pi.app_id = '427380312000560'::bigint)\n> > Buffers: shared hit=5\n> > -> Index Scan using\n> > process_activity_process_instance_id_app_id_created_idx on\n> > public.process_activity pa (cost=0.70..1053.80 rows=1425 width=24)\n> > (actual time=0.015..20.702 rows=*23496* loops=2)\n> > * Output: pa.process_activity_id, pa.process_activity_type,\n> > pa.voice_url, pa.process_activity_user_id, pa.app_id,\n> > pa.process_instance_id, pa.alias, pa.read_by_user, pa.source,\n> > pa.label_category_id, pa.label_id, pa.csat_response_id,\n> > pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id,\n> pa.market\n> > ing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment,\n> > pa.internal_meta, pa.interaction_id, pa.do_not_translate,\n> > pa.should_translate, pa.in_reply_to*\n> > Index Cond: ((pa.process_instance_id =\n> > pi.process_instance_id) AND (pa.app_id = '427380312000560'::bigint) AND\n> > (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n> > Buffers: shared hit=43060\n> > Planning time: 0.499 ms\n> > Execution time: 86.040 ms\n> > (22 rows)\n> >\n> > *_Query 2_*\n> >\n> > web_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT\n> > pa.process_activity_id AS 
m_process_activity_id FROM process_activity m\n> > WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01\n> > 00:00:00' AND pa.process_instance_id in\n> > (*240117466018927,325820556706970,433008275197305*) ORDER BY\n> > pa.process_instance_id,pa.created limit 50;\n> >\n> > QUERY PLAN\n> >\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.70..37.66 rows=50 width=24) (actual time=0.023..0.094\n> > rows=50 loops=1)\n> > Output: process_activity_id, process_instance_id, created\n> > Buffers: shared hit=50\n> > -> Index Scan using\n> > process_activity_process_instance_id_app_id_created_idx on\n> > public.process_activity pa (cost=0.70..3124.97 rows=4226 width=24)\n> > (actual time=0.022..0.079 *rows=50* loops=1)\n> > Output: process_activity_id, process_instance_id, created\n> > Index Cond: ((pa.process_instance_id = ANY\n> > ('{140117466018927,225820556706970,233008275197305}'::bigint[])) AND\n> > (pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01\n> > 00:00:00'::timestamp without time zone))\n> > Buffers: shared hit=50\n> > Planning time: 0.167 ms\n> > Execution time: 0.137 ms\n> > (9 rows)\n> >\n> >\n> > Can someone explain\n> >\n> > * Why It is fetching more columns and more rows, incase of inner query\n> ?\n> > * Is there any option to really limit values with INNER JOIN, INNER\n> > query ? If yes, can you please share information on this ?\n> >\n> > Thanks in advance for your time and suggestions.\n> >\n> > Regards, Amar\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n\nHi Adrian,Thanks for the reply.  And i have kept latest execution plans, for various SQL statements ( inner join, sub queries and placing values instead of sub query) . As suggested, tried with INNER JOIN, however result was similar to subquery. Is there any way we can tell the optimiser to process less number of rows based on the LIMIT value ? ( i.e. may be SQL re-write) ? 
INNER SQLEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pi.process_instance_id AS pi_process_instance_id FROM process_instance pi WHERE pi.user_id = '137074931866340' AND pi.app_id = '126502930200650';                                                                     QUERY PLAN                                                                      ----------------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.018..0.019 rows=2 loops=1)   Output: process_instance_id   Index Cond: (pi.user_id = '137074931866340'::bigint)   Filter: (pi.app_id = '126502930200650'::bigint)   Buffers: shared hit=5 Planning time: 0.119 ms Execution time: 0.041 msFull query - Sub query EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND pa.process_instance_id in (SELECT pi.process_instance_id AS pi_process_instance_id FROM process_instance pi WHERE pi.user_id = '137074931866340' AND pi.app_id = '126502930200650') ORDER BY pa.process_instance_id, pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1072.91..1072.99 rows=31 width=24) (actual time=744.386..744.415 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=3760 read=39316   ->  Sort  (cost=1072.91..1072.99 rows=31 width=24) (actual time=744.384..744.396 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=3760 read=39316         ->  Nested Loop  (cost=1.14..1072.14 rows=31 width=24) (actual time=0.044..727.297 rows=47011 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=3754 read=39316               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.009..0.015 rows=2 loops=1)                     Output: pi.process_instance_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1055.22 rows=1427 width=24) (actual time=0.029..349.000 rows=23506 
loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((pa.process_instance_id = pi.process_instance_id) AND (pa.app_id = '126502930200650'::bigint) AND (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=3749 read=39316 Planning time: 2.547 ms Execution time: 744.499 ms(22 rows)Full query - INNER JOINEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa INNER JOIN process_instance pi ON pi.process_instance_id = pa.process_instance_id AND pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND pi.user_id = '137074931866340' AND pi.app_id = '126502930200650' ORDER BY pa.process_instance_id, pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1072.91..1072.99 rows=31 width=24) (actual time=87.803..87.834 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=43070   ->  Sort  (cost=1072.91..1072.99 rows=31 width=24) (actual time=87.803..87.815 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=43070         ->  Nested Loop  (cost=1.14..1072.14 rows=31 width=24) (actual time=0.030..73.847 rows=47011 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=43070               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.015..0.018 rows=2 loops=1)                     Output: pi.process_instance_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1055.22 rows=1427 width=24) (actual time=0.011..21.447 rows=23506 loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, 
pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((pa.process_instance_id = pi.process_instance_id) AND (pa.app_id = '126502930200650'::bigint) AND (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=43065 Planning time: 0.428 ms Execution time: 87.905 msFULL Query - INNER SQL replaced with resultEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS m_process_activity_id FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND pa.process_instance_id in (137074941043913,164357609323111)  ORDER BY pa.process_instance_id,pa.created limit 50;                                                                                                   QUERY PLAN                                                                                                    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.70..37.65 rows=50 width=24) (actual time=0.016..0.095 rows=50 loops=1)   Output: process_activity_id, process_instance_id, created   Buffers: shared hit=55   ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..2100.39 rows=2841 width=24) (actual time=0.015..0.077 rows=50 loops=1)         Output: process_activity_id, process_instance_id, created         Index Cond: ((pa.process_instance_id = ANY ('{137074941043913,164357609323111}'::bigint[])) AND (pa.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))         Buffers: shared hit=55 Planning time: 1.710 ms Execution time: 0.147 msRegards, AmarOn Thu, May 7, 2020 at 8:10 PM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. 
Something like below(obviously \nnot tested):\n\nSELECT\n     pa.process_activity_id\nFROM\n     process_activity pa\nJOIN\n     process_instance pi\nON\n     pa.process_instance_id = pi.process_instance_id\nWHERE\n     pa.app_id = '427380312000560'\n     AND\n          pa.created > '1970-01-01 00:00:00'\n     AND\n          pi.user_id = '317079413683604'\nORDER BY\n     pa.process_instance_id,\n     pa.created\nLIMIT 50;\n\nThe second query is not equivalent as you are not filtering on user_id \nand you are filtering on only three process_instance_id's.\n\n\n>   *\n> \n> \n> *Query1*\n> \n> web_1=> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT \n> pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = \n> '427380312000560' AND pa.created > '1970-01-01 00:00:00' AND \n> pa.process_instance_id in *_(SELECT pi.process_instance_id FROM \n> process_instance pi WHERE pi.user_id = '317079413683604' AND pi.app_id = \n> '427380312000560')_* ORDER BY pa.process_instance_id,pa.created limit 50;\n>                                                                          \n>                                                                          \n>                                                                  QUERY PLAN\n>                                                  \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>   Limit  (cost=1071.47..1071.55 rows=31 width=24) (actual \n> time=85.958..85.991 rows=50 loops=1)\n>     Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>     Buffers: shared hit=43065\n>     ->  Sort  (cost=1071.47..1071.55 rows=31 width=24) (actual \n> time=85.956..85.971 rows=50 loops=1)\n>           Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>           Sort Key: pa.process_instance_id, pa.created\n>           Sort Method: top-N heapsort  Memory: 28kB\n>           Buffers: shared hit=43065\n>           ->  Nested Loop  (cost=1.14..1070.70 rows=31 width=24) (actual \n> time=0.031..72.183 rows=46992 loops=1)\n>                 Output: pa.process_activity_id, pa.process_instance_id, \n> pa.created\n>                 Buffers: shared hit=43065\n>                 ->  Index Scan using fki_conv_konotor_user_user_id on \n> public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual \n> time=0.010..0.013 rows=2 loops=1)\n>                       Output: pi.process_instance_id\n>                       Index Cond: (pi.user_id = '317079413683604'::bigint)\n>                       Filter: (pi.app_id = '427380312000560'::bigint)\n>                       Buffers: shared hit=5\n>                 ->  Index Scan using \n> process_activity_process_instance_id_app_id_created_idx on \n> public.process_activity pa  (cost=0.70..1053.80 rows=1425 width=24) \n> (actual time=0.015..20.702 rows=*23496* loops=2)\n> * Output: pa.process_activity_id, pa.process_activity_type, \n> pa.voice_url, pa.process_activity_user_id, pa.app_id, \n> pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, \n> pa.label_category_id, pa.label_id, pa.csat_response_id, \n> pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.market\n> ing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, \n> pa.internal_meta, pa.interaction_id, pa.do_not_translate, \n> pa.should_translate, pa.in_reply_to*\n>    
                   Index Cond: ((pa.process_instance_id = \n> pi.process_instance_id) AND (pa.app_id = '427380312000560'::bigint) AND \n> (pa.created > '1970-01-01 00:00:00'::timestamp without time zone))\n>                       Buffers: shared hit=43060\n>   Planning time: 0.499 ms\n>   Execution time: 86.040 ms\n> (22 rows)\n> \n> *_Query 2_*\n> \n> web_1=>  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT \n> pa.process_activity_id AS m_process_activity_id FROM process_activity m \n> WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 \n> 00:00:00' AND pa.process_instance_id in \n> (*240117466018927,325820556706970,433008275197305*) ORDER BY \n> pa.process_instance_id,pa.created limit 50;\n>                                                                          \n>                                     QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>   Limit  (cost=0.70..37.66 rows=50 width=24) (actual time=0.023..0.094 \n> rows=50 loops=1)\n>     Output: process_activity_id, process_instance_id, created\n>     Buffers: shared hit=50\n>     ->  Index Scan using \n> process_activity_process_instance_id_app_id_created_idx on \n> public.process_activity pa  (cost=0.70..3124.97 rows=4226 width=24) \n> (actual time=0.022..0.079 *rows=50* loops=1)\n>           Output: process_activity_id, process_instance_id, created\n>           Index Cond: ((pa.process_instance_id = ANY \n> ('{140117466018927,225820556706970,233008275197305}'::bigint[])) AND \n> (pa.app_id = '427380312000560'::bigint) AND (pa.created > '1970-01-01 \n> 00:00:00'::timestamp without time zone))\n>           Buffers: shared hit=50\n>   Planning time: 0.167 ms\n>   Execution time: 0.137 ms\n> (9 rows)\n> \n> \n> Can someone explain\n> \n>   * Why It is fetching more columns and more rows, incase of inner query ?\n>   * Is there any option to really limit values with INNER JOIN, INNER\n>     query ? 
If yes, can you please share information on this ?\n> \n> Thanks in advance for your time and suggestions.\n> \n> Regards, Amar\n\n\n-- \nAdrian Klaver\[email protected]", "msg_date": "Thu, 7 May 2020 23:06:02 +0530", "msg_from": "Amarendra Konda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "Hi David,\n\nThanks for the reply.This has optimized number of rows.\n\nCan you please explain, why it is getting more columns in output, even\nthough we have asked for only one column ?\n\n\n EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\nAS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n'126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\nSELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND\npi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created\nlimit 50;\n\n\n\n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------\n Limit (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629\nrows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=274950\n -> Nested Loop Semi Join (cost=1.14..266660108.78 rows=367790473\nwidth=24) (actual time=821.282..891.607 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=274950\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..262062725.21 rows=367790473\nwidth=32) (actual time=821.253..891.517 rows=50 loops=1)\n\n\n* Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\npa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\npa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\npa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated,\npa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\npa.status_fragment, pa.internal_meta, pa.interaction_id,\npa.do_not_translate, pa.should_translate, pa.in_reply_to*\n Index Cond: ((m.app_id = '126502930200650'::bigint) AND\n(m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n Buffers: shared hit=274946\n -> Materialize (cost=0.43..2.66 rows=1 width=8) (actual\ntime=0.001..0.001 rows=1 loops=50)\n Output: pi.app_id\n Buffers: shared hit=4\n -> Index Scan using fki_conv_konotor_user_user_id on\npublic.process_instance pi (cost=0.43..2.66 rows=1 width=8) (actual\ntime=0.020..0.020 rows=1 loops=1)\n Output: pi.app_id\n Index Cond: (pi.user_id = '137074931866340'::bigint)\n Filter: (pi.app_id = '126502930200650'::bigint)\n Buffers: shared hit=4\n Planning time: 0.297 ms\n Execution time: 891.686 ms\n(20 rows)\n\nOn Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]>\nwrote:\n\n> On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]>\n> wrote:\n>\n>> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n>> > Hi,\n>> >\n>> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled\n>> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n>> >\n>> > We have noticed huge difference interms of execution plan ( response\n>> > time) , When we pass the direct values Vs inner query to IN clause.\n>> >\n>> > High level details of the use case are as follows\n>> >\n>> > * As part of the SQL there are 2 tables named Process_instance\n>> > (master) and Process_activity ( child)\n>> > * Wanted to fetch TOP 50 rows from Process_activity table for the\n>> > given values of the Process_instance.\n>> > * When we used Inner Join / Inner query ( query1) between parent\n>> > table and child table , LIMIT is not really taking in to account.\n>> > Instead it is fetching more rows and columns that required, and\n>> > finally limiting the result\n>>\n>> It is doing what you told it to do which is SELECT all\n>> process_instance_i's for user_id='317079413683604' and app_id =\n>> '427380312000560' and then filtering further. I am going to guess that\n>> if you run the inner query alone you will find it returns ~23496 rows.\n>> You might have better results if you an actual join between\n>> process_activity and process_instance. Something like below(obviously\n>> not tested):\n>>\n>\n> What the OP seems to want is a semi-join:\n>\n> (not tested)\n>\n> SELECT pa.process_activity_id\n> FROM process_activity pa WHERE pa.app_id = '427380312000560' AND\n> pa.created > '1970-01-01 00:00:00'\n> AND EXISTS (\n> SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND\n> pi.user_id = '317079413683604'\n> )\n> ORDER BY\n> pa.process_instance_id,\n> pa.created limit 50;\n>\n> I'm unsure exactly how this will impact the plan choice but it should be\n> an improvement, and in any case more correctly defines what it is you are\n> looking for.\n>\n> David J.\n>\n>\n\nHi David,Thanks for the reply.This has optimized number of rows. Can you please explain, why it is getting more columns in output, even though we have asked for only one column ?  
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created limit 50;                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=274950   ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 width=24) (actual time=821.282..891.607 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Buffers: shared hit=274950         ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 width=32) (actual time=821.253..891.517 rows=50 loops=1)               Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to               Index Cond: ((m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))               Buffers: shared hit=274946         ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=50)               Output: pi.app_id               Buffers: shared hit=4               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=1)                     Output: pi.app_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=4 Planning time: 0.297 ms Execution time: 891.686 ms(20 rows)On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]> wrote:On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):What the OP seems to want is a semi-join:(not tested)SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')ORDER BY pa.process_instance_id,pa.created limit 50;I'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.David J.", "msg_date": "Thu, 7 May 2020 23:19:18 +0530", "msg_from": "Amarendra Konda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "Hi David,\n\nIn earlier reply, Over looked another condition, hence please ignore that\none\n\nHere is the correct one with all the needed conditions. 
According to the\nlatest one, exists also not limiting rows from the process_activity table.\n\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\nAS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n'126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\nSELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND\n*pi.process_instance_id\n= pa.process_instance_id * AND pi.user_id = '137074931866340') ORDER BY\npa.process_instance_id, pa.created limit 50;\n\n\n\n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------\n Limit (cost=1079.44..1079.52 rows=32 width=24) (actual\ntime=85.747..85.777 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Buffers: shared hit=43070\n -> Sort (cost=1079.44..1079.52 rows=32 width=24) (actual\ntime=85.745..85.759 rows=50 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id, pa.created\n Sort Key: pa.process_instance_id, pa.created\n Sort Method: top-N heapsort Memory: 28kB\n Buffers: shared hit=43070\n -> Nested Loop (cost=1.14..1078.64 rows=32 width=24) (actual\ntime=0.025..72.115 rows=47011 loops=1)\n Output: pa.process_activity_id, pa.process_instance_id,\npa.created\n Buffers: shared hit=43070\n -> Index Scan using fki_conv_konotor_user_user_id on\npublic.process_instance pi (cost=0.43..2.66 rows=1 width=16) (actual\ntime=0.010..0.015 rows=2 loops=1)\n Output: pi.app_id, pi.process_instance_id\n Index Cond: (c.user_id = '137074931866340'::bigint)\n Filter: (c.app_id = '126502930200650'::bigint)\n Buffers: shared hit=5\n -> Index Scan using\nprocess_activity_process_instance_id_app_id_created_idx on\npublic.process_activity pa (cost=0.70..1061.62 rows=1436 width=32) *(actual\ntime=0.011..20.320 rows=23506 loops=2)*\n Output: pa.process_activity_id,\npa.process_activity_type, pa.voice_url, pa.process_activity_user_id,\npa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source,\npa.label_category_id, pa.label_id, pa.csat_respons\ne_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id,\npa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\npa.status_fragment, pa.internal_meta, pa.interaction_id,\npa.do_not_translate, pa.should_tr\nanslate, pa.in_reply_to\n Index Cond: ((m.process_instance_id =\npi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND\n(m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n Buffers: shared hit=43065\n Planning time: 0.455 ms\n Execution time: 85.830 ms\n\nOn Thu, May 7, 2020 at 11:19 PM Amarendra Konda <[email protected]>\nwrote:\n\n> Hi David,\n>\n> Thanks for the reply.This has optimized number of rows.\n>\n> Can you please explain, why it is getting more columns in output, even\n> though we have asked for only one column ?\n>\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\n> AS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n> '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\n> SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND\n> pi.user_id = '137074931866340') ORDER BY 
pa.process_instance_id,m.created\n> limit 50;\n>\n>\n>\n> QUERY PLAN\n>\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------\n> Limit (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629\n> rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Buffers: shared hit=274950\n> -> Nested Loop Semi Join (cost=1.14..266660108.78 rows=367790473\n> width=24) (actual time=821.282..891.607 rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Buffers: shared hit=274950\n> -> Index Scan using\n> process_activity_process_instance_id_app_id_created_idx on\n> public.process_activity pa (cost=0.70..262062725.21 rows=367790473\n> width=32) (actual time=821.253..891.517 rows=50 loops=1)\n>\n>\n> * Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\n> pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\n> pa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\n> pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated,\n> pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\n> pa.status_fragment, pa.internal_meta, pa.interaction_id,\n> pa.do_not_translate, pa.should_translate, pa.in_reply_to*\n> Index Cond: ((m.app_id = '126502930200650'::bigint) AND\n> (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n> Buffers: shared hit=274946\n> -> Materialize (cost=0.43..2.66 rows=1 width=8) (actual\n> time=0.001..0.001 rows=1 loops=50)\n> Output: pi.app_id\n> Buffers: shared hit=4\n> -> Index Scan using fki_conv_konotor_user_user_id on\n> public.process_instance pi (cost=0.43..2.66 rows=1 width=8) (actual\n> time=0.020..0.020 rows=1 loops=1)\n> Output: pi.app_id\n> Index Cond: (pi.user_id = '137074931866340'::bigint)\n> Filter: (pi.app_id = '126502930200650'::bigint)\n> Buffers: shared hit=4\n> Planning time: 0.297 ms\n> Execution time: 891.686 ms\n> (20 rows)\n>\n> On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <\n> [email protected]> wrote:\n>\n>> On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]>\n>> wrote:\n>>\n>>> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n>>> > Hi,\n>>> >\n>>> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled\n>>> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n>>> >\n>>> > We have noticed huge difference interms of execution plan ( response\n>>> > time) , When we pass the direct values Vs inner query to IN clause.\n>>> >\n>>> > High level details of the use case are as follows\n>>> >\n>>> > * As part of the SQL there are 2 tables named Process_instance\n>>> > (master) and Process_activity ( child)\n>>> > * Wanted to fetch TOP 50 rows from Process_activity table for the\n>>> > given values of the Process_instance.\n>>> > * When we used Inner Join / Inner query ( query1) between parent\n>>> > table and child table , LIMIT is not really taking in to account.\n>>> > Instead it is fetching more rows and columns that required, and\n>>> > finally limiting the result\n>>>\n>>> It is doing what you told it to do which is SELECT all\n>>> process_instance_i's for user_id='317079413683604' and app_id =\n>>> '427380312000560' and then filtering further. I am going to guess that\n>>> if you run the inner query alone you will find it returns ~23496 rows.\n>>> You might have better results if you an actual join between\n>>> process_activity and process_instance. Something like below(obviously\n>>> not tested):\n>>>\n>>\n>> What the OP seems to want is a semi-join:\n>>\n>> (not tested)\n>>\n>> SELECT pa.process_activity_id\n>> FROM process_activity pa WHERE pa.app_id = '427380312000560' AND\n>> pa.created > '1970-01-01 00:00:00'\n>> AND EXISTS (\n>> SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND\n>> pi.user_id = '317079413683604'\n>> )\n>> ORDER BY\n>> pa.process_instance_id,\n>> pa.created limit 50;\n>>\n>> I'm unsure exactly how this will impact the plan choice but it should be\n>> an improvement, and in any case more correctly defines what it is you are\n>> looking for.\n>>\n>> David J.\n>>\n>>\n\nHi David,In earlier reply, Over looked another condition, hence please ignore that oneHere is the correct one with all the needed conditions. 
According to the latest one, exists also not limiting rows from the process_activity table.EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,  pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.747..85.777 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=43070   ->  Sort  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.745..85.759 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=43070         ->  Nested Loop  (cost=1.14..1078.64 rows=32 width=24) (actual time=0.025..72.115 rows=47011 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=43070               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=16) (actual time=0.010..0.015 rows=2 loops=1)                     Output: pi.app_id, pi.process_instance_id                     Index Cond: (c.user_id = '137074931866340'::bigint)                     Filter: (c.app_id = '126502930200650'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=43065 Planning time: 0.455 ms Execution time: 85.830 msOn Thu, May 7, 2020 at 11:19 PM Amarendra Konda 
<[email protected]> wrote:Hi David,Thanks for the reply.This has optimized number of rows. Can you please explain, why it is getting more columns in output, even though we have asked for only one column ?  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created limit 50;                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=274950   ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 width=24) (actual time=821.282..891.607 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Buffers: shared hit=274950         ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 width=32) (actual time=821.253..891.517 rows=50 loops=1)               Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to               Index Cond: ((m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))               Buffers: shared hit=274946         ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=50)               Output: pi.app_id               Buffers: shared hit=4               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=1)                     Output: pi.app_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=4 Planning time: 0.297 ms Execution time: 891.686 ms(20 rows)On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]> wrote:On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):What the OP seems to want is a semi-join:(not tested)SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')ORDER BY pa.process_instance_id,pa.created limit 50;I'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.David J.", "msg_date": "Thu, 7 May 2020 23:37:42 +0530", "msg_from": "Amarendra Konda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "Hi Virendra,\n\nThanks for your time.\n\nHere is the table and index structure\n\n* process_activity*\n Table \"public.process_activity\"\n Column | Type | Modifiers\n\n--------------------+-----------------------------+----------------------------\n process_activity_id | bigint | not null\ndefault next_id()\n process_activity_type | smallint | not null\n voice_url | text |\n process_activity_user_id | bigint | not null\n app_id | bigint | not null\n process_instance_id | bigint | not null\n alias | text | not null\n read_by_user | smallint | default 0\n source | smallint | default 0\n label_category_id | bigint |\n label_id | bigint |\n csat_response_id | bigint |\n process_activity_fragments | jsonb |\n created | timestamp without time zone | not null\n updated | timestamp without time zone |\n rule_id | bigint |\n marketing_reply_id | bigint |\n delivered_at | timestamp without time zone |\n reply_fragments | jsonb |\n status_fragment | jsonb |\n internal_meta | jsonb |\n interaction_id | text |\n do_not_translate | boolean |\n should_translate | integer |\n in_reply_to | jsonb |\nIndexes:\n \"process_activity_pkey\" PRIMARY KEY, btree (process_activity_id)\n \"fki_process_activity_konotor_user_user_id\" btree\n(process_activity_user_id) WITH (fillfactor='70')\n \"*process_activity_process_instance_id_app_id_created_idx*\" btree\n(process_instance_id, app_id, created) WITH 
(fillfactor='70')\n \"process_activity_process_instance_id_app_id_read_by_user_created_idx\"\nbtree (process_instance_id, app_id, read_by_user, created) WITH\n(fillfactor='70')\n \"process_activity_process_instance_id_idx\" btree (process_instance_id)\nWITH (fillfactor='70')\n\n\n\n\n*process_instance*\n Table \"public.process_instance\"\n Column | Type | Modifiers\n\n-------------------------+-----------------------------+-----------------------------\n process_instance_id | bigint | not null default\nnext_id()\n process_instance_alias | text | not null\n app_id | bigint | not null\n user_id | bigint | not null\n\nIndexes:\n \"process_instance_pkey\" PRIMARY KEY, btree (process_instance_id)\n \"*fki_conv_konotor_user_user_id*\" btree (user_id) WITH (fillfactor='70')\n\nRegards, Amarendra\n\nOn Fri, May 8, 2020 at 12:01 AM Virendra Kumar <[email protected]> wrote:\n\n> Sending table structure with indexes might help little further in\n> understanding.\n>\n> Regards,\n> Virendra\n>\n> On Thursday, May 7, 2020, 11:08:14 AM PDT, Amarendra Konda <\n> [email protected]> wrote:\n>\n>\n> Hi David,\n>\n> In earlier reply, Over looked another condition, hence please ignore that\n> one\n>\n> Here is the correct one with all the needed conditions. According to the\n> latest one, exists also not limiting rows from the process_activity table.\n>\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\n> AS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n> '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\n> SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND *pi.process_instance_id\n> = pa.process_instance_id * AND pi.user_id = '137074931866340') ORDER BY\n> pa.process_instance_id, pa.created limit 50;\n>\n>\n>\n> QUERY PLAN\n>\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------------\n> Limit (cost=1079.44..1079.52 rows=32 width=24) (actual\n> time=85.747..85.777 rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Buffers: shared hit=43070\n> -> Sort (cost=1079.44..1079.52 rows=32 width=24) (actual\n> time=85.745..85.759 rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Sort Key: pa.process_instance_id, pa.created\n> Sort Method: top-N heapsort Memory: 28kB\n> Buffers: shared hit=43070\n> -> Nested Loop (cost=1.14..1078.64 rows=32 width=24) (actual\n> time=0.025..72.115 rows=47011 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id,\n> pa.created\n> Buffers: shared hit=43070\n> -> Index Scan using fki_conv_konotor_user_user_id on\n> public.process_instance pi (cost=0.43..2.66 rows=1 width=16) (actual\n> time=0.010..0.015 rows=2 loops=1)\n> Output: pi.app_id, pi.process_instance_id\n> Index Cond: (c.user_id = '137074931866340'::bigint)\n> Filter: (c.app_id = '126502930200650'::bigint)\n> Buffers: shared hit=5\n> -> Index Scan using\n> process_activity_process_instance_id_app_id_created_idx on\n> public.process_activity pa (cost=0.70..1061.62 rows=1436 width=32) *(actual\n> time=0.011..20.320 rows=23506 loops=2)*\n> Output: pa.process_activity_id,\n> 
pa.process_activity_type, pa.voice_url, pa.process_activity_user_id,\n> pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source,\n> pa.label_category_id, pa.label_id, pa.csat_respons\n> e_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id,\n> pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\n> pa.status_fragment, pa.internal_meta, pa.interaction_id,\n> pa.do_not_translate, pa.should_tr\n> anslate, pa.in_reply_to\n> Index Cond: ((m.process_instance_id =\n> pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND\n> (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n> Buffers: shared hit=43065\n> Planning time: 0.455 ms\n> Execution time: 85.830 ms\n>\n> On Thu, May 7, 2020 at 11:19 PM Amarendra Konda <[email protected]>\n> wrote:\n>\n> Hi David,\n>\n> Thanks for the reply.This has optimized number of rows.\n>\n> Can you please explain, why it is getting more columns in output, even\n> though we have asked for only one column ?\n>\n>\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\n> AS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n> '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\n> SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND\n> pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created\n> limit 50;\n>\n>\n>\n> QUERY PLAN\n>\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------\n> Limit (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629\n> rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Buffers: shared hit=274950\n> -> Nested Loop Semi Join (cost=1.14..266660108.78 rows=367790473\n> width=24) (actual time=821.282..891.607 rows=50 loops=1)\n> Output: pa.process_activity_id, pa.process_instance_id, pa.created\n> Buffers: shared hit=274950\n> -> Index Scan using\n> process_activity_process_instance_id_app_id_created_idx on\n> public.process_activity pa (cost=0.70..262062725.21 rows=367790473\n> width=32) (actual time=821.253..891.517 rows=50 loops=1)\n>\n>\n> * Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\n> pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\n> pa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\n> pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated,\n> pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\n> pa.status_fragment, pa.internal_meta, pa.interaction_id,\n> pa.do_not_translate, pa.should_translate, pa.in_reply_to*\n> Index Cond: ((m.app_id = '126502930200650'::bigint) AND\n> (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n> Buffers: shared hit=274946\n> -> Materialize (cost=0.43..2.66 rows=1 width=8) (actual\n> time=0.001..0.001 rows=1 loops=50)\n> Output: pi.app_id\n> Buffers: shared hit=4\n> -> Index Scan using fki_conv_konotor_user_user_id on\n> public.process_instance pi (cost=0.43..2.66 rows=1 width=8) (actual\n> time=0.020..0.020 rows=1 loops=1)\n> Output: pi.app_id\n> Index Cond: (pi.user_id = 
'137074931866340'::bigint)\n> Filter: (pi.app_id = '126502930200650'::bigint)\n> Buffers: shared hit=4\n> Planning time: 0.297 ms\n> Execution time: 891.686 ms\n> (20 rows)\n>\n> On Thu, May 7, 2020 at 9:17 PM David G. Johnston <\n> [email protected]> wrote:\n>\n> On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]>\n> wrote:\n>\n> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> > Hi,\n> >\n> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled\n> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> >\n> > We have noticed huge difference interms of execution plan ( response\n> > time) , When we pass the direct values Vs inner query to IN clause.\n> >\n> > High level details of the use case are as follows\n> >\n> > * As part of the SQL there are 2 tables named Process_instance\n> > (master) and Process_activity ( child)\n> > * Wanted to fetch TOP 50 rows from Process_activity table for the\n> > given values of the Process_instance.\n> > * When we used Inner Join / Inner query ( query1) between parent\n> > table and child table , LIMIT is not really taking in to account.\n> > Instead it is fetching more rows and columns that required, and\n> > finally limiting the result\n>\n> It is doing what you told it to do which is SELECT all\n> process_instance_i's for user_id='317079413683604' and app_id =\n> '427380312000560' and then filtering further. I am going to guess that\n> if you run the inner query alone you will find it returns ~23496 rows.\n> You might have better results if you an actual join between\n> process_activity and process_instance. Something like below(obviously\n> not tested):\n>\n>\n> What the OP seems to want is a semi-join:\n>\n> (not tested)\n>\n> SELECT pa.process_activity_id\n> FROM process_activity pa WHERE pa.app_id = '427380312000560' AND\n> pa.created > '1970-01-01 00:00:00'\n> AND EXISTS (\n> SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND\n> pi.user_id = '317079413683604'\n> )\n> ORDER BY\n> pa.process_instance_id,\n> pa.created limit 50;\n>\n> I'm unsure exactly how this will impact the plan choice but it should be\n> an improvement, and in any case more correctly defines what it is you are\n> looking for.\n>\n> David J.\n>\n>\n\nHi Virendra,Thanks for your time. 
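For reference, the composite index both plans rely on can be written as DDL roughly like this (a sketch reconstructed from the psql \d listing in this message, assuming the default btree access method):

CREATE INDEX process_activity_process_instance_id_app_id_created_idx
    ON process_activity (process_instance_id, app_id, created)
  WITH (fillfactor = 70);
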
Here is the table and index structure process_activity                            Table \"public.process_activity\"       Column       |            Type             |         Modifiers          --------------------+-----------------------------+---------------------------- process_activity_id         | bigint                      | not null default next_id() process_activity_type       | smallint                    | not null voice_url          | text                        |  process_activity_user_id    | bigint                      | not null app_id             | bigint                      | not null process_instance_id    | bigint                      | not null alias              | text                        | not null read_by_user       | smallint                    | default 0 source             | smallint                    | default 0 label_category_id  | bigint                      |  label_id           | bigint                      |  csat_response_id   | bigint                      |  process_activity_fragments  | jsonb                       |  created            | timestamp without time zone | not null updated            | timestamp without time zone |  rule_id            | bigint                      |  marketing_reply_id | bigint                      |  delivered_at       | timestamp without time zone |  reply_fragments    | jsonb                       |  status_fragment    | jsonb                       |  internal_meta      | jsonb                       |  interaction_id     | text                        |  do_not_translate   | boolean                     |  should_translate   | integer                     |  in_reply_to        | jsonb                       | Indexes:    \"process_activity_pkey\" PRIMARY KEY, btree (process_activity_id)    \"fki_process_activity_konotor_user_user_id\" btree (process_activity_user_id) WITH (fillfactor='70')    \"process_activity_process_instance_id_app_id_created_idx\" btree (process_instance_id, app_id, created) WITH (fillfactor='70')    \"process_activity_process_instance_id_app_id_read_by_user_created_idx\" btree (process_instance_id, app_id, read_by_user, created) WITH (fillfactor='70')    \"process_activity_process_instance_id_idx\" btree (process_instance_id) WITH (fillfactor='70') process_instance                             Table \"public.process_instance\"         Column          |            Type             |          Modifiers          -------------------------+-----------------------------+----------------------------- process_instance_id     | bigint                      | not null default next_id() process_instance_alias  | text                        | not null app_id                  | bigint                      | not null user_id                 | bigint                      | not null Indexes:    \"process_instance_pkey\" PRIMARY KEY, btree (process_instance_id)    \"fki_conv_konotor_user_user_id\" btree (user_id) WITH (fillfactor='70')Regards, AmarendraOn Fri, May 8, 2020 at 12:01 AM Virendra Kumar <[email protected]> wrote:Sending table structure with indexes might help little further in understanding.Regards,Virendra\n\n\n\n On Thursday, May 7, 2020, 11:08:14 AM PDT, Amarendra Konda <[email protected]> wrote:\n \n\n\nHi David,In earlier reply, Over looked another condition, hence please ignore that oneHere is the correct one with all the needed conditions. 
According to the latest one, exists also not limiting rows from the process_activity table.EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,  pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.747..85.777 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=43070   ->  Sort  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.745..85.759 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=43070         ->  Nested Loop  (cost=1.14..1078.64 rows=32 width=24) (actual time=0.025..72.115 rows=47011 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=43070               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=16) (actual time=0.010..0.015 rows=2 loops=1)                     Output: pi.app_id, pi.process_instance_id                     Index Cond: (c.user_id = '137074931866340'::bigint)                     Filter: (c.app_id = '126502930200650'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=43065 Planning time: 0.455 ms Execution time: 85.830 msOn Thu, May 7, 2020 at 11:19 PM Amarendra Konda 
<[email protected]> wrote:Hi David,Thanks for the reply.This has optimized number of rows. Can you please explain, why it is getting more columns in output, even though we have asked for only one column ?  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created limit 50;                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=274950   ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 width=24) (actual time=821.282..891.607 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Buffers: shared hit=274950         ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 width=32) (actual time=821.253..891.517 rows=50 loops=1)               Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to               Index Cond: ((m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))               Buffers: shared hit=274946         ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=50)               Output: pi.app_id               Buffers: shared hit=4               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=1)                     Output: pi.app_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=4 Planning time: 0.297 ms Execution time: 891.686 ms(20 rows)On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]> wrote:On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):What the OP seems to want is a semi-join:(not tested)SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')ORDER BY pa.process_instance_id,pa.created limit 50;I'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.David J.", "msg_date": "Fri, 8 May 2020 00:21:45 +0530", "msg_from": "Amarendra Konda <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "On 5/7/20 10:49 AM, Amarendra Konda wrote:\n> Hi David,\n> \n> Thanks for the reply.This has optimized number of rows.\n\nYeah, but your execution time has increased an order of magnitude. 
Not \nsure if that is what you want.\n\n> \n> Can you please explain, why it is getting more columns in output, even \n> though we have asked for only one column ?\n> \n> \n>  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT \n> pa.process_activity_id AS pa_process_activity_id  FROM process_activity \n> pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 \n> 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where \n> pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY \n> pa.process_instance_id,m.created limit 50;\n> \n>    QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------\n>  Limit  (cost=1.14..37.39 rows=50 width=24) (actual \n> time=821.283..891.629 rows=50 loops=1)\n>    Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>    Buffers: shared hit=274950\n>    ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 \n> width=24) (actual time=821.282..891.607 rows=50 loops=1)\n>          Output: pa.process_activity_id, pa.process_instance_id, pa.created\n>          Buffers: shared hit=274950\n>          ->  Index Scan using \n> process_activity_process_instance_id_app_id_created_idx on \n> public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 \n> width=32) (actual time=821.253..891.517 rows=50 loops=1)\n> * Output: pa.process_activity_id, pa.process_activity_type, \n> pa.voice_url, pa.process_activity_user_id, pa.app_id, \n> pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, \n> pa.label_category_id, pa.label_id, pa.csat_response_id,\n> m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, \n> pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, \n> pa.status_fragment, pa.internal_meta, pa.interaction_id, \n> pa.do_not_translate, pa.should_translat\n> e, pa.in_reply_to*\n>                Index Cond: ((m.app_id = '126502930200650'::bigint) AND \n> (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n>                Buffers: shared hit=274946\n>          ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual \n> time=0.001..0.001 rows=1 loops=50)\n>                Output: pi.app_id\n>                Buffers: shared hit=4\n>                ->  Index Scan using fki_conv_konotor_user_user_id on \n> public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual \n> time=0.020..0.020 rows=1 loops=1)\n>                      Output: pi.app_id\n>                      Index Cond: (pi.user_id = '137074931866340'::bigint)\n>                      Filter: (pi.app_id = '126502930200650'::bigint)\n>                      Buffers: shared hit=4\n>  Planning time: 0.297 ms\n>  Execution time: 891.686 ms\n> (20 rows)\n> \n> On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On Thu, May 7, 2020 at 7:40 AM Adrian Klaver\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> > Hi,\n> >\n> > PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu,\n> compiled\n> > by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> >\n> > We have noticed huge difference interms of execution plan (\n> response\n> > time) , When we pass the direct values  Vs  inner query to IN\n> clause.\n> >\n> > High level details of the use case are as follows\n> >\n> >   * As part of the SQL there are 2 tables named Process_instance\n> >     (master) and Process_activity ( child)\n> >   * Wanted to fetch TOP 50 rows from  Process_activity table\n> for the\n> >     given values of the Process_instance.\n> >   * When we used Inner Join / Inner query ( query1)  between\n> parent\n> >     table and child table , LIMIT is not really taking in to\n> account.\n> >     Instead it is fetching more rows and columns that\n> required, and\n> >     finally limiting the result\n> \n> It is doing what you told it to do which is SELECT all\n> process_instance_i's for user_id='317079413683604' and app_id =\n> '427380312000560' and then filtering further. I am going to\n> guess that\n> if you run the inner query alone you will find it returns ~23496\n> rows.\n> You might have better results if you an actual join between\n> process_activity and process_instance. Something like\n> below(obviously\n> not tested):\n> \n> \n> What the OP seems to want is a semi-join:\n> \n> (not tested)\n> \n> SELECT pa.process_activity_id\n> FROM process_activity pa WHERE pa.app_id = '427380312000560' AND\n> pa.created > '1970-01-01 00:00:00'\n> ANDEXISTS (\n>   SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND\n> pi.user_id = '317079413683604'\n> )\n> ORDER BY\n> pa.process_instance_id,\n> pa.created limit 50;\n> \n> I'm unsure exactly how this will impact the plan choice but it\n> should be an improvement, and in any case more correctly defines\n> what it is you are looking for.\n> \n> David J.\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Thu, 7 May 2020 12:25:15 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "Here is my thought on why row is not limiting when joined vs why it is limiting when not joined.\nWhen not joined and where clause is having IN, it is using index process_activity_process_instance_id_app_id_created_idx which has columns process_instance_id, created which is in order by and hence no additional ordering is required and a direct rows limit can be applied here.\n\nWhen in join condition it has to fetch rows according to filter clause, join them and then order ( sort node in plan) hence it cannot limit rows while fetching it first time from the table.\nYou are also missing pi.user_id = '317079413683604' in exists clause. It is worth trying to put there and run explain again and see where it takes. But to your point row limitation cannot happen in case of join as such in the query.\n\nRegards,\nVirendra \n\n On Thursday, May 7, 2020, 11:52:00 AM PDT, Amarendra Konda <[email protected]> wrote: \n \n Hi Virendra,\nThanks for your time. 
\nHere is the table and index structure\n process_activity\n                            Table \"public.process_activity\"\n       Column       |            Type             |         Modifiers          \n--------------------+-----------------------------+----------------------------\n process_activity_id         | bigint                      | not null default next_id()\n process_activity_type       | smallint                    | not null\n voice_url          | text                        | \n process_activity_user_id    | bigint                      | not null\n app_id             | bigint                      | not null\n process_instance_id    | bigint                      | not null\n alias              | text                        | not null\n read_by_user       | smallint                    | default 0\n source             | smallint                    | default 0\n label_category_id  | bigint                      | \n label_id           | bigint                      | \n csat_response_id   | bigint                      | \n process_activity_fragments  | jsonb                       | \n created            | timestamp without time zone | not null\n updated            | timestamp without time zone | \n rule_id            | bigint                      | \n marketing_reply_id | bigint                      | \n delivered_at       | timestamp without time zone | \n reply_fragments    | jsonb                       | \n status_fragment    | jsonb                       | \n internal_meta      | jsonb                       | \n interaction_id     | text                        | \n do_not_translate   | boolean                     | \n should_translate   | integer                     | \n in_reply_to        | jsonb                       | \nIndexes:\n    \"process_activity_pkey\" PRIMARY KEY, btree (process_activity_id)\n    \"fki_process_activity_konotor_user_user_id\" btree (process_activity_user_id) WITH (fillfactor='70')\n    \"process_activity_process_instance_id_app_id_created_idx\" btree (process_instance_id, app_id, created) WITH (fillfactor='70')\n    \"process_activity_process_instance_id_app_id_read_by_user_created_idx\" btree (process_instance_id, app_id, read_by_user, created) WITH (fillfactor='70')\n    \"process_activity_process_instance_id_idx\" btree (process_instance_id) WITH (fillfactor='70')\n \n\n\n\nprocess_instance\n                             Table \"public.process_instance\"\n         Column          |            Type             |          Modifiers          \n-------------------------+-----------------------------+-----------------------------\n process_instance_id     | bigint                      | not null default next_id()\n process_instance_alias  | text                        | not null\n app_id                  | bigint                      | not null\n user_id                 | bigint                      | not null\n \nIndexes:\n    \"process_instance_pkey\" PRIMARY KEY, btree (process_instance_id)\n    \"fki_conv_konotor_user_user_id\" btree (user_id) WITH (fillfactor='70')\n\nRegards, Amarendra\nOn Fri, May 8, 2020 at 12:01 AM Virendra Kumar <[email protected]> wrote:\n\nSending table structure with indexes might help little further in understanding.\n\nRegards,\nVirendra\n On Thursday, May 7, 2020, 11:08:14 AM PDT, Amarendra Konda <[email protected]> wrote: \n \n Hi David,\nIn earlier reply, Over looked another condition, hence please ignore that one\nHere is the correct one with all the needed conditions. 
According to the latest one, exists also not limiting rows from the process_activity table.\n\nEXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,  pa.created limit 50;\n                                                                                                                                                                                                          \n      QUERY PLAN                                                                                                                                                                                          \n                       \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------\n Limit  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.747..85.777 rows=50 loops=1)\n   Output: pa.process_activity_id, pa.process_instance_id, pa.created\n   Buffers: shared hit=43070\n   ->  Sort  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.745..85.759 rows=50 loops=1)\n         Output: pa.process_activity_id, pa.process_instance_id, pa.created\n         Sort Key: pa.process_instance_id, pa.created\n         Sort Method: top-N heapsort  Memory: 28kB\n         Buffers: shared hit=43070\n         ->  Nested Loop  (cost=1.14..1078.64 rows=32 width=24) (actual time=0.025..72.115 rows=47011 loops=1)\n               Output: pa.process_activity_id, pa.process_instance_id, pa.created\n               Buffers: shared hit=43070\n               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=16) (actual time=0.010..0.015 rows=2 loops=1)\n                     Output: pi.app_id, pi.process_instance_id\n                     Index Cond: (c.user_id = '137074931866340'::bigint)\n                     Filter: (c.app_id = '126502930200650'::bigint)\n                     Buffers: shared hit=5\n               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)\n                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_respons\ne_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_tr\nanslate, pa.in_reply_to\n                     Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n                     Buffers: shared hit=43065\n Planning time: 0.455 ms\n Execution 
time: 85.830 ms\n\nOn Thu, May 7, 2020 at 11:19 PM Amarendra Konda <[email protected]> wrote:\n\nHi David,\nThanks for the reply.This has optimized number of rows. \nCan you please explain, why it is getting more columns in output, even though we have asked for only one column ? \n\n EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created limit 50;\n                                                                                                                                                                                                          \n   QUERY PLAN                                                                                                                                                                                             \n                 \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------\n Limit  (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629 rows=50 loops=1)\n   Output: pa.process_activity_id, pa.process_instance_id, pa.created\n   Buffers: shared hit=274950\n   ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 width=24) (actual time=821.282..891.607 rows=50 loops=1)\n         Output: pa.process_activity_id, pa.process_instance_id, pa.created\n         Buffers: shared hit=274950\n         ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 width=32) (actual time=821.253..891.517 rows=50 loops=1)\n               Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, \nm.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translat\ne, pa.in_reply_to\n               Index Cond: ((m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n               Buffers: shared hit=274946\n         ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=50)\n               Output: pi.app_id\n               Buffers: shared hit=4\n               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=1)\n                     Output: pi.app_id\n                     Index Cond: (pi.user_id = '137074931866340'::bigint)\n                     Filter: (pi.app_id = '126502930200650'::bigint)\n                     Buffers: shared hit=4\n Planning time: 0.297 ms\n Execution time: 891.686 ms\n(20 rows)\n\nOn Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]> wrote:\n\nOn Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:\n\nOn 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):\n\n\nWhat the OP seems to want is a semi-join:\n(not tested)\nSELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')\nORDER BY pa.process_instance_id,pa.created limit 50;\nI'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.\nDavid J.\n\n\n \n \nHere is my thought on why row is not limiting when joined vs why it is limiting when not joined.When not joined and where clause is having IN, it is using index process_activity_process_instance_id_app_id_created_idx which has columns process_instance_id, created which is in order by and hence no additional ordering is required and a direct rows limit can be applied here.When in join condition it has to fetch rows according to filter clause, join them and then order ( sort node in plan) hence it cannot limit rows while fetching it first time from the table.You are also missing pi.user_id = '317079413683604' in exists clause. It is worth trying to put there and run explain again and see where it takes. But to your point row limitation cannot happen in case of join as such in the query.Regards,Virendra\n\n\n\n\n On Thursday, May 7, 2020, 11:52:00 AM PDT, Amarendra Konda <[email protected]> wrote:\n \n\n\nHi Virendra,Thanks for your time. 
Here is the table and index structure process_activity                            Table \"public.process_activity\"       Column       |            Type             |         Modifiers          --------------------+-----------------------------+---------------------------- process_activity_id         | bigint                      | not null default next_id() process_activity_type       | smallint                    | not null voice_url          | text                        |  process_activity_user_id    | bigint                      | not null app_id             | bigint                      | not null process_instance_id    | bigint                      | not null alias              | text                        | not null read_by_user       | smallint                    | default 0 source             | smallint                    | default 0 label_category_id  | bigint                      |  label_id           | bigint                      |  csat_response_id   | bigint                      |  process_activity_fragments  | jsonb                       |  created            | timestamp without time zone | not null updated            | timestamp without time zone |  rule_id            | bigint                      |  marketing_reply_id | bigint                      |  delivered_at       | timestamp without time zone |  reply_fragments    | jsonb                       |  status_fragment    | jsonb                       |  internal_meta      | jsonb                       |  interaction_id     | text                        |  do_not_translate   | boolean                     |  should_translate   | integer                     |  in_reply_to        | jsonb                       | Indexes:    \"process_activity_pkey\" PRIMARY KEY, btree (process_activity_id)    \"fki_process_activity_konotor_user_user_id\" btree (process_activity_user_id) WITH (fillfactor='70')    \"process_activity_process_instance_id_app_id_created_idx\" btree (process_instance_id, app_id, created) WITH (fillfactor='70')    \"process_activity_process_instance_id_app_id_read_by_user_created_idx\" btree (process_instance_id, app_id, read_by_user, created) WITH (fillfactor='70')    \"process_activity_process_instance_id_idx\" btree (process_instance_id) WITH (fillfactor='70') process_instance                             Table \"public.process_instance\"         Column          |            Type             |          Modifiers          -------------------------+-----------------------------+----------------------------- process_instance_id     | bigint                      | not null default next_id() process_instance_alias  | text                        | not null app_id                  | bigint                      | not null user_id                 | bigint                      | not null Indexes:    \"process_instance_pkey\" PRIMARY KEY, btree (process_instance_id)    \"fki_conv_konotor_user_user_id\" btree (user_id) WITH (fillfactor='70')Regards, AmarendraOn Fri, May 8, 2020 at 12:01 AM Virendra Kumar <[email protected]> wrote:Sending table structure with indexes might help little further in understanding.Regards,Virendra\n\n\n\n On Thursday, May 7, 2020, 11:08:14 AM PDT, Amarendra Konda <[email protected]> wrote:\n \n\n\nHi David,In earlier reply, Over looked another condition, hence please ignore that oneHere is the correct one with all the needed conditions. 
According to the latest one, exists also not limiting rows from the process_activity table.EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,  pa.created limit 50;                                                                                                                                                                                                                QUERY PLAN                                                                                                                                                                                                                 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.747..85.777 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=43070   ->  Sort  (cost=1079.44..1079.52 rows=32 width=24) (actual time=85.745..85.759 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Sort Key: pa.process_instance_id, pa.created         Sort Method: top-N heapsort  Memory: 28kB         Buffers: shared hit=43070         ->  Nested Loop  (cost=1.14..1078.64 rows=32 width=24) (actual time=0.025..72.115 rows=47011 loops=1)               Output: pa.process_activity_id, pa.process_instance_id, pa.created               Buffers: shared hit=43070               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=16) (actual time=0.010..0.015 rows=2 loops=1)                     Output: pi.app_id, pi.process_instance_id                     Index Cond: (c.user_id = '137074931866340'::bigint)                     Filter: (c.app_id = '126502930200650'::bigint)                     Buffers: shared hit=5               ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)                     Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, pa.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to                     Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))                     Buffers: shared hit=43065 Planning time: 0.455 ms Execution time: 85.830 msOn Thu, May 7, 2020 at 11:19 PM Amarendra Konda 
<[email protected]> wrote:Hi David,Thanks for the reply.This has optimized number of rows. Can you please explain, why it is getting more columns in output, even though we have asked for only one column ?  EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,m.created limit 50;                                                                                                                                                                                                             QUERY PLAN                                                                                                                                                                                                              ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=1.14..37.39 rows=50 width=24) (actual time=821.283..891.629 rows=50 loops=1)   Output: pa.process_activity_id, pa.process_instance_id, pa.created   Buffers: shared hit=274950   ->  Nested Loop Semi Join  (cost=1.14..266660108.78 rows=367790473 width=24) (actual time=821.282..891.607 rows=50 loops=1)         Output: pa.process_activity_id, pa.process_instance_id, pa.created         Buffers: shared hit=274950         ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..262062725.21 rows=367790473 width=32) (actual time=821.253..891.517 rows=50 loops=1)               Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_to               Index Cond: ((m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))               Buffers: shared hit=274946         ->  Materialize  (cost=0.43..2.66 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=50)               Output: pi.app_id               Buffers: shared hit=4               ->  Index Scan using fki_conv_konotor_user_user_id on public.process_instance pi  (cost=0.43..2.66 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=1)                     Output: pi.app_id                     Index Cond: (pi.user_id = '137074931866340'::bigint)                     Filter: (pi.app_id = '126502930200650'::bigint)                     Buffers: shared hit=4 Planning time: 0.297 ms Execution time: 891.686 ms(20 rows)On Thu, May 7, 2020 at 9:17 PM David G. 
Johnston <[email protected]> wrote:On Thu, May 7, 2020 at 7:40 AM Adrian Klaver <[email protected]> wrote:On 5/7/20 4:19 AM, Amarendra Konda wrote:\n> Hi,\n> \n> PostgreSQL version : PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled \n> by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n> \n> We have noticed huge difference interms of execution plan ( response \n> time) , When we pass the direct values  Vs  inner query to IN clause.\n> \n> High level details of the use case are as follows\n> \n>   * As part of the SQL there are 2 tables named Process_instance\n>     (master) and Process_activity ( child)\n>   * Wanted to fetch TOP 50 rows from  Process_activity table for the\n>     given values of the Process_instance.\n>   * When we used Inner Join / Inner query ( query1)  between parent\n>     table and child table , LIMIT is not really taking in to account.\n>     Instead it is fetching more rows and columns that required, and\n>     finally limiting the result\n\nIt is doing what you told it to do which is SELECT all \nprocess_instance_i's for user_id='317079413683604' and app_id = \n'427380312000560' and then filtering further. I am going to guess that \nif you run the inner query alone you will find it returns ~23496 rows.\nYou might have better results if you an actual join between \nprocess_activity and process_instance. Something like below(obviously \nnot tested):What the OP seems to want is a semi-join:(not tested)SELECT pa.process_activity_id  FROM process_activity pa WHERE pa.app_id = '427380312000560' AND pa.created > '1970-01-01 00:00:00'AND EXISTS (  SELECT 1 FROM process_instance pi WHERE pi.app_id = pa.app_id AND pi.user_id = '317079413683604')ORDER BY pa.process_instance_id,pa.created limit 50;I'm unsure exactly how this will impact the plan choice but it should be an improvement, and in any case more correctly defines what it is you are looking for.David J.", "msg_date": "Thu, 7 May 2020 19:30:17 +0000 (UTC)", "msg_from": "Virendra Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs\n INNER Query )" }, { "msg_contents": "On Thu, May 7, 2020 at 10:49 AM Amarendra Konda <[email protected]>\nwrote:\n\n> Can you please explain, why it is getting more columns in output, even\n> though we have asked for only one column ?\n>\n>\n>\n> * Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\n> pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\n> pa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\n> pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated,\n> pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\n> pa.status_fragment, pa.internal_meta, pa.interaction_id,\n> pa.do_not_translate, pa.should_translate, pa.in_reply_to*\n>\nNot knowing the source code in this area at all...\n\nI'm pretty sure its because it doesn't matter. The executor retrieves data\n\"pages\", 8k blocks containing multiple records, then extracts specific full\ntuples from there. At that point its probably just data pointers being\npassed around. 
Its not until the end that the planner/executor has to\ndecide which subset of columns to return to the user, or when a new tuple\nstructure has to be created anyway (say because of joining), maybe, does it\ntake the effort of constructing a minimally necessary output column set.\n\nDavid J.\n\nOn Thu, May 7, 2020 at 10:49 AM Amarendra Konda <[email protected]> wrote:Can you please explain, why it is getting more columns in output, even though we have asked for only one column ?                Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url, pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias, pa.read_by_user, pa.source, pa.label_category_id, pa.label_id, pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated, pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments, pa.status_fragment, pa.internal_meta, pa.interaction_id, pa.do_not_translate, pa.should_translate, pa.in_reply_toNot knowing the source code in this area at all...I'm pretty sure its because it doesn't matter.  The executor retrieves data \"pages\", 8k blocks containing multiple records, then extracts specific full tuples from there.  At that point its probably just data pointers being passed around.  Its not until the end that the planner/executor has to decide which subset of columns to return to the user, or when a new tuple structure has to be created anyway (say because of joining), maybe, does it take the effort of constructing a minimally necessary output column set.David J.", "msg_date": "Thu, 7 May 2020 14:21:05 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Thu, May 7, 2020 at 10:49 AM Amarendra Konda <[email protected]>\n> wrote:\n>> Can you please explain, why it is getting more columns in output, even\n>> though we have asked for only one column ?\n>> * Output: pa.process_activity_id, pa.process_activity_type, pa.voice_url,\n>> pa.process_activity_user_id, pa.app_id, pa.process_instance_id, pa.alias,\n>> pa.read_by_user, pa.source, pa.label_category_id, pa.label_id,\n>> pa.csat_response_id, m.process_activity_fragments, pa.created, pa.updated,\n>> pa.rule_id, pa.marketing_reply_id, pa.delivered_at, pa.reply_fragments,\n>> pa.status_fragment, pa.internal_meta, pa.interaction_id,\n>> pa.do_not_translate, pa.should_translate, pa.in_reply_to*\n\n> Not knowing the source code in this area at all...\n\n> I'm pretty sure its because it doesn't matter.\n\nIt's actually intentional, to save a projection step within that plan\nnode. We'll discard the extra columns once it matters, at some higher\nplan level.\n\n(There have been some debates on -hackers about whether this optimization\nis still worth anything, given all the executor improvements that have\nbeen made since it went in. 
But it was clearly a win at the time.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 07 May 2020 17:26:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "On Thu, May 7, 2020 at 11:07 AM Amarendra Konda <[email protected]>\nwrote:\n\n> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id\n> AS pa_process_activity_id FROM process_activity pa WHERE pa.app_id =\n> '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS (\n> SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND *pi.process_instance_id\n> = pa.process_instance_id * AND pi.user_id = '137074931866340') ORDER BY\n> pa.process_instance_id, pa.created limit 50;\n>\n>\n>\n>\n> -> Index Scan using\n> process_activity_process_instance_id_app_id_created_idx on\n> public.process_activity pa (cost=0.70..1061.62 rows=1436 width=32) *(actual\n> time=0.011..20.320 rows=23506 loops=2)*\n>\n> Index Cond: ((m.process_instance_id = pi.process_instance_id) AND\n(m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01\n00:00:00'::timestamp without time zone))\n\nI suppose during the nested loop the inner index scan could limit itself to\nthe first 50 entries it finds (since the first two index columns are being\nheld constant on each scan, m.created should define the traversal order...)\nso that the output of the nested loop ends up being (max 2 x 50) 100\nentries which are then sorted and only the top 50 returned.\n\nWhether the executor could but isn't doing that here or isn't programmed to\ndo that (or my logic is totally off) I do not know.\n\nDavid J.\n\nOn Thu, May 7, 2020 at 11:07 AM Amarendra Konda <[email protected]> wrote:EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)  SELECT pa.process_activity_id AS pa_process_activity_id  FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00'  AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id  AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id,  pa.created limit 50;                                                                                                                                                                                                                           ->  Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa  (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)> Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))I suppose during the nested loop the inner index scan could limit itself to the first 50 entries it finds (since the first two index columns are being held constant on each scan, m.created should define the traversal order...) so that the output of the nested loop ends up being (max 2 x 50) 100 entries which are then sorted and only the top 50 returned.Whether the executor could but isn't doing that here or isn't programmed to do that (or my logic is totally off) I do not know.David J.", "msg_date": "Thu, 7 May 2020 14:59:43 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" }, { "msg_contents": "On Fri, 8 May 2020 at 10:00, David G. Johnston\n<[email protected]> wrote:\n>\n> On Thu, May 7, 2020 at 11:07 AM Amarendra Konda <[email protected]> wrote:\n>>\n>> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS pa_process_activity_id FROM process_activity pa WHERE pa.app_id = '126502930200650' AND pa.created > '1970-01-01 00:00:00' AND EXISTS ( SELECT 1 FROM process_instance pi where pi.app_id = pa.app_id AND pi.process_instance_id = pa.process_instance_id AND pi.user_id = '137074931866340') ORDER BY pa.process_instance_id, pa.created limit 50;\n>>\n>>\n>> -> Index Scan using process_activity_process_instance_id_app_id_created_idx on public.process_activity pa (cost=0.70..1061.62 rows=1436 width=32) (actual time=0.011..20.320 rows=23506 loops=2)\n>\n> > Index Cond: ((m.process_instance_id = pi.process_instance_id) AND (m.app_id = '126502930200650'::bigint) AND (m.created > '1970-01-01 00:00:00'::timestamp without time zone))\n>\n> I suppose during the nested loop the inner index scan could limit itself to the first 50 entries it finds (since the first two index columns are being held constant on each scan, m.created should define the traversal order...) so that the output of the nested loop ends up being (max 2 x 50) 100 entries which are then sorted and only the top 50 returned.\n>\n> Whether the executor could but isn't doing that here or isn't programmed to do that (or my logic is totally off) I do not know.\n\nI think the planner is likely not putting the process_activity table\non the outer side of the nested loop join due to the poor row\nestimates. If it knew that so many rows would match the join then it\nlikely would have done that to save from having to perform the sort at\nall. However, because the planner has put the process_instance on the\nouter side of the nested loop join, it's the pathkeys from that path\nthat the nested loop node has, which is not the same as what the ORDER\nBY needs, so the planner must add a sort step, which means that all\nrows from the nested loop plan must be read so that they can be\nsorted.\n\nIt might be worth trying: create index on process_instance\n(user_id,app_id); as that might lower the cost of performing the join\nin the opposite order and have the planner prefer that order instead.\nIf doing that, the OP could then ditch the\nfki_conv_konotor_user_user_id index to save space.\n\nIf that's not enough to convince the planner that the opposite order\nis better then certainly SET enable_sort TO off; would.\n\nDavid\n\n\n", "msg_date": "Fri, 8 May 2020 11:46:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explain plan changes - IN CLAUSE ( Passing direct values Vs INNER\n Query )" } ]
[ { "msg_contents": "Hi experts,\n\nOur application serves multiple tenants. Each tenant has the schema with a\nfew hundreds of tables and few functions.\nWe have 2000 clients so we have to create 2000 schemas in a single database.\n\nWhile doing this, i observed that the catalog tables pg_attribute,\npg_class, pg_depend grow huge in count and size.\n\nDo you think this will be a challenge during execution of every query ?\n\nWhen Postgres parses an sql to find the best execution plan, does it scan\nany of these catalogs that could eventually take more time?\n\nAny other challenges you have come across or foresee in such cases ?\n\nThanks,\nSammy.\n\nHi experts,Our application serves multiple tenants. Each tenant has the schema with a few hundreds of tables and few functions.We have 2000 clients so we have to create 2000 schemas in a single database.While doing this, i observed that the catalog tables pg_attribute, pg_class, pg_depend grow huge in count and size.Do you think this will be a challenge during execution of every query ? When Postgres parses an sql to find the best execution plan, does it scan any of these catalogs that could eventually take more time? Any other challenges you have come across or foresee in such cases ? Thanks,Sammy.", "msg_date": "Thu, 7 May 2020 14:10:55 -0300", "msg_from": "samhitha g <[email protected]>", "msg_from_op": true, "msg_subject": "pg_attribute, pg_class, pg_depend grow huge in count and size with\n multiple tenants." }, { "msg_contents": "On Thu, May 7, 2020 at 1:05 PM samhitha g <[email protected]>\nwrote:\n\n> Our application serves multiple tenants. Each tenant has the schema with a\n> few hundreds of tables and few functions.\n> We have 2000 clients so we have to create 2000 schemas in a single\n> database.\n>\n\nThat is one option but I wouldn't say you must. If you cannot get\nindividual tables to be multi-tenant you are probably better off having one\ndatabase per client on a shared cluster - at least given the size of the\nschema and number of clients.\n\nDavid J.\n\nOn Thu, May 7, 2020 at 1:05 PM samhitha g <[email protected]> wrote:Our application serves multiple tenants. Each tenant has the schema with a few hundreds of tables and few functions.We have 2000 clients so we have to create 2000 schemas in a single database.That is one option but I wouldn't say you must.  If you cannot get individual tables to be multi-tenant you are probably better off having one database per client on a shared cluster - at least given the size of the schema and number of clients.David J.", "msg_date": "Thu, 7 May 2020 13:17:35 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "Hi,\n\nOn Thu, May 7, 2020 at 5:18 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Thu, May 7, 2020 at 1:05 PM samhitha g <[email protected]>\n> wrote:\n>\n>> Our application serves multiple tenants. Each tenant has the schema\n>> with a few hundreds of tables and few functions.\n>> We have 2000 clients so we have to create 2000 schemas in a single\n>> database.\n>>\n>\n> That is one option but I wouldn't say you must. 
If you cannot get\n> individual tables to be multi-tenant you are probably better off having one\n> database per client on a shared cluster - at least given the size of the\n> schema and number of clients.\n>\nI am working on a similar problem.\n1 database per each client may be a killer when you have a connection\npooler that creates a pool for a unique combination of (user,database).\n\n>\n> David J.\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n+1-902-221-5976\n\nHi,On Thu, May 7, 2020 at 5:18 PM David G. Johnston <[email protected]> wrote:On Thu, May 7, 2020 at 1:05 PM samhitha g <[email protected]> wrote:Our application serves multiple tenants. Each tenant has the schema with a few hundreds of tables and few functions.We have 2000 clients so we have to create 2000 schemas in a single database.That is one option but I wouldn't say you must.  If you cannot get individual tables to be multi-tenant you are probably better off having one database per client on a shared cluster - at least given the size of the schema and number of clients.I am working on a similar problem.1 database per each client may be a killer when you have a connection pooler that creates a pool for a unique combination of (user,database). David J.\n-- Regards,Avinash Vallarapu+1-902-221-5976", "msg_date": "Thu, 7 May 2020 17:28:21 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "On 07/05/20, Avinash Kumar ([email protected]) wrote:\n> >> Our application serves multiple tenants. Each tenant has the schema\n> >> with a few hundreds of tables and few functions.\n> >> We have 2000 clients so we have to create 2000 schemas in a single\n> >> database.\n\n> > That is one option but I wouldn't say you must. If you cannot get\n> > individual tables to be multi-tenant you are probably better off having one\n> > database per client on a shared cluster - at least given the size of the\n> > schema and number of clients.\n> >\n> I am working on a similar problem.\n> 1 database per each client may be a killer when you have a connection\n> pooler that creates a pool for a unique combination of (user,database).\n\nOne of our clusters has well over 500 databases fronted by pg_bouncer.\n\nWe get excellent connection \"flattening\" using pg_bouncer with\nper-database connection spikes dealt with through a reserve pool.\n\nThe nice thing about separate databases is that it is easy to scale\nhorizontally.\n\nRory\n\n\n", "msg_date": "Thu, 7 May 2020 22:08:46 +0100", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "Hi,\n\nOn Thu, May 7, 2020 at 6:08 PM Rory Campbell-Lange <[email protected]>\nwrote:\n\n> On 07/05/20, Avinash Kumar ([email protected]) wrote:\n> > >> Our application serves multiple tenants. Each tenant has the schema\n> > >> with a few hundreds of tables and few functions.\n> > >> We have 2000 clients so we have to create 2000 schemas in a single\n> > >> database.\n>\n> > > That is one option but I wouldn't say you must. 
If you cannot get\n> > > individual tables to be multi-tenant you are probably better off\n> having one\n> > > database per client on a shared cluster - at least given the size of\n> the\n> > > schema and number of clients.\n> > >\n> > I am working on a similar problem.\n> > 1 database per each client may be a killer when you have a connection\n> > pooler that creates a pool for a unique combination of (user,database).\n>\n> One of our clusters has well over 500 databases fronted by pg_bouncer.\n>\n> We get excellent connection \"flattening\" using pg_bouncer with\n> per-database connection spikes dealt with through a reserve pool.\n>\nWhat if you see at least 4 connections being established by each client\nduring peak ? And if you serve 4 or 2 connections per each DB, then you\nare creating 1000 or more reserved connections with 500 DBs in a cluster.\n\n>\n> The nice thing about separate databases is that it is easy to scale\n> horizontally.\n>\nAgreed. But, how about autovacuum ? Workers shift from DB to DB and 500\nclusters means you may have to have a lot of manual vacuuming in place as\nwell.\n\n>\n> Rory\n>\n\n\n-- \nRegards,\nAvinash Vallarapu\n+1-902-221-5976\n\nHi,On Thu, May 7, 2020 at 6:08 PM Rory Campbell-Lange <[email protected]> wrote:On 07/05/20, Avinash Kumar ([email protected]) wrote:\n> >> Our application serves multiple tenants. Each tenant has the schema\n> >> with a few hundreds of tables and few functions.\n> >> We have 2000 clients so we have to create 2000 schemas in a single\n> >> database.\n\n> > That is one option but I wouldn't say you must.  If you cannot get\n> > individual tables to be multi-tenant you are probably better off having one\n> > database per client on a shared cluster - at least given the size of the\n> > schema and number of clients.\n> >\n> I am working on a similar problem.\n> 1 database per each client may be a killer when you have a connection\n> pooler that creates a pool for a unique combination of (user,database).\n\nOne of our clusters has well over 500 databases fronted by pg_bouncer.\n\nWe get excellent connection \"flattening\" using pg_bouncer with\nper-database connection spikes dealt with through a reserve pool.What if you see at least 4 connections being established by each client during peak ? And if you serve 4 or 2  connections per each DB, then you are creating 1000 or more reserved connections with 500 DBs in a cluster. \n\nThe nice thing about separate databases is that it is easy to scale\nhorizontally.Agreed. But, how about autovacuum ? Workers shift from DB to DB and 500 clusters means you may have to have a lot of manual vacuuming in place as well. \n\nRory\n-- Regards,Avinash Vallarapu+1-902-221-5976", "msg_date": "Thu, 7 May 2020 18:17:32 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "On Thu, 2020-05-07 at 18:17 -0300, Avinash Kumar wrote:\n> > The nice thing about separate databases is that it is easy to scale\n> > horizontally.\n> \n> Agreed. But, how about autovacuum ? 
Workers shift from DB to DB and 500 clusters\n> means you may have to have a lot of manual vacuuming in place as well.\n\nJust set \"autovacuum_max_workers\" higher.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 08 May 2020 08:31:14 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "Hi,\n\nOn Fri, May 8, 2020 at 3:31 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Thu, 2020-05-07 at 18:17 -0300, Avinash Kumar wrote:\n> > > The nice thing about separate databases is that it is easy to scale\n> > > horizontally.\n> >\n> > Agreed. But, how about autovacuum ? Workers shift from DB to DB and 500\n> clusters\n> > means you may have to have a lot of manual vacuuming in place as well.\n>\n> Just set \"autovacuum_max_workers\" higher.\n>\nNo, that wouldn't help. If you just increase autovacuum_max_workers, the\ntotal cost limit of autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is\nshared by so many workers and it further delays autovacuum per each worker.\nInstead you need to increase autovacuum_vacuum_cost_limit as well when you\nincrease the number of workers. But, if you do that and also increase\nworkers, well, you would easily reach the limitations of the disk. I am not\nsure it is anywhere advised to have 20 autovacuum_max_workers unless i have\na disk with lots of IOPS and with very tiny tables across all the\ndatabases.\n\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n\nHi,On Fri, May 8, 2020 at 3:31 AM Laurenz Albe <[email protected]> wrote:On Thu, 2020-05-07 at 18:17 -0300, Avinash Kumar wrote:\n> > The nice thing about separate databases is that it is easy to scale\n> > horizontally.\n> \n> Agreed. But, how about autovacuum ? Workers shift from DB to DB and 500 clusters\n> means you may have to have a lot of manual vacuuming in place as well.\n\nJust set \"autovacuum_max_workers\" higher.No, that wouldn't help. If you just increase autovacuum_max_workers, the total cost limit of autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by so many workers and it further delays autovacuum per each worker. Instead you need to increase autovacuum_vacuum_cost_limit as well when you increase the number of workers. But, if you do that and also increase workers, well, you would easily reach the limitations of the disk. I am not sure it is anywhere advised to have 20 autovacuum_max_workers unless i have a disk with lots of IOPS and with very tiny tables across all the databases. \n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n-- Regards,Avinash Vallarapu", "msg_date": "Fri, 8 May 2020 03:47:06 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "On Fri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:\n> > Just set \"autovacuum_max_workers\" higher.\n> \n> No, that wouldn't help. If you just increase autovacuum_max_workers, the total cost limit of\n> autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by so many workers and it\n> further delays autovacuum per each worker. 
Instead you need to increase autovacuum_vacuum_cost_limit\n> as well when you increase the number of workers.\n\nTrue, I should have mentioned that.\n\n> But, if you do that and also increase workers, well, you would easily reach the limitations\n> of the disk. I am not sure it is anywhere advised to have 20 autovacuum_max_workers unless\n> i have a disk with lots of IOPS and with very tiny tables across all the databases.\n\nSure, if you have a high database load, you will at some point exceed the limits of\nthe machine, which is not surprising. What I am trying to say is that you have to ramp\nup the resources for autovacuum together with increasing the overall workload.\nYou should consider autovacuum as part of that workload.\n\nIf your machine cannot cope with the workload any more, you have to scale, which\nis easily done by adding more machines if you have many databases.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 08 May 2020 08:53:41 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "Hi,\n\nOn Fri, May 8, 2020 at 3:53 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Fri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:\n> > > Just set \"autovacuum_max_workers\" higher.\n> >\n> > No, that wouldn't help. If you just increase autovacuum_max_workers, the\n> total cost limit of\n> > autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by so many\n> workers and it\n> > further delays autovacuum per each worker. Instead you need to increase\n> autovacuum_vacuum_cost_limit\n> > as well when you increase the number of workers.\n>\n> True, I should have mentioned that.\n>\n> > But, if you do that and also increase workers, well, you would easily\n> reach the limitations\n> > of the disk. I am not sure it is anywhere advised to have 20\n> autovacuum_max_workers unless\n> > i have a disk with lots of IOPS and with very tiny tables across all the\n> databases.\n>\n> Sure, if you have a high database load, you will at some point exceed the\n> limits of\n> the machine, which is not surprising. What I am trying to say is that you\n> have to ramp\n> up the resources for autovacuum together with increasing the overall\n> workload.\n> You should consider autovacuum as part of that workload.\n>\n> If your machine cannot cope with the workload any more, you have to scale,\n> which\n> is easily done by adding more machines if you have many databases.\n>\nAgreed. Getting back to the original question asked by Sammy, i think it is\nstill bad to create 2000 databases for storing 2000 clients/(schemas) for a\nmulti-tenant setup.\n\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n\nHi,On Fri, May 8, 2020 at 3:53 AM Laurenz Albe <[email protected]> wrote:On Fri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:\n> > Just set \"autovacuum_max_workers\" higher.\n> \n> No, that wouldn't help. If you just increase autovacuum_max_workers, the total cost limit of\n> autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by so many workers and it\n> further delays autovacuum per each worker. 
Instead you need to increase autovacuum_vacuum_cost_limit\n> as well when you increase the number of workers.\n\nTrue, I should have mentioned that.\n\n> But, if you do that and also increase workers, well, you would easily reach the limitations\n> of the disk. I am not sure it is anywhere advised to have 20 autovacuum_max_workers unless\n> i have a disk with lots of IOPS and with very tiny tables across all the databases.\n\nSure, if you have a high database load, you will at some point exceed the limits of\nthe machine, which is not surprising.  What I am trying to say is that you have to ramp\nup the resources for autovacuum together with increasing the overall workload.\nYou should consider autovacuum as part of that workload.\n\nIf your machine cannot cope with the workload any more, you have to scale, which\nis easily done by adding more machines if you have many databases.Agreed. Getting back to the original question asked by Sammy, i think it is still bad to create 2000 databases for storing 2000 clients/(schemas) for a multi-tenant setup.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n-- Regards,Avinash Vallarapu", "msg_date": "Fri, 8 May 2020 07:14:48 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "Hi all,\n\nSince we are talking about multi-tenant databases, the citus extension \n<https://www.citusdata.com/product/community> fits in neatly with that \nusing horizontal partitioning/shards.\n\nRegards,\nMichael Vitale\n\nAvinash Kumar wrote on 5/8/2020 6:14 AM:\n> Hi,\n>\n> On Fri, May 8, 2020 at 3:53 AM Laurenz Albe <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On Fri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:\n> > > Just set \"autovacuum_max_workers\" higher.\n> >\n> > No, that wouldn't help. If you just increase\n> autovacuum_max_workers, the total cost limit of\n> > autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by\n> so many workers and it\n> > further delays autovacuum per each worker. Instead you need to\n> increase autovacuum_vacuum_cost_limit\n> > as well when you increase the number of workers.\n>\n> True, I should have mentioned that.\n>\n> > But, if you do that and also increase workers, well, you would\n> easily reach the limitations\n> > of the disk. I am not sure it is anywhere advised to have 20\n> autovacuum_max_workers unless\n> > i have a disk with lots of IOPS and with very tiny tables across\n> all the databases.\n>\n> Sure, if you have a high database load, you will at some point\n> exceed the limits of\n> the machine, which is not surprising.  What I am trying to say is\n> that you have to ramp\n> up the resources for autovacuum together with increasing the\n> overall workload.\n> You should consider autovacuum as part of that workload.\n>\n> If your machine cannot cope with the workload any more, you have\n> to scale, which\n> is easily done by adding more machines if you have many databases.\n>\n> Agreed. 
Getting back to the original question asked by Sammy, i think \n> it is still bad to create 2000 databases for storing 2000 \n> clients/(schemas) for a multi-tenant setup.\n>\n>\n> Yours,\n> Laurenz Albe\n> -- \n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n>\n> -- \n> Regards,\n> Avinash Vallarapu\n\n\n\n\nHi all,\n\nSince we are talking about multi-tenant databases, the citus extension \nfits in neatly with that using horizontal partitioning/shards.  \n\nRegards,\nMichael Vitale\n\nAvinash Kumar wrote on 5/8/2020 6:14 AM:\n\n\nHi,On Fri, May 8, 2020 at 3:53 AM Laurenz Albe\n <[email protected]>\n wrote:On \nFri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:\n> > Just set \"autovacuum_max_workers\" higher.\n> \n> No, that wouldn't help. If you just increase \nautovacuum_max_workers, the total cost limit of\n> autovacuum_vacuum_cost_limit (or vacuum_cost_limit) is shared by so\n many workers and it\n> further delays autovacuum per each worker. Instead you need to \nincrease autovacuum_vacuum_cost_limit\n> as well when you increase the number of workers.\n\nTrue, I should have mentioned that.\n\n> But, if you do that and also increase workers, well, you would \neasily reach the limitations\n> of the disk. I am not sure it is anywhere advised to have 20 \nautovacuum_max_workers unless\n> i have a disk with lots of IOPS and with very tiny tables across \nall the databases.\n\nSure, if you have a high database load, you will at some point exceed \nthe limits of\nthe machine, which is not surprising.  What I am trying to say is that \nyou have to ramp\nup the resources for autovacuum together with increasing the overall \nworkload.\nYou should consider autovacuum as part of that workload.\n\nIf your machine cannot cope with the workload any more, you have to \nscale, which\nis easily done by adding more machines if you have many databases.Agreed.\n Getting back to the original question asked by Sammy, i think it is \nstill bad to create 2000 databases for storing 2000 clients/(schemas) \nfor a multi-tenant setup.\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n-- Regards,Avinash\n Vallarapu", "msg_date": "Fri, 8 May 2020 09:01:50 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "On Thu, May 7, 2020 at 4:05 PM samhitha g <[email protected]>\nwrote:\n\n> Hi experts,\n>\n> Our application serves multiple tenants. Each tenant has the schema with a\n> few hundreds of tables and few functions.\n> We have 2000 clients so we have to create 2000 schemas in a single\n> database.\n>\n> While doing this, i observed that the catalog tables pg_attribute,\n> pg_class, pg_depend grow huge in count and size.\n>\n\nPlease attach numbers to \"huge\". We don't know what \"huge\" means to you.\n\n\"2000 * a few hundred\" tables is certainly getting to the point where it\nmakes sense to be concerned. But my concern would be more about backup and\nrecovery, version upgrades, pg_dump, etc. not about daily operations.\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, May 7, 2020 at 4:05 PM samhitha g <[email protected]> wrote:Hi experts,Our application serves multiple tenants. 
Each tenant has the schema with a few hundreds of tables and few functions.We have 2000 clients so we have to create 2000 schemas in a single database.While doing this, i observed that the catalog tables pg_attribute, pg_class, pg_depend grow huge in count and size.Please attach numbers to \"huge\".  We don't know what \"huge\" means to you.\"2000  * a few hundred\" tables is certainly getting to the point where it makes sense to be concerned.  But my concern would be more about backup and recovery, version upgrades, pg_dump, etc. not about daily operations.Cheers,Jeff", "msg_date": "Fri, 8 May 2020 09:44:53 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." }, { "msg_contents": "On Thu, May 7, 2020 at 5:17 PM Avinash Kumar <[email protected]>\nwrote:\n\n> Hi,\n>\n> On Thu, May 7, 2020 at 6:08 PM Rory Campbell-Lange <\n> [email protected]> wrote:\n>\n>> One of our clusters has well over 500 databases fronted by pg_bouncer.\n>>\n>> We get excellent connection \"flattening\" using pg_bouncer with\n>> per-database connection spikes dealt with through a reserve pool.\n>>\n> What if you see at least 4 connections being established by each client\n> during peak ? And if you serve 4 or 2 connections per each DB, then you\n> are creating 1000 or more reserved connections with 500 DBs in a cluster.\n>\n\nDoes every database spike at the same time?\n\n\n>\n>> The nice thing about separate databases is that it is easy to scale\n>> horizontally.\n>>\n> Agreed. But, how about autovacuum ? Workers shift from DB to DB and 500\n> clusters means you may have to have a lot of manual vacuuming in place as\n> well.\n>\n\nWhy would having difference schemas in different DBs change your manual\nvacuuming needs? And if anything, having separate DBs will make\nautovacuuming more efficient, as it keeps the statistics collectors stats\nfiles smaller.\n\nCheers,\n\nJeff\n\n>\n\nOn Thu, May 7, 2020 at 5:17 PM Avinash Kumar <[email protected]> wrote:Hi,On Thu, May 7, 2020 at 6:08 PM Rory Campbell-Lange <[email protected]> wrote:One of our clusters has well over 500 databases fronted by pg_bouncer.\n\nWe get excellent connection \"flattening\" using pg_bouncer with\nper-database connection spikes dealt with through a reserve pool.What if you see at least 4 connections being established by each client during peak ? And if you serve 4 or 2  connections per each DB, then you are creating 1000 or more reserved connections with 500 DBs in a cluster. Does every database spike at the same time? \n\nThe nice thing about separate databases is that it is easy to scale\nhorizontally.Agreed. But, how about autovacuum ? Workers shift from DB to DB and 500 clusters means you may have to have a lot of manual vacuuming in place as well.Why would having difference schemas in different DBs change your manual vacuuming needs?  And if anything, having separate DBs will make autovacuuming more efficient, as it keeps the statistics collectors stats files smaller. Cheers,Jeff", "msg_date": "Fri, 8 May 2020 09:54:25 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_attribute, pg_class, pg_depend grow huge in count and size\n with multiple tenants." } ]
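A concrete sketch of the two follow-ups suggested in the thread above. The catalog names are the standard system catalogs; the two setting values are illustrative placeholders rather than recommendations, and ALTER SYSTEM needs superuser rights.

```
-- Putting numbers on "huge": size and estimated row count of the catalogs
-- mentioned above, run once per database in the cluster.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size,
       reltuples::bigint                           AS est_rows
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend')
ORDER BY pg_total_relation_size(oid) DESC;

-- Raising the worker count only helps if the shared cost budget rises with it.
ALTER SYSTEM SET autovacuum_max_workers = 6;          -- takes effect after a restart
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000; -- a reload is enough
SELECT pg_reload_conf();
```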
[ { "msg_contents": "Hello Team,\n\nWe are using a PostgreSQL version -9.6.12 version and from last 4 weeks our\nTransaction ID's (XID's) have increased by 195 million to 341 million\ntransactions. I see the below from pg_stat_activity from the postGreSQL DB.\n\n1) Viewing the pg_stat-activity I noticed that the vacuum query is\nrunning for a runtime interval of few hours to 3-5 days whenever I check\nthe pg_stat-activity. Is this a common process postgreSQL runs ? I have\nnoticed this running and show in the pg_stat activity from last few weeks\nonly. Also the query shows the table name with\n(to prevent wrap around) for each of the tables in the vacuum query as\noutput. What does this mean ?\n\n2) Does it mean I need to run a manual auto vacuum process for these\ntables ? as the transaction ids have increased from 195 million to 341\nmillion ?.\n\nWhat other things I need to check in the database around this ?.\n\nThanks !!\n\nHello Team,We are using a PostgreSQL version -9.6.12 version and from last 4 weeks our Transaction ID's (XID's) have increased by 195 million to 341 million transactions.  I see the below from pg_stat_activity from the postGreSQL DB.1) Viewing the pg_stat-activity  I noticed  that the vacuum query is running for a runtime interval of few hours to 3-5 days whenever I check the pg_stat-activity. Is this a common process postgreSQL runs ? I have noticed this running and show in the pg_stat activity from last few weeks only. Also the query shows the table name with (to prevent wrap around) for each of the tables in the vacuum query as output. What does this mean ?2) Does it mean I need to run a manual auto vacuum process for these tables ? as the transaction ids have increased from 195 million to 341 million ?.What other things I need to check in the database around this ?.Thanks !!", "msg_date": "Thu, 7 May 2020 13:23:04 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "AutoVacuum and growing transaction XID's" }, { "msg_contents": "It is trying to do a vacuum freeze. Do you have autovacuum turned off? Any\nsettings changed from default related to autovacuum?\n\nhttps://www.postgresql.org/docs/9.6/routine-vacuuming.html\nRead 24.1.5. Preventing Transaction ID Wraparound Failures\n\nThese may also be of help-\nhttps://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresql\nhttps://www.2ndquadrant.com/en/blog/managing-freezing/\n\nNote that you need to ensure the server gets caught up, or you risk being\nlocked out to prevent data corruption.\n\nIt is trying to do a vacuum freeze. Do you have autovacuum turned off? Any settings changed from default related to autovacuum?https://www.postgresql.org/docs/9.6/routine-vacuuming.htmlRead 24.1.5. Preventing Transaction ID Wraparound FailuresThese may also be of help-https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresqlhttps://www.2ndquadrant.com/en/blog/managing-freezing/Note that you need to ensure the server gets caught up, or you risk being locked out to prevent data corruption.", "msg_date": "Thu, 7 May 2020 12:32:32 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "On Thu, May 7, 2020 at 1:33 PM Michael Lewis <[email protected]> wrote:\n\n> It is trying to do a vacuum freeze. Do you have autovacuum turned off? 
Any\n> settings changed from default related to autovacuum?\n>\n> https://www.postgresql.org/docs/9.6/routine-vacuuming.html\n> Read 24.1.5. Preventing Transaction ID Wraparound Failures\n>\n> These may also be of help-\n>\n> https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresql\n> https://www.2ndquadrant.com/en/blog/managing-freezing/\n>\n> Note that you need to ensure the server gets caught up, or you risk being\n> locked out to prevent data corruption.\n>\n\n Thanks Mike.\n1) We haven't changed anything related to autovacuum except a work_mem\nparameter which was increased to 4 GB which I believe is not related to\nautovacuum\n2) The vacuum was not turned off and few parameters we had on vacuum are\n *autovacuum_analyze_scale_factor = 0.02* and\n*autovacuum_vacuum_scale_factor\n= 0.05*\n*3) *The database curently we are running is 2 years old for now and we\nhave around close to 40 partitions and the datfrozenxid on the table is 343\nmillion whereas the default is 200 million. I would try doing a manual\nauto vacuum on those tables\nwhere the autovacuum_freeze_max_age > 200 million. Do you think It's a\nright thing to do ?.\n\nI will also go through this documents.\n\nTahnks\n\nOn Thu, May 7, 2020 at 1:33 PM Michael Lewis <[email protected]> wrote:It is trying to do a vacuum freeze. Do you have autovacuum turned off? Any settings changed from default related to autovacuum?https://www.postgresql.org/docs/9.6/routine-vacuuming.htmlRead 24.1.5. Preventing Transaction ID Wraparound FailuresThese may also be of help-https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresqlhttps://www.2ndquadrant.com/en/blog/managing-freezing/Note that you need to ensure the server gets caught up, or you risk being locked out to prevent data corruption.  Thanks Mike. 1)  We haven't changed anything related to autovacuum except a work_mem parameter which was increased to 4 GB which I believe is not related to autovacuum 2)  The vacuum was not turned off and few parameters we had on vacuum are                  autovacuum_analyze_scale_factor = 0.02 and autovacuum_vacuum_scale_factor = 0.053) The database curently we are running is 2 years old for now and we have around close to 40 partitions and the datfrozenxid on the table is 343 million whereas the default is 200 million.  I would try doing a manual auto vacuum on those tableswhere the autovacuum_freeze_max_age > 200 million. Do you think It's a right thing to do ?.I will also go through this documents. Tahnks", "msg_date": "Thu, 7 May 2020 16:18:03 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "On Thu, May 7, 2020 at 4:18 PM github kran <[email protected]> wrote:\n\n>\n>\n> On Thu, May 7, 2020 at 1:33 PM Michael Lewis <[email protected]> wrote:\n>\n>> It is trying to do a vacuum freeze. Do you have autovacuum turned off?\n>> Any settings changed from default related to autovacuum?\n>>\n>> https://www.postgresql.org/docs/9.6/routine-vacuuming.html\n>> Read 24.1.5. 
Preventing Transaction ID Wraparound Failures\n>>\n>> These may also be of help-\n>>\n>> https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresql\n>> https://www.2ndquadrant.com/en/blog/managing-freezing/\n>>\n>> Note that you need to ensure the server gets caught up, or you risk being\n>> locked out to prevent data corruption.\n>>\n>\n> Thanks Mike.\n> 1) We haven't changed anything related to autovacuum except a work_mem\n> parameter which was increased to 4 GB which I believe is not related to\n> autovacuum\n> 2) The vacuum was not turned off and few parameters we had on vacuum are\n> *autovacuum_analyze_scale_factor = 0.02* and *autovacuum_vacuum_scale_factor\n> = 0.05*\n> *3) *The database curently we are running is 2 years old for now and we\n> have around close to 40 partitions and the *datfrozenxid on the table is\n> 343 million whereas the default is 200 million*. I would try doing a\n> manual auto vacuum on those tables\n> where the *autovacuum_freeze_max_age > 200 million*. Do you think It's a\n> right thing to do ?.\n>\n> I will also go through this documents.\n>\n\n* Few more things 5/7 - 8:40 PM CDT*\n 1) I see there are *8 Vacuum workers* ( Not sure what changed) running\nin the background and the concern I have is all of these vacuum processes\nare running with wrap around and while they are running\n I can't either DROP or ALTER any other tables ( REMOVE Inheritance\nfor any of old tables where the WRITES are not getting written to).* Any of\nthe ALTER TABLE OR DROP TABLE DDL's arer not getting exeucted even I\nWAITED FOR SEVERAL MINUTES , so I have terminated those queries as I didn't\nhave luck.*\n 2) T*he VACUUM Process wrap around is running for last 1 day and\nseveral hrs on other tables. *\n 3) *Can I increase the autovacuum_freeze_max_age on the tables\non production system* ?\n\n>\n> Thanks\n>\n\n\n\n>\n>\n\nOn Thu, May 7, 2020 at 4:18 PM github kran <[email protected]> wrote:On Thu, May 7, 2020 at 1:33 PM Michael Lewis <[email protected]> wrote:It is trying to do a vacuum freeze. Do you have autovacuum turned off? Any settings changed from default related to autovacuum?https://www.postgresql.org/docs/9.6/routine-vacuuming.htmlRead 24.1.5. Preventing Transaction ID Wraparound FailuresThese may also be of help-https://info.crunchydata.com/blog/managing-transaction-id-wraparound-in-postgresqlhttps://www.2ndquadrant.com/en/blog/managing-freezing/Note that you need to ensure the server gets caught up, or you risk being locked out to prevent data corruption.  Thanks Mike. 1)  We haven't changed anything related to autovacuum except a work_mem parameter which was increased to 4 GB which I believe is not related to autovacuum 2)  The vacuum was not turned off and few parameters we had on vacuum are                  autovacuum_analyze_scale_factor = 0.02 and autovacuum_vacuum_scale_factor = 0.053) The database curently we are running is 2 years old for now and we have around close to 40 partitions and the datfrozenxid on the table is 343 million whereas the default is 200 million.  I would try doing a manual auto vacuum on those tableswhere the autovacuum_freeze_max_age > 200 million. Do you think It's a right thing to do ?.I will also go through this documents.    
Few more things 5/7 - 8:40 PM CDT   1)  I see there are 8 Vacuum workers ( Not sure what changed) running in the background and the concern I have is all of these vacuum processes are running with wrap around and while they are running      I can't either DROP or ALTER any other tables ( REMOVE Inheritance for any of old tables where the WRITES are not getting written to). Any of the ALTER TABLE OR DROP TABLE  DDL's arer not getting exeucted even I WAITED FOR SEVERAL MINUTES , so I have terminated those queries as I didn't have luck.   2)  The VACUUM Process wrap around is running for last 1 day and several hrs on other tables.    3)  Can I increase the \n\nautovacuum_freeze_max_age on the tables on production system ?  Thanks", "msg_date": "Thu, 7 May 2020 20:51:31 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "On Fri, 8 May 2020 at 09:18, github kran <[email protected]> wrote:\n> 1) We haven't changed anything related to autovacuum except a work_mem parameter which was increased to 4 GB which I believe is not related to autovacuum\n\nIt might want to look into increasing vacuum_cost_limit to something\nwell above 200 or dropping autovacuum_vacuum_cost_delay down from 20\nto something much lower. However, you say you've not changed the\nautovacuum settings, but you've also said:\n\n> 1) I see there are 8 Vacuum workers ( Not sure what changed) running in the background and the concern I have is all of these vacuum processes are running with wrap around and while they are running\n\nThe default is 3, so if you have 8 then the settings are non-standard.\n\nIt might be good to supply the output of:\n\nSELECT name,setting from pg_Settings where name like '%vacuum%';\n\nYou should know that the default speed that autovacuum runs at is\nquite slow in 9.6. If you end up with all your autovacuum workers tied\nup with anti-wraparound vacuums then other tables are likely to get\nneglected and that could lead to stale stats or bloated tables. Best\nto aim to get auto-vacuum running faster or aim to perform some manual\nvacuums of tables that are over their max freeze age during an\noff-peak period to make use of the lower load during those times.\nStart with tables in pg_class with the largest age(relfrozenxid).\nYou'll still likely want to look at the speed autovacuum runs at\neither way.\n\nPlease be aware that the first time a new cluster crosses the\nautovacuum_freeze_max_age threshold can be a bit of a pain point as it\ncan mean that many tables require auto-vacuum activity all at once.\nThe impact of this is compounded if you have many tables that never\nreceive an UPDATE/DELETE as auto-vacuum, in 9.6, does not visit those\ntables for any other reason. After the first time, the relfrozenxids\nof tables tend to be more staggered so their vacuum freeze\nrequirements are also more staggered and that tends to cause fewer\nproblems.\n\nDavid\n\n\n", "msg_date": "Fri, 8 May 2020 16:01:28 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "On Fri, 8 May 2020 at 13:51, github kran <[email protected]> wrote:\n> I can't either DROP or ALTER any other tables ( REMOVE Inheritance for any of old tables where the WRITES are not getting written to). 
Any of the ALTER TABLE OR DROP TABLE DDL's arer not getting exeucted even I WAITED FOR SEVERAL MINUTES , so I have terminated those queries as I didn't have luck.\n\nThe auto-vacuum freeze holds an SharedUpdateExclusiveLock on the table\nbeing vacuumed. If you try any DDL that requires an\nAccessExclusiveLock, it'll have to wait until the vacuum has\ncompleted. If you leave the DDL running then all accesses to the table\nwill be queued behind the ungranted AccessExclusiveLock. It's likely\na good idea to always run DDL with a fairly short lock_timeout, just\nin case this happens.\n\n> 3) Can I increase the autovacuum_freeze_max_age on the tables on production system ?\n\nYes, but you cannot increase the per-table setting above the global\nsetting. Changing the global setting requires a restart.\n\nDavid\n\n\n", "msg_date": "Fri, 8 May 2020 16:04:19 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "Thanks David for your replies.\n\nOn Thu, May 7, 2020 at 11:01 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 8 May 2020 at 09:18, github kran <[email protected]> wrote:\n> > 1) We haven't changed anything related to autovacuum except a work_mem\n> parameter which was increased to 4 GB which I believe is not related to\n> autovacuum\n>\n> It might want to look into increasing vacuum_cost_limit to something\n> well above 200 or dropping autovacuum_vacuum_cost_delay down from 20\n> to something much lower. However, you say you've not changed the\n> autovacuum settings, but you've also said:\n>\n> > 1) I see there are 8 Vacuum workers ( Not sure what changed) running\n> in the background and the concern I have is all of these vacuum processes\n> are running with wrap around and while they are running\n>\n\n - Yes I said it was originally 3 but I noticed the work_mem parameter\n was changed few weeks back to 4 GB and then from that day onwards there is\n an increasing trend of the MaxUsedTransactionIds from 200 Million to 347\n million ( It's growing day by day from last 2 -3 weeks)\n - Do you think there could be a formula on how the workers could have\n increased based on this increase in WORK_MEM controlled by database ?.\n\n\n> The default is 3, so if you have 8 then the settings are non-standard.\n>\n> It might be good to supply the output of:\n>\n> SELECT name,setting from pg_Settings where name like '%vacuum%';\n>\n Output of vacuum\n\nname setting min_val max_val boot_val reset_val\nautovacuum on null null on on\nautovacuum_analyze_scale_factor 0.02 0 100 0.1 0.02\nautovacuum_analyze_threshold 50 0 2147483647 50 50\nautovacuum_freeze_max_age 200000000 100000 2000000000 200000000 200000000\nautovacuum_max_workers 8 1 262143 3 8\nautovacuum_multixact_freeze_max_age 400000000 10000 2000000000 400000000\n400000000\nautovacuum_naptime 5 1 2147483 60 5\nautovacuum_vacuum_cost_delay 5 -1 100 20 5\nautovacuum_vacuum_cost_limit -1 -1 10000 -1 -1\nautovacuum_vacuum_scale_factor 0.05 0 100 0.2 0.05\nautovacuum_vacuum_threshold 50 0 2147483647 50 50\nautovacuum_work_mem -1 -1 2147483647 -1 -1\n\n\n>\n> You should know that the default speed that autovacuum runs at is\n> quite slow in 9.6. If you end up with all your autovacuum workers tied\n> up with anti-wraparound vacuums then other tables are likely to get\n> neglected and that could lead to stale stats or bloated tables. 
Best\n> to aim to get auto-vacuum running faster or aim to perform some manual\n> vacuums of tables that are over their max freeze age during an\n> off-peak period to make use of the lower load during those times.\n> Start with tables in pg_class with the largest age(relfrozenxid).\n> You'll still likely want to look at the speed autovacuum runs at\n> either way.\n>\n> Please be aware that the first time a new cluster crosses the\n> autovacuum_freeze_max_age threshold can be a bit of a pain point as it\n> can mean that many tables require auto-vacuum activity all at once.\n> The impact of this is compounded if you have many tables that never\n> receive an UPDATE/DELETE as auto-vacuum, in 9.6, does not visit those\n> tables for any other reason. After the first time, the relfrozenxids\n> of tables tend to be more staggered so their vacuum freeze\n> requirements are also more staggered and that tends to cause fewer\n> problems.\n>\n\n The current situation I have is the auto vacuum kicked with 8 tables with\neach of those tied to each worker and it's running very slow in 9.6 as you\nmentioned\n i observed VACUUM on those 8 tables is running from last 15 hrs and\nother process are running for 1 hr+ and others for few minutes for\ndifferent tables.\n\n Finally I would wait for your reply to see what could be done for this\nVACUUM and growing TXIDs values.\n\n - Do you think I should consider changing back the work_mem back to 4\n MB what it was originally ?\n - Can I apply your recommendations on a production instance directly\n or you prefer me to apply initially in other environment before applying on\n Prod ?\n - Also like I said I want to clean up few unused tables OR MANUAL\n VACUUM but current system doesn't allow me to do it considering these\n factors.\n - I will try to run VACUUM Manually during off peak hrs , Can I STOP\n the Manual VACUUM process if its take more than 10 minutes or what is the\n allowed time in mins I can have it running ?.\n\nDavid\n>\n\nThanks David for your replies.On Thu, May 7, 2020 at 11:01 PM David Rowley <[email protected]> wrote:On Fri, 8 May 2020 at 09:18, github kran <[email protected]> wrote:> 1)  We haven't changed anything related to autovacuum except a work_mem parameter which was increased to 4 GB which I believe is not related to autovacuum\n\nIt might want to look into increasing vacuum_cost_limit to something\nwell above 200 or dropping autovacuum_vacuum_cost_delay down from 20\nto something much lower. 
However, you say you've not changed theautovacuum settings, but you've also said:>    1)  I see there are 8 Vacuum workers ( Not sure what changed) running in the background and the concern I have is all of these vacuum processes are running with wrap around and while they are running  Yes I said it was originally 3 but I noticed  the work_mem parameter was changed few weeks back to 4 GB and then from that day onwards there is an increasing trend of  the MaxUsedTransactionIds from 200 Million to 347 million ( It's growing day by day from last 2 -3 weeks)Do you think there could be a formula on how the workers could have increased based on this increase in WORK_MEM controlled by database ?.\n\nThe default is 3, so if you have 8 then the settings are non-standard.\n\nIt might be good to supply the output of:\n\nSELECT name,setting from pg_Settings where name like '%vacuum%';   Output of vacuum   \n\n\n\n\n\n\nname\nsetting\nmin_val\nmax_val\nboot_val\nreset_val\n\n\nautovacuum\non\nnull\nnull\non\non\n\n\nautovacuum_analyze_scale_factor\n0.02\n0\n100\n0.1\n0.02\n\n\nautovacuum_analyze_threshold\n50\n0\n2147483647\n50\n50\n\n\nautovacuum_freeze_max_age\n200000000\n100000\n2000000000\n200000000\n200000000\n\n\nautovacuum_max_workers\n8\n1\n262143\n3\n8\n\n\nautovacuum_multixact_freeze_max_age\n400000000\n10000\n2000000000\n400000000\n400000000\n\n\nautovacuum_naptime\n5\n1\n2147483\n60\n5\n\n\nautovacuum_vacuum_cost_delay\n5\n-1\n100\n20\n5\n\n\nautovacuum_vacuum_cost_limit\n-1\n-1\n10000\n-1\n-1\n\n\nautovacuum_vacuum_scale_factor\n0.05\n0\n100\n0.2\n0.05\n\n\nautovacuum_vacuum_threshold\n50\n0\n2147483647\n50\n50\n\n\nautovacuum_work_mem\n-1\n-1\n2147483647\n-1\n-1\n\n \n\nYou should know that the default speed that autovacuum runs at is\nquite slow in 9.6. If you end up with all your autovacuum workers tied\nup with anti-wraparound vacuums then other tables are likely to get\nneglected and that could lead to stale stats or bloated tables. Best\nto aim to get auto-vacuum running faster or aim to perform some manual\nvacuums of tables that are over their max freeze age during an\noff-peak period to make use of the lower load during those times.\nStart with tables in pg_class with the largest age(relfrozenxid).\nYou'll still likely want to look at the speed autovacuum runs at\neither way.\n\nPlease be aware that the first time a new cluster crosses the\nautovacuum_freeze_max_age threshold can be a bit of a pain point as it\ncan mean that many tables require auto-vacuum activity all at once.\nThe impact of this is compounded if you have many tables that never\nreceive an UPDATE/DELETE as auto-vacuum, in 9.6, does not visit those\ntables for any other reason. After the first time, the relfrozenxids\nof tables tend to be more staggered so their vacuum freeze\nrequirements are also more staggered and that tends to cause fewer\nproblems.  The current situation I have is the auto vacuum kicked with 8 tables with each of those tied to each worker and it's running very slow in 9.6 as you mentioned   i observed VACUUM  on those 8 tables is running from last 15 hrs and other process are running for 1 hr+ and others for few minutes for different tables.   Finally I would wait for your reply to see what could be done for this VACUUM and growing TXIDs  values.     Do you think I should consider changing back the work_mem back to 4 MB what it was originally ?  Can I apply your recommendations on a production instance directly or you prefer me to apply initially in other environment before applying on Prod ? 
 Also like I said I want to clean up few unused tables OR MANUAL VACUUM but current system doesn't allow me to do it considering these factors. I will try to run VACUUM Manually during off peak hrs , Can I STOP the Manual VACUUM process if its take more than 10 minutes or what is the allowed time in mins I can have it running  ?.\nDavid", "msg_date": "Fri, 8 May 2020 01:06:55 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "On Thu, May 7, 2020 at 11:04 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 8 May 2020 at 13:51, github kran <[email protected]> wrote:\n> > I can't either DROP or ALTER any other tables ( REMOVE Inheritance\n> for any of old tables where the WRITES are not getting written to). Any of\n> the ALTER TABLE OR DROP TABLE DDL's arer not getting exeucted even I\n> WAITED FOR SEVERAL MINUTES , so I have terminated those queries as I didn't\n> have luck.\n>\n> The auto-vacuum freeze holds an SharedUpdateExclusiveLock on the table\n> being vacuumed. If you try any DDL that requires an\n> AccessExclusiveLock, it'll have to wait until the vacuum has\n> completed. If you leave the DDL running then all accesses to the table\n> will be queued behind the ungranted AccessExclusiveLock. It's likely\n> a good idea to always run DDL with a fairly short lock_timeout, just\n> in case this happens.\n>\n* How much value I can assign to lock_timeout so that I dont get into\ntrouble to test my DDL commands and without impacting other sessions.*\n\n>\n> > 3) Can I increase the autovacuum_freeze_max_age on the tables on\n> production system ?\n>\n\n\n>\n> Yes, but you cannot increase the per-table setting above the global\n> setting. Changing the global setting requires a restart.\n>\n> How can I change the value of the global setting of the\nautovacuum_freeze_max_Age value.\n\n\n> David\n>\n\nOn Thu, May 7, 2020 at 11:04 PM David Rowley <[email protected]> wrote:On Fri, 8 May 2020 at 13:51, github kran <[email protected]> wrote:\n>       I can't either DROP or ALTER any other tables ( REMOVE Inheritance for any of old tables where the WRITES are not getting written to). Any of the ALTER TABLE OR DROP TABLE  DDL's arer not getting exeucted even I WAITED FOR SEVERAL MINUTES , so I have terminated those queries as I didn't have luck.\n\nThe auto-vacuum freeze holds an SharedUpdateExclusiveLock on the table\nbeing vacuumed. If you try any DDL that requires an\nAccessExclusiveLock, it'll have to wait until the vacuum has\ncompleted. If you leave the DDL running then all accesses to the table\nwill be queued behind the ungranted AccessExclusiveLock.  It's likely\na good idea to always run DDL with a fairly short lock_timeout, just\nin case this happens.  How much value I can assign to lock_timeout so that I dont get into trouble to test my DDL commands and without impacting other sessions.\n\n>    3)  Can I increase the  autovacuum_freeze_max_age on the tables on production system ?   \n\nYes, but you cannot increase the per-table setting above the global\nsetting. Changing the global setting requires a restart.\n   How can I change the value of the global setting of the autovacuum_freeze_max_Age value.     \nDavid", "msg_date": "Fri, 8 May 2020 01:17:17 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "autovacuum_naptime being only 5 seconds seems too frequent. 
A lock_timeout\nmight be 1-5 seconds depending on your system. Usually, DDL can fail and\nwait a little time rather than lock the table for minutes and have all\nreads back up behind the DDL.\n\nGiven you have autovacuum_vacuum_cost_limit set to unlimited (seems very\nodd), I'm not sure a manual vacuum freeze command on the tables with high\nage would perform differently. Still, issuing a vacuum freeze and then\nkilling the autovacuum process might be worth trying.\n\nautovacuum_naptime being only 5 seconds seems too frequent. A lock_timeout might be 1-5 seconds depending on your system. Usually, DDL can fail and wait a little time rather than lock the table for minutes and have all reads back up behind the DDL.Given you have autovacuum_vacuum_cost_limit set to unlimited (seems very odd), I'm not sure a manual vacuum freeze command on the tables with high age would perform differently. Still, issuing a vacuum freeze and then killing the autovacuum process might be worth trying.", "msg_date": "Fri, 8 May 2020 15:11:04 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AutoVacuum and growing transaction XID's" }, { "msg_contents": "Thanks for yous suggestions Michael and David.\n\nOn Fri, May 8, 2020 at 4:11 PM Michael Lewis <[email protected]> wrote:\n\n> autovacuum_naptime being only 5 seconds seems too frequent. A lock_timeout\n> might be 1-5 seconds depending on your system. Usually, DDL can fail and\n> wait a little time rather than lock the table for minutes and have all\n> reads back up behind the DDL.\n>\n> Given you have autovacuum_vacuum_cost_limit set to unlimited (seems very\n> odd), I'm not sure a manual vacuum freeze command on the tables with high\n> age would perform differently. Still, issuing a vacuum freeze and then\n> killing the autovacuum process might be worth trying.\n>\n\nThanks for yous suggestions Michael  and David.On Fri, May 8, 2020 at 4:11 PM Michael Lewis <[email protected]> wrote:autovacuum_naptime being only 5 seconds seems too frequent. A lock_timeout might be 1-5 seconds depending on your system. Usually, DDL can fail and wait a little time rather than lock the table for minutes and have all reads back up behind the DDL.Given you have autovacuum_vacuum_cost_limit set to unlimited (seems very odd), I'm not sure a manual vacuum freeze command on the tables with high age would perform differently. Still, issuing a vacuum freeze and then killing the autovacuum process might be worth trying.", "msg_date": "Tue, 12 May 2020 10:40:25 -0500", "msg_from": "github kran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AutoVacuum and growing transaction XID's" } ]
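A sketch of the steps discussed above for working off the wraparound backlog, assuming only standard catalogs; the table names in the last three statements are placeholders for whatever relations show up with the highest ages.

```
-- Start where David Rowley suggests: tables with the largest age(relfrozenxid).
SELECT c.oid::regclass     AS table_name,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind IN ('r', 'm', 't')
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;

-- During an off-peak window, an aggressive manual vacuum of one of them:
VACUUM (FREEZE, VERBOSE) some_old_partition;

-- For the blocked DDL, fail fast instead of queueing every reader behind it:
SET lock_timeout = '3s';
ALTER TABLE some_old_partition NO INHERIT parent_table;
```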
[ { "msg_contents": "I noticed something peculiar while optimizing complex views today. The query planner does not skip inner joins that, to my understanding, can have no impact on the result. Am I missing a situation where these joins could impact the result?\n\nThe following demonstrates the problem without the complex views. It also demonstrates how the planner simplifies a LEFT JOIN in the same situation. The left and right sides of an inner join could be swapped, obviously, but here I kept the unique constraint on the right.\n\n\n\nCREATE TABLE foo (\n id INTEGER PRIMARY KEY\n);\n\nCREATE TABLE bar (\n foo_id INTEGER NOT NULL REFERENCES foo\n);\n\n-- This simplifies to SELECT COUNT(*) FROM bar;\nEXPLAIN SELECT COUNT(*)\nFROM bar\nLEFT JOIN foo ON bar.foo_id = foo.id;\n\n-- This should simplify to SELECT COUNT(*) FROM bar WHERE foo_id IS NOT NULL;\n-- The presence of a NOT NULL constraint on foo_id has no effect.\nEXPLAIN SELECT COUNT(*)\nFROM bar\nINNER JOIN foo ON bar.foo_id = foo.id;\n\n\n\n QUERY PLAN \n-------------------------------------------------------------\n Aggregate (cost=38.25..38.26 rows=1 width=8)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=0)\n(2 rows)\n\n QUERY PLAN \n-------------------------------------------------------------------------\n Aggregate (cost=111.57..111.58 rows=1 width=8)\n -> Hash Join (cost=67.38..105.92 rows=2260 width=0)\n Hash Cond: (bar.foo_id_not_null = foo.id)\n -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n(6 rows)\n\n version \n-------------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.2 on x86_64-apple-darwin19.4.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.59), 64-bit\n(1 row)\n\n\n", "msg_date": "Wed, 13 May 2020 22:44:56 -0700", "msg_from": "\"Matthew Nelson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Plan not skipping unnecessary inner join" }, { "msg_contents": "> Am I missing a situation where these joins could impact the result?\n\nYes it will impact the number of rows in the result. for example if foo is empty then postgres is required to return no results, regardless of how many rows are in bar. This is why it can ignore the table in the left join\n\n— David\n\n> On 14 May 2020, at 1:44 pm, Matthew Nelson <[email protected]> wrote:\n> \n> I noticed something peculiar while optimizing complex views today. The query planner does not skip inner joins that, to my understanding, can have no impact on the result. Am I missing a situation where these joins could impact the result?\n> \n> The following demonstrates the problem without the complex views. It also demonstrates how the planner simplifies a LEFT JOIN in the same situation. 
The left and right sides of an inner join could be swapped, obviously, but here I kept the unique constraint on the right.\n> \n> \n> \n> CREATE TABLE foo (\n> id INTEGER PRIMARY KEY\n> );\n> \n> CREATE TABLE bar (\n> foo_id INTEGER NOT NULL REFERENCES foo\n> );\n> \n> -- This simplifies to SELECT COUNT(*) FROM bar;\n> EXPLAIN SELECT COUNT(*)\n> FROM bar\n> LEFT JOIN foo ON bar.foo_id = foo.id;\n> \n> -- This should simplify to SELECT COUNT(*) FROM bar WHERE foo_id IS NOT NULL;\n> -- The presence of a NOT NULL constraint on foo_id has no effect.\n> EXPLAIN SELECT COUNT(*)\n> FROM bar\n> INNER JOIN foo ON bar.foo_id = foo.id;\n> \n> \n> \n> QUERY PLAN \n> -------------------------------------------------------------\n> Aggregate (cost=38.25..38.26 rows=1 width=8)\n> -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=0)\n> (2 rows)\n> \n> QUERY PLAN \n> -------------------------------------------------------------------------\n> Aggregate (cost=111.57..111.58 rows=1 width=8)\n> -> Hash Join (cost=67.38..105.92 rows=2260 width=0)\n> Hash Cond: (bar.foo_id_not_null = foo.id)\n> -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n> -> Hash (cost=35.50..35.50 rows=2550 width=4)\n> -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n> (6 rows)\n> \n> version \n> -------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 12.2 on x86_64-apple-darwin19.4.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.59), 64-bit\n> (1 row)\n> \n> \n\n\n\n", "msg_date": "Thu, 14 May 2020 14:45:23 +0800", "msg_from": "David Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan not skipping unnecessary inner join" }, { "msg_contents": "But foo cannot be empty given the foreign key constraint on bar.foo_id.\n\nI suppose that explains the planner's behavior. While today's planner can take a shortcut when it finds a unique constraint, it does not check for foreign key constraints matching the join condition? So the LEFT JOIN simplification is possible but not the INNER JOIN simplification.\n\nMatthew\n\nOn Wed, May 13, 2020, at 11:45 PM, David Wheeler wrote:\n> > Am I missing a situation where these joins could impact the result?\n> \n> Yes it will impact the number of rows in the result. for example if foo \n> is empty then postgres is required to return no results, regardless of \n> how many rows are in bar. This is why it can ignore the table in the \n> left join\n> \n> — David\n> \n> > On 14 May 2020, at 1:44 pm, Matthew Nelson <[email protected]> wrote:\n> > \n> > I noticed something peculiar while optimizing complex views today. The query planner does not skip inner joins that, to my understanding, can have no impact on the result. Am I missing a situation where these joins could impact the result?\n> > \n> > The following demonstrates the problem without the complex views. It also demonstrates how the planner simplifies a LEFT JOIN in the same situation. 
The left and right sides of an inner join could be swapped, obviously, but here I kept the unique constraint on the right.\n> > \n> > \n> > \n> > CREATE TABLE foo (\n> > id INTEGER PRIMARY KEY\n> > );\n> > \n> > CREATE TABLE bar (\n> > foo_id INTEGER NOT NULL REFERENCES foo\n> > );\n> > \n> > -- This simplifies to SELECT COUNT(*) FROM bar;\n> > EXPLAIN SELECT COUNT(*)\n> > FROM bar\n> > LEFT JOIN foo ON bar.foo_id = foo.id;\n> > \n> > -- This should simplify to SELECT COUNT(*) FROM bar WHERE foo_id IS NOT NULL;\n> > -- The presence of a NOT NULL constraint on foo_id has no effect.\n> > EXPLAIN SELECT COUNT(*)\n> > FROM bar\n> > INNER JOIN foo ON bar.foo_id = foo.id;\n> > \n> > \n> > \n> > QUERY PLAN \n> > -------------------------------------------------------------\n> > Aggregate (cost=38.25..38.26 rows=1 width=8)\n> > -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=0)\n> > (2 rows)\n> > \n> > QUERY PLAN \n> > -------------------------------------------------------------------------\n> > Aggregate (cost=111.57..111.58 rows=1 width=8)\n> > -> Hash Join (cost=67.38..105.92 rows=2260 width=0)\n> > Hash Cond: (bar.foo_id = foo.id)\n> > -> Seq Scan on bar (cost=0.00..32.60 rows=2260 width=4)\n> > -> Hash (cost=35.50..35.50 rows=2550 width=4)\n> > -> Seq Scan on foo (cost=0.00..35.50 rows=2550 width=4)\n> > (6 rows)\n> > \n> > version \n> > -------------------------------------------------------------------------------------------------------------------\n> > PostgreSQL 12.2 on x86_64-apple-darwin19.4.0, compiled by Apple clang version 11.0.3 (clang-1103.0.32.59), 64-bit\n> > (1 row)\n> > \n> > \n> \n>\n\n\n", "msg_date": "Fri, 15 May 2020 10:36:58 -0700", "msg_from": "\"Matthew Nelson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan not skipping unnecessary inner join" } ]
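A sketch of one workaround that follows from the exchange above: because the foreign key plus the NOT NULL constraint already guarantee exactly one match, writing the view with LEFT JOIN is semantically equivalent here, and the planner can then drop the join whenever no columns from foo are referenced. The view name is invented for the example.

```
CREATE VIEW bar_with_foo AS
SELECT bar.foo_id, foo.id AS foo_pk
FROM bar
LEFT JOIN foo ON bar.foo_id = foo.id;

-- foo disappears from the plan: none of its columns are used and foo.id is unique.
EXPLAIN SELECT COUNT(*) FROM bar_with_foo;
```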
[ { "msg_contents": "I redid the same tests with vanila postgres and with empty tables.\nI'm surprised, why does the plan have 2550 rows in explain?\n\nregards,\nRanier Vilela\n\nI redid the same tests with vanila postgres and with empty tables.I'm surprised, why does the plan have 2550 rows in explain?regards,Ranier Vilela", "msg_date": "Sun, 17 May 2020 09:32:47 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan not skipping unnecessary inner join" }, { "msg_contents": "On Sun, May 17, 2020 at 09:32:47AM -0300, Ranier Vilela wrote:\n> I redid the same tests with vanila postgres and with empty tables.\n> I'm surprised, why does the plan have 2550 rows in explain?\n\nThat's the *estimated* rowcount.\n\nThe planner tends to ignore table statistics which say the table is empty,\nsince that can lead to a terrible plan if it's not true (stats are out of date\nor autovacuum threshold not hit).\n\nSee also here\nhttps://www.postgresql.org/message-id/20171110204043.GS8563%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 17 May 2020 08:31:24 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plan not skipping unnecessary inner join" }, { "msg_contents": "Em dom., 17 de mai. de 2020 às 10:31, Justin Pryzby <[email protected]>\nescreveu:\n\n> On Sun, May 17, 2020 at 09:32:47AM -0300, Ranier Vilela wrote:\n> > I redid the same tests with vanila postgres and with empty tables.\n> > I'm surprised, why does the plan have 2550 rows in explain?\n>\n> That's the *estimated* rowcount.\n>\n> The planner tends to ignore table statistics which say the table is empty,\n> since that can lead to a terrible plan if it's not true (stats are out of\n> date\n> or autovacuum threshold not hit).\n>\nThanks for the explanation.\n\nregards,\nRanier Vilela\n\nEm dom., 17 de mai. de 2020 às 10:31, Justin Pryzby <[email protected]> escreveu:On Sun, May 17, 2020 at 09:32:47AM -0300, Ranier Vilela wrote:\n> I redid the same tests with vanila postgres and with empty tables.\n> I'm surprised, why does the plan have 2550 rows in explain?\n\nThat's the *estimated* rowcount.\n\nThe planner tends to ignore table statistics which say the table is empty,\nsince that can lead to a terrible plan if it's not true (stats are out of date\nor autovacuum threshold not hit).Thanks for the explanation.regards,Ranier Vilela", "msg_date": "Sun, 17 May 2020 11:05:39 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Plan not skipping unnecessary inner join" } ]
[ { "msg_contents": "First time posting here, so please let me know what additional information\nyou'd like. Thanks!\n\n## A description of what you are trying to achieve and what results you\nexpect:\n\n- I have a program that dynamically generates SQL queries\n- I made changes to how that program generates the SELECT part of the\nqueries (and sub-queries)\n- I have a query that went from taking less than a second with the old\nversion of the SELECT to over 2 hours to complete with the new version of\nthe SELECT\n- I'd expect both versions of the query to take the same amount of time\n- The changes to the SELECT appear in a sub-query and even though the\nchanged columns in the sub-query are ultimately ignored by the query using\nthe sub-query\n\n## PostgreSQL version number you are running:\n\n- Originally discovered on 9.6 running directly on the host\n - PostgreSQL 9.6.17 on x86_64-pc-linux-gnu (Ubuntu 9.6.17-2.pgdg18.04+1),\ncompiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit\n- Recreated the issue under 9.6 and 12.2 using Docker\n - PostgreSQL 9.6.17 on x86_64-pc-linux-musl, compiled by gcc (Alpine\n9.2.0) 9.2.0, 64-bit\n - PostgreSQL 12.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.2.0)\n9.2.0, 64-bit\n\n## How you installed PostgreSQL:\n\n- Originally found in PostgreSQL installed on host\n - postgresql-9.6/bionic-pgdg,now 9.6.17-2.pgdg18.04+1 amd64 [installed]\n- Recreated in Docker containers for 9.6 and 12.2\n\n## Changes made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all:\n\n- Ran tests using all combinations of the following values:\n - work_mem (4MB and 4GB)\n - random_page_cost: (1, 2, 4)\n- I found no difference in performance with any combination of the above\n\n## Operating system and version:\n\n- Ubuntu 18.04\n- Linux helios 4.15.0-76-generic #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC\n2020 x86_64 x86_64 x86_64 GNU/Linux\n\n## What program you're using to connect to PostgreSQL:\n\n- psql\n\n## Is there anything relevant or unusual in the PostgreSQL server logs?:\n\n- Not that I've seen\n\n## For questions about any kind of error:\n\n- No errors\n\n## What you were doing when the error happened / how to cause the error:\n\n- As mentioned above, my dynamically generated SQL now includes a few extra\ncolumns in a sub-query than it used to. 
The query using the sub-query\nignores those columns, so I'd imagine no change in performance, but instead\nI see a 6000x increase in execution time.\n- I created a set of testing scripts that run a total of 48 variations on a\nfew queries and configuration options and have generated EXPLAINS for all\nof them.\n- I've included a copy of each test query and the resulting EXPLAIN\n- The files that include \"\\_present\" are those that have the new SQL and\nare running very slowly\n- The files the include \\_null, \\_deleted, \\_casted represent variations on\nthe queries that all run very quickly\n- Running VACUUM and/or ANALYZE does not seem to have an effect\n\n## Tables involved\n\n```\nexperiments=# set search_path to jigsaw_temp;\nSET\nexperiments=# \\dt\n List of relations\n Schema | Name | Type |\nOwner\n-------------+------------------------------------------------+-------+-------\n jigsaw_temp | jtemp1c37l3b_baseline_windows_after_inclusion | table | ryan\n jigsaw_temp | jtemp1c37l3b_baseline_windows_with_collections | table | ryan\n(2 rows)\n\nexperiments=# \\d jtemp1c37l3b_baseline_windows_after_inclusion;\n Table \"jigsaw_temp.jtemp1c37l3b_baseline_windows_after_inclusion\"\n Column | Type | Collation | Nullable | Default\n----------------------+------------------+-----------+----------+---------\n uuid | text | | |\n person_id | bigint | | |\n criterion_id | bigint | | |\n criterion_table | text | | |\n criterion_domain | text | | |\n start_date | date | | |\n end_date | date | | |\n source_value | text | | |\n source_vocabulary_id | text | | |\n drug_amount | double precision | | |\n drug_amount_units | text | | |\n drug_days_supply | integer | | |\n drug_name | text | | |\n drug_quantity | bigint | | |\n window_id | bigint | | |\n\nexperiments=# \\d jtemp1c37l3b_baseline_windows_with_collections\nTable \"jigsaw_temp.jtemp1c37l3b_baseline_windows_with_collections\"\n Column | Type | Collation | Nullable | Default\n-----------+--------+-----------+----------+---------\n person_id | bigint | | |\n uuid | text | | |\n\nexperiments=# SELECT relname, relpages, reltuples, relallvisible, relkind,\nrelnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\nWHERE relname='jtemp1c37l3b_baseline_windows_with_collections';\n relname | relpages | reltuples |\nrelallvisible | relkind | relnatts | relhassubclass | reloptions |\npg_table_size\n------------------------------------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n jtemp1c37l3b_baseline_windows_with_collections | 1433 | 138972 |\n 0 | r | 2 | f | |\n 11771904\n(1 row)\n\nexperiments=# SELECT relname, relpages, reltuples, relallvisible, relkind,\nrelnatts, relhassubclass, reloptions, pg_table_size(oid) FROM pg_class\nWHERE relname='jtemp1c37l3b_baseline_windows_after_inclusion';\n relname | relpages | reltuples |\nrelallvisible | relkind | relnatts | relhassubclass | reloptions |\npg_table_size\n-----------------------------------------------+----------+-----------+---------------+---------+----------+----------------+------------+---------------\n jtemp1c37l3b_baseline_windows_after_inclusion | 9187 | 505244 |\n 0 | r | 15 | f | | 75309056\n(1 row)\n```\n\nconfig\n\n```\n name | setting\n |\n description\n----------------------------------------+------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------\n allow_system_table_mods | off\n | 
Allows modifications of the structure of system tables.\n application_name | psql\n | Sets the application name to be reported in statistics and logs.\n archive_cleanup_command |\n | Sets the shell command that will be executed at every restart\npoint.\n archive_command | (disabled)\n | Sets the shell command that will be called to archive a WAL file.\n archive_mode | off\n | Allows archiving of WAL files using archive_command.\n archive_timeout | 0\n | Forces a switch to the next WAL file if a new file has not been\nstarted within N seconds.\n array_nulls | on\n | Enable input of NULL elements in arrays.\n authentication_timeout | 1min\n | Sets the maximum allowed time to complete client authentication.\n autovacuum | on\n | Starts the autovacuum subprocess.\n autovacuum_analyze_scale_factor | 0.1\n | Number of tuple inserts, updates, or deletes prior to analyze as a\nfraction of reltuples.\n autovacuum_analyze_threshold | 50\n | Minimum number of tuple inserts, updates, or deletes prior to\nanalyze.\n autovacuum_freeze_max_age | 200000000\n | Age at which to autovacuum a table to prevent transaction ID\nwraparound.\n autovacuum_max_workers | 3\n | Sets the maximum number of simultaneously running autovacuum\nworker processes.\n autovacuum_multixact_freeze_max_age | 400000000\n | Multixact age at which to autovacuum a table to prevent multixact\nwraparound.\n autovacuum_naptime | 1min\n | Time to sleep between autovacuum runs.\n autovacuum_vacuum_cost_delay | 2ms\n | Vacuum cost delay in milliseconds, for autovacuum.\n autovacuum_vacuum_cost_limit | -1\n | Vacuum cost amount available before napping, for autovacuum.\n autovacuum_vacuum_scale_factor | 0.2\n | Number of tuple updates or deletes prior to vacuum as a fraction\nof reltuples.\n autovacuum_vacuum_threshold | 50\n | Minimum number of tuple updates or deletes prior to vacuum.\n autovacuum_work_mem | -1\n | Sets the maximum memory to be used by each autovacuum worker\nprocess.\n backend_flush_after | 0\n | Number of pages after which previously performed writes are\nflushed to disk.\n backslash_quote | safe_encoding\n | Sets whether \"\\'\" is allowed in string literals.\n bgwriter_delay | 200ms\n | Background writer sleep time between rounds.\n bgwriter_flush_after | 512kB\n | Number of pages after which previously performed writes are\nflushed to disk.\n bgwriter_lru_maxpages | 100\n | Background writer maximum number of LRU pages to flush per round.\n bgwriter_lru_multiplier | 2\n | Multiple of the average buffer usage to free per round.\n block_size | 8192\n | Shows the size of a disk block.\n bonjour | off\n | Enables advertising the server via Bonjour.\n bonjour_name |\n | Sets the Bonjour service name.\n bytea_output | hex\n | Sets the output format for bytea.\n check_function_bodies | on\n | Check function bodies during CREATE FUNCTION.\n checkpoint_completion_target | 0.5\n | Time spent flushing dirty buffers during checkpoint, as fraction\nof checkpoint interval.\n checkpoint_flush_after | 256kB\n | Number of pages after which previously performed writes are\nflushed to disk.\n checkpoint_timeout | 5min\n | Sets the maximum time between automatic WAL checkpoints.\n checkpoint_warning | 30s\n | Enables warnings if checkpoint segments are filled more frequently\nthan this.\n client_encoding | UTF8\n | Sets the client's character set encoding.\n client_min_messages | notice\n | Sets the message levels that are sent to the client.\n cluster_name |\n | Sets the name of the cluster, which is included in the process\ntitle.\n 
commit_delay | 0\n | Sets the delay in microseconds between transaction commit and\nflushing WAL to disk.\n commit_siblings | 5\n | Sets the minimum concurrent open transactions before performing\ncommit_delay.\n config_file |\n/var/lib/postgresql/data/postgresql.conf | Sets the server's main\nconfiguration file.\n constraint_exclusion | partition\n | Enables the planner to use constraints to optimize queries.\n cpu_index_tuple_cost | 0.005\n | Sets the planner's estimate of the cost of processing each index\nentry during an index scan.\n cpu_operator_cost | 0.0025\n | Sets the planner's estimate of the cost of processing each\noperator or function call.\n cpu_tuple_cost | 0.01\n | Sets the planner's estimate of the cost of processing each tuple\n(row).\n cursor_tuple_fraction | 0.1\n | Sets the planner's estimate of the fraction of a cursor's rows\nthat will be retrieved.\n data_checksums | off\n | Shows whether data checksums are turned on for this cluster.\n data_directory | /var/lib/postgresql/data\n | Sets the server's data directory.\n data_directory_mode | 0700\n | Mode of the data directory.\n data_sync_retry | off\n | Whether to continue running after a failure to sync data files.\n DateStyle | ISO, MDY\n | Sets the display format for date and time values.\n db_user_namespace | off\n | Enables per-database user names.\n deadlock_timeout | 1s\n | Sets the time to wait on a lock before checking for deadlock.\n debug_assertions | off\n | Shows whether the running server has assertion checks enabled.\n debug_pretty_print | on\n | Indents parse and plan tree displays.\n debug_print_parse | off\n | Logs each query's parse tree.\n debug_print_plan | off\n | Logs each query's execution plan.\n debug_print_rewritten | off\n | Logs each query's rewritten parse tree.\n default_statistics_target | 100\n | Sets the default statistics target.\n default_table_access_method | heap\n | Sets the default table access method for new tables.\n default_tablespace |\n | Sets the default tablespace to create tables and indexes in.\n default_text_search_config | pg_catalog.english\n | Sets default text search configuration.\n default_transaction_deferrable | off\n | Sets the default deferrable status of new transactions.\n default_transaction_isolation | read committed\n | Sets the transaction isolation level of each new transaction.\n default_transaction_read_only | off\n | Sets the default read-only status of new transactions.\n dynamic_library_path | $libdir\n | Sets the path for dynamically loadable modules.\n dynamic_shared_memory_type | posix\n | Selects the dynamic shared memory implementation used.\n effective_cache_size | 4GB\n | Sets the planner's assumption about the total size of the data\ncaches.\n effective_io_concurrency | 1\n | Number of simultaneous requests that can be handled efficiently by\nthe disk subsystem.\n enable_bitmapscan | on\n | Enables the planner's use of bitmap-scan plans.\n enable_gathermerge | on\n | Enables the planner's use of gather merge plans.\n enable_hashagg | on\n | Enables the planner's use of hashed aggregation plans.\n enable_hashjoin | on\n | Enables the planner's use of hash join plans.\n enable_indexonlyscan | on\n | Enables the planner's use of index-only-scan plans.\n enable_indexscan | on\n | Enables the planner's use of index-scan plans.\n enable_material | on\n | Enables the planner's use of materialization.\n enable_mergejoin | on\n | Enables the planner's use of merge join plans.\n enable_nestloop | on\n | Enables the planner's use of nested-loop 
join plans.\n enable_parallel_append | on\n | Enables the planner's use of parallel append plans.\n enable_parallel_hash | on\n | Enables the planner's use of parallel hash plans.\n enable_partition_pruning | on\n | Enables plan-time and run-time partition pruning.\n enable_partitionwise_aggregate | off\n | Enables partitionwise aggregation and grouping.\n enable_partitionwise_join | off\n | Enables partitionwise join.\n enable_seqscan | on\n | Enables the planner's use of sequential-scan plans.\n enable_sort | on\n | Enables the planner's use of explicit sort steps.\n enable_tidscan | on\n | Enables the planner's use of TID scan plans.\n escape_string_warning | on\n | Warn about backslash escapes in ordinary string literals.\n event_source | PostgreSQL\n | Sets the application name used to identify PostgreSQL messages in\nthe event log.\n exit_on_error | off\n | Terminate session on any error.\n external_pid_file |\n | Writes the postmaster PID to the specified file.\n extra_float_digits | 1\n | Sets the number of digits displayed for floating-point values.\n force_parallel_mode | off\n | Forces use of parallel query facilities.\n from_collapse_limit | 8\n | Sets the FROM-list size beyond which subqueries are not collapsed.\n fsync | on\n | Forces synchronization of updates to disk.\n full_page_writes | on\n | Writes full pages to WAL when first modified after a checkpoint.\n geqo | on\n | Enables genetic query optimization.\n geqo_effort | 5\n | GEQO: effort is used to set the default for other GEQO parameters.\n geqo_generations | 0\n | GEQO: number of iterations of the algorithm.\n geqo_pool_size | 0\n | GEQO: number of individuals in the population.\n geqo_seed | 0\n | GEQO: seed for random path selection.\n geqo_selection_bias | 2\n | GEQO: selective pressure within the population.\n geqo_threshold | 12\n | Sets the threshold of FROM items beyond which GEQO is used.\n gin_fuzzy_search_limit | 0\n | Sets the maximum allowed result for exact search by GIN.\n gin_pending_list_limit | 4MB\n | Sets the maximum size of the pending list for GIN index.\n hba_file |\n/var/lib/postgresql/data/pg_hba.conf | Sets the server's \"hba\"\nconfiguration file.\n hot_standby | on\n | Allows connections and queries during recovery.\n hot_standby_feedback | off\n | Allows feedback from a hot standby to the primary that will avoid\nquery conflicts.\n huge_pages | try\n | Use of huge pages on Linux or Windows.\n ident_file |\n/var/lib/postgresql/data/pg_ident.conf | Sets the server's \"ident\"\nconfiguration file.\n idle_in_transaction_session_timeout | 0\n | Sets the maximum allowed duration of any idling transaction.\n ignore_checksum_failure | off\n | Continues processing after a checksum failure.\n ignore_system_indexes | off\n | Disables reading from system indexes.\n integer_datetimes | on\n | Datetimes are integer based.\n IntervalStyle | postgres\n | Sets the display format for interval values.\n jit | on\n | Allow JIT compilation.\n jit_above_cost | 100000\n | Perform JIT compilation if query is more expensive.\n jit_debugging_support | off\n | Register JIT compiled function with debugger.\n jit_dump_bitcode | off\n | Write out LLVM bitcode to facilitate JIT debugging.\n jit_expressions | on\n | Allow JIT compilation of expressions.\n jit_inline_above_cost | 500000\n | Perform JIT inlining if query is more expensive.\n jit_optimize_above_cost | 500000\n | Optimize JITed functions if query is more expensive.\n jit_profiling_support | off\n | Register JIT compiled function with perf profiler.\n 
jit_provider | llvmjit\n | JIT provider to use.\n jit_tuple_deforming | on\n | Allow JIT compilation of tuple deforming.\n join_collapse_limit | 8\n | Sets the FROM-list size beyond which JOIN constructs are not\nflattened.\n krb_caseins_users | off\n | Sets whether Kerberos and GSSAPI user names should be treated as\ncase-insensitive.\n krb_server_keyfile |\n | Sets the location of the Kerberos server key file.\n lc_collate | en_US.utf8\n | Shows the collation order locale.\n lc_ctype | en_US.utf8\n | Shows the character classification and case conversion locale.\n lc_messages | en_US.utf8\n | Sets the language in which messages are displayed.\n lc_monetary | en_US.utf8\n | Sets the locale for formatting monetary amounts.\n lc_numeric | en_US.utf8\n | Sets the locale for formatting numbers.\n lc_time | en_US.utf8\n | Sets the locale for formatting date and time values.\n listen_addresses | *\n | Sets the host name or IP address(es) to listen to.\n lo_compat_privileges | off\n | Enables backward compatibility mode for privilege checks on large\nobjects.\n local_preload_libraries |\n | Lists unprivileged shared libraries to preload into each backend.\n lock_timeout | 0\n | Sets the maximum allowed duration of any wait for a lock.\n log_autovacuum_min_duration | -1\n | Sets the minimum execution time above which autovacuum actions\nwill be logged.\n log_checkpoints | off\n | Logs each checkpoint.\n log_connections | off\n | Logs each successful connection.\n log_destination | stderr\n | Sets the destination for server log output.\n log_directory | log\n | Sets the destination directory for log files.\n log_disconnections | off\n | Logs end of a session, including duration.\n log_duration | off\n | Logs the duration of each completed SQL statement.\n log_error_verbosity | default\n | Sets the verbosity of logged messages.\n log_executor_stats | off\n | Writes executor performance statistics to the server log.\n log_file_mode | 0600\n | Sets the file permissions for log files.\n log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n | Sets the file name pattern for log files.\n log_hostname | off\n | Logs the host name in the connection logs.\n log_line_prefix | %m [%p]\n | Controls information prefixed to each log line.\n log_lock_waits | off\n | Logs long lock waits.\n log_min_duration_statement | -1\n | Sets the minimum execution time above which statements will be\nlogged.\n log_min_error_statement | error\n | Causes all statements generating error at or above this level to\nbe logged.\n log_min_messages | warning\n | Sets the message levels that are logged.\n log_parser_stats | off\n | Writes parser performance statistics to the server log.\n log_planner_stats | off\n | Writes planner performance statistics to the server log.\n log_replication_commands | off\n | Logs each replication command.\n log_rotation_age | 1d\n | Automatic log file rotation will occur after N minutes.\n log_rotation_size | 10MB\n | Automatic log file rotation will occur after N kilobytes.\n log_statement | none\n | Sets the type of statements logged.\n log_statement_stats | off\n | Writes cumulative performance statistics to the server log.\n log_temp_files | -1\n | Log the use of temporary files larger than this number of\nkilobytes.\n log_timezone | UTC\n | Sets the time zone to use in log messages.\n log_transaction_sample_rate | 0\n | Set the fraction of transactions to log for new transactions.\n log_truncate_on_rotation | off\n | Truncate existing log files of same name during log rotation.\n logging_collector | 
off\n | Start a subprocess to capture stderr output and/or csvlogs into\nlog files.\n maintenance_work_mem | 64MB\n | Sets the maximum memory to be used for maintenance operations.\n max_connections | 100\n | Sets the maximum number of concurrent connections.\n max_files_per_process | 1000\n | Sets the maximum number of simultaneously open files for each\nserver process.\n max_function_args | 100\n | Shows the maximum number of function arguments.\n max_identifier_length | 63\n | Shows the maximum identifier length.\n max_index_keys | 32\n | Shows the maximum number of index keys.\n max_locks_per_transaction | 64\n | Sets the maximum number of locks per transaction.\n max_logical_replication_workers | 4\n | Maximum number of logical replication worker processes.\n max_parallel_maintenance_workers | 2\n | Sets the maximum number of parallel processes per maintenance\noperation.\n max_parallel_workers | 8\n | Sets the maximum number of parallel workers that can be active at\none time.\n max_parallel_workers_per_gather | 2\n | Sets the maximum number of parallel processes per executor node.\n max_pred_locks_per_page | 2\n | Sets the maximum number of predicate-locked tuples per page.\n max_pred_locks_per_relation | -2\n | Sets the maximum number of predicate-locked pages and tuples per\nrelation.\n max_pred_locks_per_transaction | 64\n | Sets the maximum number of predicate locks per transaction.\n max_prepared_transactions | 0\n | Sets the maximum number of simultaneously prepared transactions.\n max_replication_slots | 10\n | Sets the maximum number of simultaneously defined replication\nslots.\n max_stack_depth | 2MB\n | Sets the maximum stack depth, in kilobytes.\n max_standby_archive_delay | 30s\n | Sets the maximum delay before canceling queries when a hot standby\nserver is processing archived WAL data.\n max_standby_streaming_delay | 30s\n | Sets the maximum delay before canceling queries when a hot standby\nserver is processing streamed WAL data.\n max_sync_workers_per_subscription | 2\n | Maximum number of table synchronization workers per subscription.\n max_wal_senders | 10\n | Sets the maximum number of simultaneously running WAL sender\nprocesses.\n max_wal_size | 1GB\n | Sets the WAL size that triggers a checkpoint.\n max_worker_processes | 8\n | Maximum number of concurrent worker processes.\n min_parallel_index_scan_size | 512kB\n | Sets the minimum amount of index data for a parallel scan.\n min_parallel_table_scan_size | 8MB\n | Sets the minimum amount of table data for a parallel scan.\n min_wal_size | 80MB\n | Sets the minimum size to shrink the WAL to.\n old_snapshot_threshold | -1\n | Time before a snapshot is too old to read pages changed after the\nsnapshot was taken.\n operator_precedence_warning | off\n | Emit a warning for constructs that changed meaning since\nPostgreSQL 9.4.\n parallel_leader_participation | on\n | Controls whether Gather and Gather Merge also run subplans.\n parallel_setup_cost | 1000\n | Sets the planner's estimate of the cost of starting up worker\nprocesses for parallel query.\n parallel_tuple_cost | 0.1\n | Sets the planner's estimate of the cost of passing each tuple\n(row) from worker to master backend.\n password_encryption | md5\n | Encrypt passwords.\n plan_cache_mode | auto\n | Controls the planner's selection of custom or generic plan.\n port | 5432\n | Sets the TCP port the server listens on.\n post_auth_delay | 0\n | Waits N seconds on connection startup after authentication.\n pre_auth_delay | 0\n | Waits N seconds on connection 
startup before authentication.\n primary_conninfo |\n | Sets the connection string to be used to connect to the sending\nserver.\n primary_slot_name |\n | Sets the name of the replication slot to use on the sending server.\n promote_trigger_file |\n | Specifies a file name whose presence ends recovery in the standby.\n quote_all_identifiers | off\n | When generating SQL fragments, quote all identifiers.\n random_page_cost | 4\n | Sets the planner's estimate of the cost of a nonsequentially\nfetched disk page.\n recovery_end_command |\n | Sets the shell command that will be executed once at the end of\nrecovery.\n recovery_min_apply_delay | 0\n | Sets the minimum delay for applying changes during recovery.\n recovery_target |\n | Set to \"immediate\" to end recovery as soon as a consistent state\nis reached.\n recovery_target_action | pause\n | Sets the action to perform upon reaching the recovery target.\n recovery_target_inclusive | on\n | Sets whether to include or exclude transaction with recovery\ntarget.\n recovery_target_lsn |\n | Sets the LSN of the write-ahead log location up to which recovery\nwill proceed.\n recovery_target_name |\n | Sets the named restore point up to which recovery will proceed.\n recovery_target_time |\n | Sets the time stamp up to which recovery will proceed.\n recovery_target_timeline | latest\n | Specifies the timeline to recover into.\n recovery_target_xid |\n | Sets the transaction ID up to which recovery will proceed.\n restart_after_crash | on\n | Reinitialize server after backend crash.\n restore_command |\n | Sets the shell command that will retrieve an archived WAL file.\n row_security | on\n | Enable row security.\n search_path | \"$user\", public\n | Sets the schema search order for names that are not\nschema-qualified.\n segment_size | 1GB\n | Shows the number of pages per disk file.\n seq_page_cost | 1\n | Sets the planner's estimate of the cost of a sequentially fetched\ndisk page.\n server_encoding | UTF8\n | Sets the server (database) character set encoding.\n server_version | 12.2\n | Shows the server version.\n server_version_num | 120002\n | Shows the server version as an integer.\n session_preload_libraries |\n | Lists shared libraries to preload into each backend.\n session_replication_role | origin\n | Sets the session's behavior for triggers and rewrite rules.\n shared_buffers | 12GB\n | Sets the number of shared memory buffers used by the server.\n shared_memory_type | mmap\n | Selects the shared memory implementation used for the main shared\nmemory region.\n shared_preload_libraries |\n | Lists shared libraries to preload into server.\n ssl | off\n | Enables SSL connections.\n ssl_ca_file |\n | Location of the SSL certificate authority file.\n ssl_cert_file | server.crt\n | Location of the SSL server certificate file.\n ssl_ciphers | HIGH:MEDIUM:+3DES:!aNULL\n | Sets the list of allowed SSL ciphers.\n ssl_crl_file |\n | Location of the SSL certificate revocation list file.\n ssl_dh_params_file |\n | Location of the SSL DH parameters file.\n ssl_ecdh_curve | prime256v1\n | Sets the curve to use for ECDH.\n ssl_key_file | server.key\n | Location of the SSL server private key file.\n ssl_library | OpenSSL\n | Name of the SSL library.\n ssl_max_protocol_version |\n | Sets the maximum SSL/TLS protocol version to use.\n ssl_min_protocol_version | TLSv1\n | Sets the minimum SSL/TLS protocol version to use.\n ssl_passphrase_command |\n | Command to obtain passphrases for SSL.\n ssl_passphrase_command_supports_reload | off\n | Also use 
ssl_passphrase_command during server reload.\n ssl_prefer_server_ciphers | on\n | Give priority to server ciphersuite order.\n standard_conforming_strings | on\n | Causes '...' strings to treat backslashes literally.\n statement_timeout | 0\n | Sets the maximum allowed duration of any statement.\n stats_temp_directory | pg_stat_tmp\n | Writes temporary statistics files to the specified directory.\n superuser_reserved_connections | 3\n | Sets the number of connection slots reserved for superusers.\n synchronize_seqscans | on\n | Enable synchronized sequential scans.\n synchronous_commit | on\n | Sets the current transaction's synchronization level.\n synchronous_standby_names |\n | Number of synchronous standbys and list of names of potential\nsynchronous ones.\n syslog_facility | local0\n | Sets the syslog \"facility\" to be used when syslog enabled.\n syslog_ident | postgres\n | Sets the program name used to identify PostgreSQL messages in\nsyslog.\n syslog_sequence_numbers | on\n | Add sequence number to syslog messages to avoid duplicate\nsuppression.\n syslog_split_messages | on\n | Split messages sent to syslog by lines and to fit into 1024 bytes.\n tcp_keepalives_count | 9\n | Maximum number of TCP keepalive retransmits.\n tcp_keepalives_idle | 7200\n | Time between issuing TCP keepalives.\n tcp_keepalives_interval | 75\n | Time between TCP keepalive retransmits.\n tcp_user_timeout | 0\n | TCP user timeout.\n temp_buffers | 8MB\n | Sets the maximum number of temporary buffers used by each session.\n temp_file_limit | -1\n | Limits the total size of all temporary files used by each process.\n temp_tablespaces |\n | Sets the tablespace(s) to use for temporary tables and sort files.\n TimeZone | UTC\n | Sets the time zone for displaying and interpreting time stamps.\n timezone_abbreviations | Default\n | Selects a file of time zone abbreviations.\n trace_notify | off\n | Generates debugging output for LISTEN and NOTIFY.\n trace_recovery_messages | log\n | Enables logging of recovery-related debugging information.\n trace_sort | off\n | Emit information about resource usage in sorting.\n track_activities | on\n | Collects information about executing commands.\n track_activity_query_size | 1kB\n | Sets the size reserved for pg_stat_activity.query, in bytes.\n track_commit_timestamp | off\n | Collects transaction commit time.\n track_counts | on\n | Collects statistics on database activity.\n track_functions | none\n | Collects function-level statistics on database activity.\n track_io_timing | off\n | Collects timing statistics for database I/O activity.\n transaction_deferrable | off\n | Whether to defer a read-only serializable transaction until it can\nbe executed with no possible serialization failures.\n transaction_isolation | read committed\n | Sets the current transaction's isolation level.\n transaction_read_only | off\n | Sets the current transaction's read-only status.\n transform_null_equals | off\n | Treats \"expr=NULL\" as \"expr IS NULL\".\n unix_socket_directories | /var/run/postgresql\n | Sets the directories where Unix-domain sockets will be created.\n unix_socket_group |\n | Sets the owning group of the Unix-domain socket.\n unix_socket_permissions | 0777\n | Sets the access permissions of the Unix-domain socket.\n update_process_title | on\n | Updates the process title to show the active SQL command.\n vacuum_cleanup_index_scale_factor | 0.1\n | Number of tuple inserts prior to index cleanup as a fraction of\nreltuples.\n vacuum_cost_delay | 0\n | Vacuum cost delay in 
milliseconds.\n vacuum_cost_limit | 200\n | Vacuum cost amount available before napping.\n vacuum_cost_page_dirty | 20\n | Vacuum cost for a page dirtied by vacuum.\n vacuum_cost_page_hit | 1\n | Vacuum cost for a page found in the buffer cache.\n vacuum_cost_page_miss | 10\n | Vacuum cost for a page not found in the buffer cache.\n vacuum_defer_cleanup_age | 0\n | Number of transactions by which VACUUM and HOT cleanup should be\ndeferred, if any.\n vacuum_freeze_min_age | 50000000\n | Minimum age at which VACUUM should freeze a table row.\n vacuum_freeze_table_age | 150000000\n | Age at which VACUUM should scan whole table to freeze tuples.\n vacuum_multixact_freeze_min_age | 5000000\n | Minimum age at which VACUUM should freeze a MultiXactId in a table\nrow.\n vacuum_multixact_freeze_table_age | 150000000\n | Multixact age at which VACUUM should scan whole table to freeze\ntuples.\n wal_block_size | 8192\n | Shows the block size in the write ahead log.\n wal_buffers | 16MB\n | Sets the number of disk-page buffers in shared memory for WAL.\n wal_compression | off\n | Compresses full-page writes written in WAL file.\n wal_consistency_checking |\n | Sets the WAL resource managers for which WAL consistency checks\nare done.\n wal_init_zero | on\n | Writes zeroes to new WAL files before first use.\n wal_keep_segments | 0\n | Sets the number of WAL files held for standby servers.\n wal_level | replica\n | Set the level of information written to the WAL.\n wal_log_hints | off\n | Writes full pages to WAL when first modified after a checkpoint,\neven for a non-critical modifications.\n wal_receiver_status_interval | 10s\n | Sets the maximum interval between WAL receiver status reports to\nthe sending server.\n wal_receiver_timeout | 1min\n | Sets the maximum wait time to receive data from the sending\nserver.\n wal_recycle | on\n | Recycles WAL files by renaming them.\n wal_retrieve_retry_interval | 5s\n | Sets the time to wait before retrying to retrieve WAL after a\nfailed attempt.\n wal_segment_size | 16MB\n | Shows the size of write ahead log segments.\n wal_sender_timeout | 1min\n | Sets the maximum time to wait for WAL replication.\n wal_sync_method | fdatasync\n | Selects the method used for forcing WAL updates to disk.\n wal_writer_delay | 200ms\n | Time between WAL flushes performed in the WAL writer.\n wal_writer_flush_after | 1MB\n | Amount of WAL written out by WAL writer that triggers a flush.\n work_mem | 4MB\n | Sets the maximum memory to be used for query workspaces.\n xmlbinary | base64\n | Sets how binary values are to be encoded in XML.\n xmloption | content\n | Sets whether XML data in implicit parsing and serialization\noperations is to be considered as documents or content fragments.\n zero_damaged_pages | off\n | Continues processing past damaged page headers.\n(314 rows)\n```\n\ntable statistics\n\n```\nexperiments=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x)\nfrac_MCV, tablename, attname, inherited, null_frac, n_distinct,\narray_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1)\nn_hist, correlation FROM pg_stats WHERE tablename ilike 'jtemp1c37l3b_%'\nORDER BY tablename, 1 DESC;\n frac_mcv | tablename | attname\n | inherited | null_frac | n_distinct | n_mcv | n_hist | correlation\n-----------+------------------------------------------------+----------------------+-----------+-----------+------------+-------+--------+-------------\n | jtemp1c37l3b_baseline_windows_after_inclusion |\ndrug_amount_units | f | 1 | 0 | | |\n | 
jtemp1c37l3b_baseline_windows_after_inclusion |\ndrug_days_supply | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion | drug_name\n | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion | window_id\n | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion | uuid\n | f | 0 | -1 | | 101 | 1\n | jtemp1c37l3b_baseline_windows_after_inclusion | person_id\n | f | 0 | -1 | | 101 | 0.38026863\n | jtemp1c37l3b_baseline_windows_after_inclusion | criterion_id\n | f | 0 | -1 | | 101 | 0.38026673\n | jtemp1c37l3b_baseline_windows_after_inclusion | drug_quantity\n | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion | source_value\n | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion |\nsource_vocabulary_id | f | 1 | 0 | | |\n | jtemp1c37l3b_baseline_windows_after_inclusion | drug_amount\n | f | 1 | 0 | | |\n 1 | jtemp1c37l3b_baseline_windows_after_inclusion |\ncriterion_table | f | 0 | 1 | 1 |\n | 1\n 1 | jtemp1c37l3b_baseline_windows_after_inclusion |\ncriterion_domain | f | 0 | 1 | 1 |\n | 1\n 1 | jtemp1c37l3b_baseline_windows_after_inclusion | end_date\n | f | 0 | 1 | 1 | | 1\n 0.2788666 | jtemp1c37l3b_baseline_windows_after_inclusion | start_date\n | f | 0 | 4729 | 100 | 101 | 0.16305907\n | jtemp1c37l3b_baseline_windows_with_collections | person_id\n | f | 0 | -1 | | 101 | 1\n | jtemp1c37l3b_baseline_windows_with_collections | uuid\n | f | 0 | -1 | | 101 | 0.37703142\n(17 rows)\n```\n\ntable schemas\n\n```\n--\n-- PostgreSQL database dump\n--\n\n-- Dumped from database version 12.2\n-- Dumped by pg_dump version 12.2\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET xmloption = content;\nSET client_min_messages = warning;\nSET row_security = off;\n\n--\n-- Name: jigsaw_temp; Type: SCHEMA; Schema: -; Owner: -\n--\n\nCREATE SCHEMA jigsaw_temp;\n\n\nSET default_tablespace = '';\n\nSET default_table_access_method = heap;\n\n--\n-- Name: jtemp1c37l3b_baseline_windows_after_inclusion; Type: TABLE;\nSchema: jigsaw_temp; Owner: -\n--\n\nCREATE TABLE jigsaw_temp.jtemp1c37l3b_baseline_windows_after_inclusion (\n uuid text,\n person_id bigint,\n criterion_id bigint,\n criterion_table text,\n criterion_domain text,\n start_date date,\n end_date date,\n source_value text,\n source_vocabulary_id text,\n drug_amount double precision,\n drug_amount_units text,\n drug_days_supply integer,\n drug_name text,\n drug_quantity bigint,\n window_id bigint\n);\n\n\n--\n-- Name: jtemp1c37l3b_baseline_windows_with_collections; Type: TABLE;\nSchema: jigsaw_temp; Owner: -\n--\n\nCREATE TABLE jigsaw_temp.jtemp1c37l3b_baseline_windows_with_collections (\n person_id bigint,\n uuid text\n);\n\n\n--\n-- PostgreSQL database dump complete\n--\n\n```\n\nExplains\n\n<a href=\"https://explain.depesz.com/s/HGDc\">HGDc :\nversion_9_6_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/f31D\">f31D :\nversion_12_2_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/UUPA\">UUPA :\nversion_12_2_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/P9DF\">P9DF 
:\nversion_12_2_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/YIU8\">YIU8 :\nversion_12_2_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/4woa\">4woa :\nversion_12_2_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/VC2b\">VC2b :\nversion_12_2_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/mQeT4\">mQeT4 :\nversion_12_2_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/BPTA\">BPTA :\nversion_12_2_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/K5Af\">K5Af :\nversion_12_2_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/kFhX\">kFhX :\nversion_12_2_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/6tNy\">6tNy :\nversion_12_2_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/OXDs\">OXDs :\nversion_12_2_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/jU6F\">jU6F :\nversion_12_2_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/layi\">layi :\nversion_12_2_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/m98L\">m98L :\nversion_12_2_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/iLSa\">iLSa :\nversion_12_2_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/8IK7\">8IK7 :\nversion_12_2_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/cHr1V\">cHr1V :\nversion_12_2_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/W5fF\">W5fF :\nversion_12_2_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/SQ9U\">SQ9U :\nversion_12_2_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/W2Qy\">W2Qy :\nversion_12_2_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/QNuX\">QNuX :\nversion_12_2_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a 
href=\"https://explain.depesz.com/s/1yOO\">1yOO :\nversion_12_2_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/B3TY\">B3TY :\nversion_12_2_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/kZ8d\">kZ8d :\nversion_9_6_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/Axup\">Axup :\nversion_9_6_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/x2J7\">x2J7 :\nversion_9_6_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/OX6f\">OX6f :\nversion_9_6_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/k3He\">k3He :\nversion_9_6_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/BBhy\">BBhy :\nversion_9_6_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/5yaz\">5yaz :\nversion_9_6_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/7yiv\">7yiv :\nversion_9_6_work_mem_4MB_random_page_cost_2_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/E1cp\">E1cp :\nversion_9_6_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/nlQp\">nlQp :\nversion_9_6_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/b76Y\">b76Y :\nversion_9_6_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/V07e\">V07e :\nversion_9_6_work_mem_4MB_random_page_cost_1_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/EHHN\">EHHN :\nversion_9_6_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/w4dV\">w4dV :\nversion_9_6_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/TnAd\">TnAd :\nversion_9_6_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/Dh0pU\">Dh0pU :\nversion_9_6_work_mem_4GB_random_page_cost_4_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/jYZE\">jYZE :\nversion_9_6_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/9pex\">9pex :\nversion_9_6_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_nulls_sql\n| 
explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/feMn\">feMn :\nversion_9_6_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_deleted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/BtSV\">BtSV :\nversion_9_6_work_mem_4GB_random_page_cost_2_effective_cache_size_36GB_query_file_casted_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/rtxW\">rtxW :\nversion_9_6_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/ltmx\">ltmx :\nversion_9_6_work_mem_4GB_random_page_cost_1_effective_cache_size_36GB_query_file_nulls_sql\n| explain.depesz.com</a><br/>\n<a href=\"https://explain.depesz.com/s/Xfca\">Xfca :\nversion_9_6_work_mem_4MB_random_page_cost_4_effective_cache_size_36GB_query_file_present_sql\n| explain.depesz.com</a><br/>\n\nExample of quick query:\n\n```\nSET work_mem = '4GB';\nSET random_page_cost = '1';\nSET effective_cache_size = '36GB';\nEXPLAIN (ANALYZE,BUFFERS,SETTINGS) SELECT\n *\n FROM (\n SELECT\n \"l\".\"person_id\" AS \"person_id\",\n \"l\".\"criterion_id\" AS \"criterion_id\",\n \"l\".\"criterion_table\" AS \"criterion_table\",\n \"l\".\"criterion_domain\" AS \"criterion_domain\",\n \"l\".\"start_date\" AS \"start_date\",\n \"l\".\"end_date\" AS \"end_date\",\n \"l\".\"source_value\" AS \"source_value\",\n \"l\".\"source_vocabulary_id\" AS \"source_vocabulary_id\",\n \"l\".\"drug_amount\" AS \"drug_amount\",\n \"l\".\"drug_amount_units\" AS \"drug_amount_units\",\n \"l\".\"drug_days_supply\" AS \"drug_days_supply\",\n \"l\".\"drug_name\" AS \"drug_name\",\n \"l\".\"drug_quantity\" AS \"drug_quantity\",\n \"l\".\"uuid\" AS \"uuid\",\n \"l\".\"window_id\" AS \"window_id\"\n FROM (\n SELECT\n \"l\".\"person_id\" AS \"person_id\",\n \"l\".\"criterion_id\" AS \"criterion_id\",\n \"l\".\"criterion_table\" AS \"criterion_table\",\n \"l\".\"criterion_domain\" AS \"criterion_domain\",\n \"l\".\"start_date\" AS \"start_date\",\n \"l\".\"end_date\" AS \"end_date\",\n \"l\".\"source_value\" AS \"source_value\",\n \"l\".\"source_vocabulary_id\" AS\n\"source_vocabulary_id\",\n \"l\".\"drug_amount\" AS \"drug_amount\",\n \"l\".\"drug_amount_units\" AS \"drug_amount_units\",\n \"l\".\"drug_days_supply\" AS \"drug_days_supply\",\n \"l\".\"drug_name\" AS \"drug_name\",\n \"l\".\"drug_quantity\" AS \"drug_quantity\",\n \"l\".\"uuid\" AS \"uuid\",\n \"l\".\"window_id\" AS \"window_id\"\n FROM (\n SELECT\n \"person_id\" AS \"person_id\",\n \"criterion_id\" AS \"criterion_id\",\n \"criterion_table\" AS \"criterion_table\",\n \"criterion_domain\" AS \"criterion_domain\",\n \"start_date\" AS \"start_date\",\n \"end_date\" AS \"end_date\",\n \"source_value\" AS \"source_value\",\n \"source_vocabulary_id\" AS\n\"source_vocabulary_id\",\n \"drug_amount\" AS \"drug_amount\",\n \"drug_amount_units\" AS \"drug_amount_units\",\n \"drug_days_supply\" AS \"drug_days_supply\",\n \"drug_name\" AS \"drug_name\",\n \"drug_quantity\" AS \"drug_quantity\",\n \"uuid\" AS \"uuid\",\n \"window_id\" AS \"window_id\"\n FROM\n\n\"jigsaw_temp\".\"jtemp1c37l3b_baseline_windows_after_inclusion\") AS \"l\") AS\n\"l\"\n WHERE (EXISTS (\n SELECT\n 1\n FROM (\n SELECT\n \"r\".\"uuid\" AS \"uuid\"\n FROM (\n SELECT\n \"uuid\" AS \"uuid\",\n CAST(NULL AS float) AS \"drug_amount\",\n CAST(NULL AS text) AS\n\"drug_amount_units\",\n CAST(NULL AS bigint) AS\n\"drug_days_supply\",\n CAST(NULL AS text) AS \"drug_name\",\n CAST(NULL AS float) AS 
\"drug_quantity\",\n CAST(NULL AS integer) AS \"window_id\",\n \"person_id\" AS \"person_id\",\n CAST(NULL as bigint) AS \"criterion_id\",\n CAST(NULL as text) AS \"criterion_table\",\n CAST(NULL as date) AS \"start_date\",\n CAST(NULL as date) AS \"end_date\"\n FROM\n\n\"jigsaw_temp\".\"jtemp1c37l3b_baseline_windows_with_collections\") AS \"r\"\n GROUP BY \"r\".\"uuid\") AS \"r\"\n\n WHERE (\"l\".\"uuid\" = \"r\".\"uuid\")))) AS \"match_2\"\n```\n\nQuick EXPLAIN:\n\n```\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=7686.74..23252.46 rows=138972 width=265) (actual\ntime=147.773..407.372 rows=138972 loops=1)\n Hash Cond: (jtemp1c37l3b_baseline_windows_after_inclusion.uuid =\njtemp1c37l3b_baseline_windows_with_collections.uuid)\n Buffers: shared hit=10620\n -> Seq Scan on jtemp1c37l3b_baseline_windows_after_inclusion\n (cost=0.00..14239.44 rows=505244 width=265) (actual time=0.020..59.875\nrows=505244 loops=1)\n Buffers: shared hit=9187\n -> Hash (cost=5949.59..5949.59 rows=138972 width=46) (actual\ntime=146.796..146.797 rows=138972 loops=1)\n Buckets: 262144 Batches: 1 Memory Usage: 12711kB\n Buffers: shared hit=1433\n -> HashAggregate (cost=3170.15..4559.87 rows=138972 width=46)\n(actual time=74.334..103.543 rows=138972 loops=1)\n Group Key:\njtemp1c37l3b_baseline_windows_with_collections.uuid\n Buffers: shared hit=1433\n -> Seq Scan on\njtemp1c37l3b_baseline_windows_with_collections (cost=0.00..2822.72\nrows=138972 width=46) (actual time=0.011..16.294 rows=138972 loops=1)\n Buffers: shared hit=1433\n Settings: effective_cache_size = '36GB', random_page_cost = '1', work_mem\n= '4GB'\n Planning Time: 0.597 ms\n Execution Time: 418.440 ms\n(16 rows)\n```\n\nExample of Slow Query:\n\n```\nSET work_mem = '4GB';\nSET random_page_cost = '1';\nSET effective_cache_size = '36GB';\nEXPLAIN (ANALYZE,BUFFERS,SETTINGS) SELECT\n *\n FROM (\n SELECT\n \"l\".\"person_id\" AS \"person_id\",\n \"l\".\"criterion_id\" AS \"criterion_id\",\n \"l\".\"criterion_table\" AS \"criterion_table\",\n \"l\".\"criterion_domain\" AS \"criterion_domain\",\n \"l\".\"start_date\" AS \"start_date\",\n \"l\".\"end_date\" AS \"end_date\",\n \"l\".\"source_value\" AS \"source_value\",\n \"l\".\"source_vocabulary_id\" AS \"source_vocabulary_id\",\n \"l\".\"drug_amount\" AS \"drug_amount\",\n \"l\".\"drug_amount_units\" AS \"drug_amount_units\",\n \"l\".\"drug_days_supply\" AS \"drug_days_supply\",\n \"l\".\"drug_name\" AS \"drug_name\",\n \"l\".\"drug_quantity\" AS \"drug_quantity\",\n \"l\".\"uuid\" AS \"uuid\",\n \"l\".\"window_id\" AS \"window_id\"\n FROM (\n SELECT\n \"l\".\"person_id\" AS \"person_id\",\n \"l\".\"criterion_id\" AS \"criterion_id\",\n \"l\".\"criterion_table\" AS \"criterion_table\",\n \"l\".\"criterion_domain\" AS \"criterion_domain\",\n \"l\".\"start_date\" AS \"start_date\",\n \"l\".\"end_date\" AS \"end_date\",\n \"l\".\"source_value\" AS \"source_value\",\n \"l\".\"source_vocabulary_id\" AS\n\"source_vocabulary_id\",\n \"l\".\"drug_amount\" AS \"drug_amount\",\n \"l\".\"drug_amount_units\" AS \"drug_amount_units\",\n \"l\".\"drug_days_supply\" AS \"drug_days_supply\",\n \"l\".\"drug_name\" AS \"drug_name\",\n \"l\".\"drug_quantity\" AS \"drug_quantity\",\n \"l\".\"uuid\" AS \"uuid\",\n \"l\".\"window_id\" AS \"window_id\"\n FROM (\n SELECT\n \"person_id\" AS \"person_id\",\n \"criterion_id\" AS \"criterion_id\",\n \"criterion_table\" AS 
\"criterion_table\",\n \"criterion_domain\" AS \"criterion_domain\",\n \"start_date\" AS \"start_date\",\n \"end_date\" AS \"end_date\",\n \"source_value\" AS \"source_value\",\n \"source_vocabulary_id\" AS\n\"source_vocabulary_id\",\n \"drug_amount\" AS \"drug_amount\",\n \"drug_amount_units\" AS \"drug_amount_units\",\n \"drug_days_supply\" AS \"drug_days_supply\",\n \"drug_name\" AS \"drug_name\",\n \"drug_quantity\" AS \"drug_quantity\",\n \"uuid\" AS \"uuid\",\n \"window_id\" AS \"window_id\"\n FROM\n\n\"jigsaw_temp\".\"jtemp1c37l3b_baseline_windows_after_inclusion\") AS \"l\") AS\n\"l\"\n WHERE (EXISTS (\n SELECT\n 1\n FROM (\n SELECT\n \"r\".\"uuid\" AS \"uuid\"\n FROM (\n SELECT\n \"uuid\" AS \"uuid\",\n CAST(NULL AS float) AS \"drug_amount\",\n CAST(NULL AS text) AS\n\"drug_amount_units\",\n CAST(NULL AS bigint) AS\n\"drug_days_supply\",\n CAST(NULL AS text) AS \"drug_name\",\n CAST(NULL AS float) AS \"drug_quantity\",\n CAST(NULL AS integer) AS \"window_id\",\n \"person_id\" AS \"person_id\",\n \"criterion_id\" AS \"criterion_id\",\n \"criterion_table\" AS \"criterion_table\",\n \"criterion_domain\" AS\n\"criterion_domain\",\n \"start_date\" AS \"start_date\",\n \"end_date\" AS \"end_date\"\n FROM\n\n\"jigsaw_temp\".\"jtemp1c37l3b_baseline_windows_with_collections\") AS \"r\"\n GROUP BY \"r\".\"uuid\") AS \"r\"\n\n WHERE (\"l\".\"uuid\" = \"r\".\"uuid\")))) AS \"match_2\"\n\n```\n\nSlow Query EXPLAIN\n\n```\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on jtemp1c37l3b_baseline_windows_after_inclusion\n (cost=0.00..1601719821.59 rows=252622 width=265) (actual\ntime=721.931..5424987.346 rows=138972 loops=1)\n Filter: (SubPlan 1)\n Rows Removed by Filter: 366272\n Buffers: shared hit=624493395\n SubPlan 1\n -> Subquery Scan on r (cost=0.00..3170.16 rows=1 width=0) (actual\ntime=10.731..10.731 rows=0 loops=505244)\n Buffers: shared hit=624484208\n -> Group (cost=0.00..3170.15 rows=1 width=46) (actual\ntime=10.730..10.730 rows=0 loops=505244)\n Group Key:\njtemp1c37l3b_baseline_windows_with_collections.uuid\n Buffers: shared hit=624484208\n -> Seq Scan on\njtemp1c37l3b_baseline_windows_with_collections (cost=0.00..3170.15 rows=1\nwidth=46) (actual time=10.724..10.724 rows=0 loops=505244)\n Filter:\n(jtemp1c37l3b_baseline_windows_after_inclusion.uuid = uuid)\n Rows Removed by Filter: 119859\n Buffers: shared hit=624484208\n Settings: effective_cache_size = '36GB', random_page_cost = '1', work_mem\n= '4GB'\n Planning Time: 0.872 ms\n JIT:\n Functions: 12\n Options: Inlining true, Optimization true, Expressions true, Deforming\ntrue\n Timing: Generation 3.531 ms, Inlining 113.077 ms, Optimization 408.591\nms, Emission 160.142 ms, Total 685.341 ms\n Execution Time: 5425095.601 ms\n(21 rows)\n```\n\nThanks,\nRyan", "msg_date": "Mon, 18 May 2020 13:42:09 -0500", "msg_from": "A Guy Named Ryan <[email protected]>", "msg_from_op": true, "msg_subject": "Execution time from >1s -> 80m+ when extra columns added in SELECT\n for sub-query" }, { "msg_contents": "Hi\n\nIt looks so in slow plan is some strange relations between subselects - the\nslow plan looks like plan for correlated subquery, and it should be slow.\n\nMinimally you miss a index on column\n\njtemp1c37l3b_baseline_windows_after_inclusion.uuid\n\nHiIt looks so in slow plan is some strange relations between subselects - the slow plan looks like plan for correlated subquery, and it should 
be slow.Minimally you miss a index on column jtemp1c37l3b_baseline_windows_after_inclusion.uuid", "msg_date": "Mon, 18 May 2020 21:17:59 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution time from >1s -> 80m+ when extra columns added in\n SELECT for sub-query" }, { "msg_contents": "Thanks for responding!\n\n\nOn May 18, 2020 at 12:18:37 PM, Pavel Stehule\n([email protected](mailto:[email protected])) wrote:\n\n> Hi\n>\n> It looks so in slow plan is some strange relations between subselects - the slow plan looks like plan for correlated subquery, and it should be slow.\n\nI'm not very saavy about query planning and have very little (if any)\nidea of what I'm talking about.\n\n\nOur dynamically generated SQL does generate a lot of \"strange\"\nsubselects. They're hard to avoid for what we're doing. I admit it\nmakes for some abnormal looking queries. That said, I think we've\nrevealed an interesting issue here.\n\n\nWhy would the planner switch plans so drastically given that all I'm\ndoing is including a few extra columns in the subselect, particularly\nwhen those columns are discarded by the super? parent? subselect\n\n\nI'd expect the query planner to know that only the uuid column of the\nsubselect is used and not bother to actually project any additional\ncolumns. Also, my impression about the EXISTS operator is that it\nwould not really need to pull any column values from disk except where\nthose values would be used to determine the conditions for the EXISTS.\n\n\nMaybe I'm misunderstanding how this query should be executed, but I\nwanted to bring it to your attention because it seems like, though the\nSQL is a bit crazy, it reveals some very inconsistent planning on\nPostgreSQL's part and might be something to look into.\n\n> Minimally you miss a index on column\n> jtemp1c37l3b_baseline_windows_after_inclusion.uuid\n\nThanks for pointing this out!\n\n\nThis is an intermediate table we generate as part of a much larger\nprocess and the table is only used once. I'm under the impression that\nthere's a trade-off between taking the time to first build an index\nthen run the query rather than just running the query that one time.\n\nIt's an interesting idea to build the indexes just to avoid poor query\nplans and it's something I'll keep in mind if we run into other\nqueries that trigger poor performance and we're unable to work around\nthem some other way.\n\n\nIf you'd like, I can slap on index on those tables and re-run a few\nqueries, but again, even if performance improves in this one test\ncase, I'm not sure it'd convince us to start adding indexes to some or\nall our tables as part of our process just to avoid this one bad plan\nin this one query. We've run 15-20 of our larger processes and only\nhit this situation in one query in one process. I imagine adding\nindexes across the board might be a heavy-handed solution to work\naround this issue.\n\n\n", "msg_date": "Mon, 18 May 2020 13:49:39 -0700", "msg_from": "A Guy Named Ryan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution time from >1s -> 80m+ when extra columns added in\n SELECT for sub-query" }, { "msg_contents": "A Guy Named Ryan <[email protected]> writes:\n> Why would the planner switch plans so drastically given that all I'm\n> doing is including a few extra columns in the subselect, particularly\n> when those columns are discarded by the super? parent? 
subselect\n\nThe problem is that the columns you're adding *don't belong to that\ntable*. Per your schema dump,\njtemp1c37l3b_baseline_windows_with_collections only contains the columns\nperson_id and uuid. So when you write\n\n SELECT\n \"uuid\" AS \"uuid\",\n CAST(NULL AS float) AS \"drug_amount\",\n CAST(NULL AS text) AS\n\"drug_amount_units\",\n CAST(NULL AS bigint) AS\n\"drug_days_supply\",\n CAST(NULL AS text) AS \"drug_name\",\n CAST(NULL AS float) AS \"drug_quantity\",\n CAST(NULL AS integer) AS \"window_id\",\n \"person_id\" AS \"person_id\",\n \"criterion_id\" AS \"criterion_id\",\n \"criterion_table\" AS \"criterion_table\",\n \"criterion_domain\" AS\n\"criterion_domain\",\n \"start_date\" AS \"start_date\",\n \"end_date\" AS \"end_date\"\n FROM\n\"jigsaw_temp\".\"jtemp1c37l3b_baseline_windows_with_collections\"\n\nthose are the only two columns that are \"legitimately\" part of that bottom\nsub-select, and the others are outer references to\njtemp1c37l3b_baseline_windows_after_inclusion. That's legal per SQL,\nbut it makes the EXISTS into a correlated sub-select, which is something\nwe can't turn into a semijoin.\n\nIndeed, the unreferenced columns do get thrown away later, but that\ndoesn't happen until well past the point where the join restructuring\ndecisions are made (and there are good reasons for that ordering of\noperations).\n\nBasically I'd write this off as \"broken SQL code generator\". If it\ndoesn't understand the difference between a local reference and an\nouter reference, you shouldn't be letting it near your database.\nThat sort of fundamental misunderstanding often leads to incorrect\nquery results, never mind whether the query is fast or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 May 2020 17:58:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution time from >1s -> 80m+ when extra columns added in\n SELECT for sub-query" } ]
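To make the shape of the problem Tom Lane describes concrete, here is a minimal, hypothetical reproduction. The table and column names below are invented for illustration only (they are not the tables from the report); the point is purely the difference between a sub-select that names only its own columns and one that silently picks up outer references:

```sql
-- Hypothetical minimal reproduction: table and column names are invented.
CREATE TABLE outer_tbl (uuid text, criterion_id bigint, start_date date);
CREATE TABLE inner_tbl (uuid text, person_id bigint);

-- Fast shape: every column named inside the EXISTS either belongs to
-- inner_tbl or is an explicit NULL cast, so the only outer reference is the
-- final l.uuid = r.uuid test and the EXISTS can be flattened into a semijoin
-- (the hash join over a HashAggregate seen in the reported fast plan).
EXPLAIN
SELECT *
FROM outer_tbl AS l
WHERE EXISTS (
    SELECT 1
    FROM (SELECT uuid
          FROM (SELECT uuid,
                       person_id,
                       CAST(NULL AS bigint) AS criterion_id,
                       CAST(NULL AS date)   AS start_date
                FROM inner_tbl) AS r0
          GROUP BY uuid) AS r
    WHERE l.uuid = r.uuid);

-- Slow shape: criterion_id and start_date are not columns of inner_tbl, so
-- they silently resolve as outer references to outer_tbl.  That makes the
-- whole EXISTS a correlated sub-select that cannot be turned into a semijoin,
-- so it is re-executed once per outer row (the SubPlan in the reported slow plan).
EXPLAIN
SELECT *
FROM outer_tbl AS l
WHERE EXISTS (
    SELECT 1
    FROM (SELECT uuid
          FROM (SELECT uuid,
                       person_id,
                       criterion_id,
                       start_date
                FROM inner_tbl) AS r0
          GROUP BY uuid) AS r
    WHERE l.uuid = r.uuid);
```

The practical fix, then, is in the SQL generator: inside the EXISTS it should emit only columns that really exist on the inner table (or explicit NULL casts), as the fast variant already does. An index on the inner table's uuid column, as Pavel suggests, would likely only soften the per-row cost of the correlated SubPlan rather than remove the per-row re-execution itself.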
[ { "msg_contents": "Hi folks,\n\nWe met unexpected PostgreSQL shutdown. After a little investigation we've\ndiscovered that problem is in OOM killer which kills our PostgreSQL.\nUnfortunately we can't find query on DB causing this problem. Log is as\nbelow:\n\nMay 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da,\norder=0, oom_score_adj=-1000\nMay 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0\nMay 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted\n3.10.0-1127.el7.x86_64 #1\nMay 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1\n01/01/2011\nMay 05 09:05:34 HOST kernel: Call Trace:\nMay 05 09:05:34 HOST kernel: [<ffffffffa097ff85>] dump_stack+0x19/0x1b\nMay 05 09:05:34 HOST kernel: [<ffffffffa097a8a3>] dump_header+0x90/0x229\nMay 05 09:05:34 HOST kernel: [<ffffffffa050da5b>] ?\ncred_has_capability+0x6b/0x120\nMay 05 09:05:34 HOST kernel: [<ffffffffa03c246e>]\noom_kill_process+0x25e/0x3f0\nMay 05 09:05:35 HOST kernel: [<ffffffffa0333a41>] ?\ncpuset_mems_allowed_intersects+0x21/0x30\nMay 05 09:05:40 HOST kernel: [<ffffffffa03c1ecd>] ?\noom_unkillable_task+0xcd/0x120\nMay 05 09:05:42 HOST kernel: [<ffffffffa03c1f76>] ?\nfind_lock_task_mm+0x56/0xc0\nMay 05 09:05:42 HOST kernel: [<ffffffffa03c2cc6>] out_of_memory+0x4b6/0x4f0\nMay 05 09:05:42 HOST kernel: [<ffffffffa097b3c0>]\n__alloc_pages_slowpath+0x5db/0x729\nMay 05 09:05:42 HOST kernel: [<ffffffffa03c9146>]\n__alloc_pages_nodemask+0x436/0x450\nMay 05 09:05:42 HOST kernel: [<ffffffffa0418e18>]\nalloc_pages_current+0x98/0x110\nMay 05 09:05:42 HOST kernel: [<ffffffffa03be377>]\n__page_cache_alloc+0x97/0xb0\nMay 05 09:05:42 HOST kernel: [<ffffffffa03c0f30>] filemap_fault+0x270/0x420\nMay 05 09:05:42 HOST kernel: [<ffffffffc03c07d6>]\next4_filemap_fault+0x36/0x50 [ext4]\nMay 05 09:05:42 HOST kernel: [<ffffffffa03edeea>]\n__do_fault.isra.61+0x8a/0x100\nMay 05 09:05:42 HOST kernel: [<ffffffffa03ee49c>]\ndo_read_fault.isra.63+0x4c/0x1b0\nMay 05 09:05:42 HOST kernel: [<ffffffffa03f5d00>]\nhandle_mm_fault+0xa20/0xfb0\nMay 05 09:05:42 HOST kernel: [<ffffffffa098d653>]\n__do_page_fault+0x213/0x500\nMay 05 09:05:42 HOST kernel: [<ffffffffa098da26>]\ntrace_do_page_fault+0x56/0x150\nMay 05 09:05:42 HOST kernel: [<ffffffffa098cfa2>]\ndo_async_page_fault+0x22/0xf0\nMay 05 09:05:42 HOST kernel: [<ffffffffa09897a8>]\nasync_page_fault+0x28/0x30\nMay 05 09:05:42 HOST kernel: Mem-Info:\nMay 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069\nisolated_anon:0\n active_file:653\ninactive_file:412 isolated_file:75\n unevictable:0 dirty:0\nwriteback:0 unstable:0\n slab_reclaimable:120624\nslab_unreclaimable:14538\n mapped:814755 shmem:816586\npagetables:60496 bounce:0\n free:30218 free_pcp:562\nfree_cma:0\n\nCan You tell me how to find problematic query? Or how to \"pimp\"\nconfiguration to let db be alive and let us find problematic query?\n\n-- \n\nPozdrawiam\nPiotr Włodarczyk\n\nHi folks,We met unexpected PostgreSQL shutdown. After a little investigation we've discovered that problem is in OOM killer which kills our PostgreSQL. Unfortunately we can't find query on DB causing this problem. 
Log is as below:May 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000May 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted 3.10.0-1127.el7.x86_64 #1May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011May 05 09:05:34 HOST kernel: Call Trace:May 05 09:05:34 HOST kernel:  [<ffffffffa097ff85>] dump_stack+0x19/0x1bMay 05 09:05:34 HOST kernel:  [<ffffffffa097a8a3>] dump_header+0x90/0x229May 05 09:05:34 HOST kernel:  [<ffffffffa050da5b>] ? cred_has_capability+0x6b/0x120May 05 09:05:34 HOST kernel:  [<ffffffffa03c246e>] oom_kill_process+0x25e/0x3f0May 05 09:05:35 HOST kernel:  [<ffffffffa0333a41>] ? cpuset_mems_allowed_intersects+0x21/0x30May 05 09:05:40 HOST kernel:  [<ffffffffa03c1ecd>] ? oom_unkillable_task+0xcd/0x120May 05 09:05:42 HOST kernel:  [<ffffffffa03c1f76>] ? find_lock_task_mm+0x56/0xc0May 05 09:05:42 HOST kernel:  [<ffffffffa03c2cc6>] out_of_memory+0x4b6/0x4f0May 05 09:05:42 HOST kernel:  [<ffffffffa097b3c0>] __alloc_pages_slowpath+0x5db/0x729May 05 09:05:42 HOST kernel:  [<ffffffffa03c9146>] __alloc_pages_nodemask+0x436/0x450May 05 09:05:42 HOST kernel:  [<ffffffffa0418e18>] alloc_pages_current+0x98/0x110May 05 09:05:42 HOST kernel:  [<ffffffffa03be377>] __page_cache_alloc+0x97/0xb0May 05 09:05:42 HOST kernel:  [<ffffffffa03c0f30>] filemap_fault+0x270/0x420May 05 09:05:42 HOST kernel:  [<ffffffffc03c07d6>] ext4_filemap_fault+0x36/0x50 [ext4]May 05 09:05:42 HOST kernel:  [<ffffffffa03edeea>] __do_fault.isra.61+0x8a/0x100May 05 09:05:42 HOST kernel:  [<ffffffffa03ee49c>] do_read_fault.isra.63+0x4c/0x1b0May 05 09:05:42 HOST kernel:  [<ffffffffa03f5d00>] handle_mm_fault+0xa20/0xfb0May 05 09:05:42 HOST kernel:  [<ffffffffa098d653>] __do_page_fault+0x213/0x500May 05 09:05:42 HOST kernel:  [<ffffffffa098da26>] trace_do_page_fault+0x56/0x150May 05 09:05:42 HOST kernel:  [<ffffffffa098cfa2>] do_async_page_fault+0x22/0xf0May 05 09:05:42 HOST kernel:  [<ffffffffa09897a8>] async_page_fault+0x28/0x30May 05 09:05:42 HOST kernel: Mem-Info:May 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069 isolated_anon:0                                                active_file:653 inactive_file:412 isolated_file:75                                                unevictable:0 dirty:0 writeback:0 unstable:0                                                slab_reclaimable:120624 slab_unreclaimable:14538                                                mapped:814755 shmem:816586 pagetables:60496 bounce:0                                                free:30218 free_pcp:562 free_cma:0Can You tell me how to find problematic query? Or how to \"pimp\" configuration to let db be alive and let us find problematic query?-- PozdrawiamPiotr Włodarczyk", "msg_date": "Wed, 20 May 2020 09:30:52 +0200", "msg_from": "=?UTF-8?Q?Piotr_W=C5=82odarczyk?= <[email protected]>", "msg_from_op": true, "msg_subject": "OOM Killer kills PostgreSQL" }, { "msg_contents": "On Wed, 2020-05-20 at 09:30 +0200, Piotr Włodarczyk wrote:\n> We met unexpected PostgreSQL shutdown. After a little investigation\n> we've discovered that problem is in OOM killer which kills our PostgreSQL.\n> Unfortunately we can't find query on DB causing this problem. 
Log is as below:\n\nIs there nothing in the PostgreSQL log?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 20 May 2020 10:22:24 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM Killer kills PostgreSQL" }, { "msg_contents": "Nothing special. I'll check it agin after next dead\n\nOn Wed, May 20, 2020 at 10:22 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Wed, 2020-05-20 at 09:30 +0200, Piotr Włodarczyk wrote:\n> > We met unexpected PostgreSQL shutdown. After a little investigation\n> > we've discovered that problem is in OOM killer which kills our\n> PostgreSQL.\n> > Unfortunately we can't find query on DB causing this problem. Log is as\n> below:\n>\n> Is there nothing in the PostgreSQL log?\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\n-- \n\nPozdrawiam\nPiotr Włodarczyk\n\nNothing special. I'll check it agin after next deadOn Wed, May 20, 2020 at 10:22 AM Laurenz Albe <[email protected]> wrote:On Wed, 2020-05-20 at 09:30 +0200, Piotr Włodarczyk wrote:\n> We met unexpected PostgreSQL shutdown. After a little investigation\n> we've discovered that problem is in OOM killer which kills our PostgreSQL.\n> Unfortunately we can't find query on DB causing this problem. Log is as below:\n\nIs there nothing in the PostgreSQL log?\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n-- PozdrawiamPiotr Włodarczyk", "msg_date": "Wed, 20 May 2020 10:28:30 +0200", "msg_from": "=?UTF-8?Q?Piotr_W=C5=82odarczyk?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OOM Killer kills PostgreSQL" }, { "msg_contents": "Maybe your memory budget does not meet the RAM on the machine?\n\nThe problem is not in the query you are looking for, but in the settings you are using for Postgres.\n\nregards,\n\nfabio pardi\n\n\n\nOn 20/05/2020 09:30, Piotr Włodarczyk wrote:\n> Hi folks,\n>\n> We met unexpected PostgreSQL shutdown. After a little investigation we've discovered that problem is in OOM killer which kills our PostgreSQL. Unfortunately we can't find query on DB causing this problem. Log is as below:\n>\n> May 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000\n> May 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0\n> May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted 3.10.0-1127.el7.x86_64 #1\n> May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011\n> May 05 09:05:34 HOST kernel: Call Trace:\n> May 05 09:05:34 HOST kernel:  [<ffffffffa097ff85>] dump_stack+0x19/0x1b\n> May 05 09:05:34 HOST kernel:  [<ffffffffa097a8a3>] dump_header+0x90/0x229\n> May 05 09:05:34 HOST kernel:  [<ffffffffa050da5b>] ? cred_has_capability+0x6b/0x120\n> May 05 09:05:34 HOST kernel:  [<ffffffffa03c246e>] oom_kill_process+0x25e/0x3f0\n> May 05 09:05:35 HOST kernel:  [<ffffffffa0333a41>] ? cpuset_mems_allowed_intersects+0x21/0x30\n> May 05 09:05:40 HOST kernel:  [<ffffffffa03c1ecd>] ? oom_unkillable_task+0xcd/0x120\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03c1f76>] ? 
find_lock_task_mm+0x56/0xc0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03c2cc6>] out_of_memory+0x4b6/0x4f0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa097b3c0>] __alloc_pages_slowpath+0x5db/0x729\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03c9146>] __alloc_pages_nodemask+0x436/0x450\n> May 05 09:05:42 HOST kernel:  [<ffffffffa0418e18>] alloc_pages_current+0x98/0x110\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03be377>] __page_cache_alloc+0x97/0xb0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03c0f30>] filemap_fault+0x270/0x420\n> May 05 09:05:42 HOST kernel:  [<ffffffffc03c07d6>] ext4_filemap_fault+0x36/0x50 [ext4]\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03edeea>] __do_fault.isra.61+0x8a/0x100\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03ee49c>] do_read_fault.isra.63+0x4c/0x1b0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa03f5d00>] handle_mm_fault+0xa20/0xfb0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa098d653>] __do_page_fault+0x213/0x500\n> May 05 09:05:42 HOST kernel:  [<ffffffffa098da26>] trace_do_page_fault+0x56/0x150\n> May 05 09:05:42 HOST kernel:  [<ffffffffa098cfa2>] do_async_page_fault+0x22/0xf0\n> May 05 09:05:42 HOST kernel:  [<ffffffffa09897a8>] async_page_fault+0x28/0x30\n> May 05 09:05:42 HOST kernel: Mem-Info:\n> May 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069 isolated_anon:0\n>                                                 active_file:653 inactive_file:412 isolated_file:75\n>                                                 unevictable:0 dirty:0 writeback:0 unstable:0\n>                                                 slab_reclaimable:120624 slab_unreclaimable:14538\n>                                                 mapped:814755 shmem:816586 pagetables:60496 bounce:0\n>                                                 free:30218 free_pcp:562 free_cma:0\n>\n> Can You tell me how to find problematic query? Or how to \"pimp\" configuration to let db be alive and let us find problematic query?\n>\n> -- \n>\n> Pozdrawiam\n> Piotr Włodarczyk\n\n\n\n\n\n\n\nMaybe your memory budget does not meet\n the RAM on the machine?\n\n The problem is not in the query you are looking for, but in the\n settings you are using for Postgres. \n\n regards,\n\n fabio pardi\n\n\n\nOn 20/05/2020 09:30, Piotr Włodarczyk\n wrote:\n\n\n\nHi folks,\n \nWe met unexpected PostgreSQL shutdown. After a little\n investigation we've discovered that problem is in OOM killer\n which kills our PostgreSQL. Unfortunately we can't find\n query on DB causing this problem. 
Log is as below:\n\n\n\nMay 05 09:05:33 HOST kernel: postgres invoked\n oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000\n May 05 09:05:34 HOST kernel: postgres cpuset=/\n mems_allowed=0\n May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm:\n postgres Not tainted 3.10.0-1127.el7.x86_64 #1\n May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM,\n BIOS 0.5.1 01/01/2011\n May 05 09:05:34 HOST kernel: Call Trace:\n May 05 09:05:34 HOST kernel:  [<ffffffffa097ff85>]\n dump_stack+0x19/0x1b\n May 05 09:05:34 HOST kernel:  [<ffffffffa097a8a3>]\n dump_header+0x90/0x229\n May 05 09:05:34 HOST kernel:  [<ffffffffa050da5b>] ?\n cred_has_capability+0x6b/0x120\n May 05 09:05:34 HOST kernel:  [<ffffffffa03c246e>]\n oom_kill_process+0x25e/0x3f0\n May 05 09:05:35 HOST kernel:  [<ffffffffa0333a41>] ?\n cpuset_mems_allowed_intersects+0x21/0x30\n May 05 09:05:40 HOST kernel:  [<ffffffffa03c1ecd>] ?\n oom_unkillable_task+0xcd/0x120\n May 05 09:05:42 HOST kernel:  [<ffffffffa03c1f76>] ?\n find_lock_task_mm+0x56/0xc0\n May 05 09:05:42 HOST kernel:  [<ffffffffa03c2cc6>]\n out_of_memory+0x4b6/0x4f0\n May 05 09:05:42 HOST kernel:  [<ffffffffa097b3c0>]\n __alloc_pages_slowpath+0x5db/0x729\n May 05 09:05:42 HOST kernel:  [<ffffffffa03c9146>]\n __alloc_pages_nodemask+0x436/0x450\n May 05 09:05:42 HOST kernel:  [<ffffffffa0418e18>]\n alloc_pages_current+0x98/0x110\n May 05 09:05:42 HOST kernel:  [<ffffffffa03be377>]\n __page_cache_alloc+0x97/0xb0\n May 05 09:05:42 HOST kernel:  [<ffffffffa03c0f30>]\n filemap_fault+0x270/0x420\n May 05 09:05:42 HOST kernel:  [<ffffffffc03c07d6>]\n ext4_filemap_fault+0x36/0x50 [ext4]\n May 05 09:05:42 HOST kernel:  [<ffffffffa03edeea>]\n __do_fault.isra.61+0x8a/0x100\n May 05 09:05:42 HOST kernel:  [<ffffffffa03ee49c>]\n do_read_fault.isra.63+0x4c/0x1b0\n May 05 09:05:42 HOST kernel:  [<ffffffffa03f5d00>]\n handle_mm_fault+0xa20/0xfb0\n May 05 09:05:42 HOST kernel:  [<ffffffffa098d653>]\n __do_page_fault+0x213/0x500\n May 05 09:05:42 HOST kernel:  [<ffffffffa098da26>]\n trace_do_page_fault+0x56/0x150\n May 05 09:05:42 HOST kernel:  [<ffffffffa098cfa2>]\n do_async_page_fault+0x22/0xf0\n May 05 09:05:42 HOST kernel:  [<ffffffffa09897a8>]\n async_page_fault+0x28/0x30\n May 05 09:05:42 HOST kernel: Mem-Info:\n May 05 09:05:42 HOST kernel: active_anon:5382083\n inactive_anon:514069 isolated_anon:0\n                                                \n active_file:653 inactive_file:412 isolated_file:75\n                                                \n unevictable:0 dirty:0 writeback:0 unstable:0\n                                                \n slab_reclaimable:120624 slab_unreclaimable:14538\n                                                \n mapped:814755 shmem:816586 pagetables:60496 bounce:0\n                                                 free:30218\n free_pcp:562 free_cma:0\n\n\n\nCan You tell me how to find problematic query? Or how\n to \"pimp\" configuration to let db be alive and let us find\n problematic query?\n\n\n-- \n\nPozdrawiam\nPiotr Włodarczyk", "msg_date": "Wed, 20 May 2020 10:40:21 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM Killer kills PostgreSQL" }, { "msg_contents": "What postgres version ? 
What environment (RAM) and config ?\nhttps://wiki.postgresql.org/wiki/Server_Configuration\n\nI think you can probably find more info in dmesg/syslog ; probably a\nline saying \"OOM killed ...\" showing which PID and its vsz.\n\nAre you able to see some particular process continuously growing (like\nin top or ps) ?\n\nDo you have full query logs enabled to help determine which pid/query\nwas involved ?\nlog_statement=all log_min_messages=info log_checkpoints=on\nlog_lock_waits=on log_temp_files=0\n\n\n\nOn Wed, May 20, 2020 at 2:31 AM Piotr Włodarczyk\n<[email protected]> wrote:\n>\n> Hi folks,\n>\n> We met unexpected PostgreSQL shutdown. After a little investigation we've discovered that problem is in OOM killer which kills our PostgreSQL. Unfortunately we can't find query on DB causing this problem. Log is as below:\n>\n> May 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000\n> May 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0\n> May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted 3.10.0-1127.el7.x86_64 #1\n> May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011\n> May 05 09:05:34 HOST kernel: Call Trace:\n> May 05 09:05:34 HOST kernel: [<ffffffffa097ff85>] dump_stack+0x19/0x1b\n> May 05 09:05:34 HOST kernel: [<ffffffffa097a8a3>] dump_header+0x90/0x229\n> May 05 09:05:34 HOST kernel: [<ffffffffa050da5b>] ? cred_has_capability+0x6b/0x120\n> May 05 09:05:34 HOST kernel: [<ffffffffa03c246e>] oom_kill_process+0x25e/0x3f0\n> May 05 09:05:35 HOST kernel: [<ffffffffa0333a41>] ? cpuset_mems_allowed_intersects+0x21/0x30\n> May 05 09:05:40 HOST kernel: [<ffffffffa03c1ecd>] ? oom_unkillable_task+0xcd/0x120\n> May 05 09:05:42 HOST kernel: [<ffffffffa03c1f76>] ? find_lock_task_mm+0x56/0xc0\n> May 05 09:05:42 HOST kernel: [<ffffffffa03c2cc6>] out_of_memory+0x4b6/0x4f0\n> May 05 09:05:42 HOST kernel: [<ffffffffa097b3c0>] __alloc_pages_slowpath+0x5db/0x729\n> May 05 09:05:42 HOST kernel: [<ffffffffa03c9146>] __alloc_pages_nodemask+0x436/0x450\n> May 05 09:05:42 HOST kernel: [<ffffffffa0418e18>] alloc_pages_current+0x98/0x110\n> May 05 09:05:42 HOST kernel: [<ffffffffa03be377>] __page_cache_alloc+0x97/0xb0\n> May 05 09:05:42 HOST kernel: [<ffffffffa03c0f30>] filemap_fault+0x270/0x420\n> May 05 09:05:42 HOST kernel: [<ffffffffc03c07d6>] ext4_filemap_fault+0x36/0x50 [ext4]\n> May 05 09:05:42 HOST kernel: [<ffffffffa03edeea>] __do_fault.isra.61+0x8a/0x100\n> May 05 09:05:42 HOST kernel: [<ffffffffa03ee49c>] do_read_fault.isra.63+0x4c/0x1b0\n> May 05 09:05:42 HOST kernel: [<ffffffffa03f5d00>] handle_mm_fault+0xa20/0xfb0\n> May 05 09:05:42 HOST kernel: [<ffffffffa098d653>] __do_page_fault+0x213/0x500\n> May 05 09:05:42 HOST kernel: [<ffffffffa098da26>] trace_do_page_fault+0x56/0x150\n> May 05 09:05:42 HOST kernel: [<ffffffffa098cfa2>] do_async_page_fault+0x22/0xf0\n> May 05 09:05:42 HOST kernel: [<ffffffffa09897a8>] async_page_fault+0x28/0x30\n> May 05 09:05:42 HOST kernel: Mem-Info:\n> May 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069 isolated_anon:0\n> active_file:653 inactive_file:412 isolated_file:75\n> unevictable:0 dirty:0 writeback:0 unstable:0\n> slab_reclaimable:120624 slab_unreclaimable:14538\n> mapped:814755 shmem:816586 pagetables:60496 bounce:0\n> free:30218 free_pcp:562 free_cma:0\n>\n> Can You tell me how to find problematic query? 
Or how to \"pimp\" configuration to let db be alive and let us find problematic query?\n>\n> --\n>\n> Pozdrawiam\n> Piotr Włodarczyk\n\n\n", "msg_date": "Wed, 20 May 2020 03:46:38 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM Killer kills PostgreSQL" }, { "msg_contents": "Greetings,\n\n* Piotr Włodarczyk ([email protected]) wrote:\n> We met unexpected PostgreSQL shutdown. After a little investigation we've\n> discovered that problem is in OOM killer which kills our PostgreSQL.\n\nYou need to configure your system to not overcommit.\n\nRead up on overcommit_ratio and overcommit_memory Linux settings.\n\nThanks,\n\nStephen", "msg_date": "Wed, 20 May 2020 12:54:19 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM Killer kills PostgreSQL" } ]
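As a concrete illustration of the two follow-ups above (Justin's logging settings and Stephen's overcommit advice), here is a minimal sketch. The sysctl file name and the overcommit_ratio value are assumptions to be sized against the host's RAM and swap; the PostgreSQL parameters are the ones quoted in the thread and should only need a configuration reload.

    # /etc/sysctl.d/99-postgres-oom.conf   (assumed file name; apply with: sysctl --system)
    # Turn off memory overcommit so a runaway backend gets an allocation failure
    # (an "out of memory" ERROR plus a memory-context dump in the PostgreSQL log)
    # instead of waking the kernel OOM killer.
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 90     # assumed value: percent of RAM (plus swap) the kernel may commit

    # postgresql.conf: log enough detail to see which statement was running when memory spiked
    log_statement = 'all'
    log_min_messages = info
    log_checkpoints = on
    log_lock_waits = on
    log_temp_files = 0
    # reload with:  SELECT pg_reload_conf();

With overcommit disabled, the failing query is reported in the PostgreSQL log rather than only in dmesg, which makes the statement responsible much easier to identify.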
[ { "msg_contents": "Hi Team,\n\nThanks for your support.\n\nWe are using below environment:\n\nApplication :\nProgramming Language : JAVA\nGeoserver\n\nDatabase Stack:\nPostgreSQL : 9.5.15\nPostgis\n\nWe have 3 geoserver queries and are getting some performance issues after\nchanging the GeoServer queries.I have posted the queries and explain the\nplans of both the old and new queries.\n\nThe same type of issues found for 3 queries:\n1. Changed index scan to Bitmap scan.\n2. All New Queries, again condition checked.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nOld Queriy:\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nQuery No:1\n\n1. No issue while executing query.\n2. It is feteching: 38 rows only.\n\n===\n\nEXPLAIN ANALYZE SELECT\n\"underground_route_id\",\"ug_route_sub_type\",\"sw_uid22\",encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"),\n1.506687768824122E-5, true)),'base64') as \"the_geom\" FROM\n\"schema\".\"underground_route\" WHERE (\"the_geom\" && ST_GeomFromText('POLYGON\n((77.20637798309326 28.627887618687176, 77.20637798309326\n28.632784466413323, 77.21195697784424 28.632784466413323, 77.21195697784424\n28.627887618687176, 77.20637798309326 28.627887618687176))', 4326) AND\n((\"ug_route_sub_type\" = 'IP1-IRU-Intercity' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = 'IP1-IRU-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intracity'\nAND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'IP1-Own-Intercity' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity'\nAND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Clamping' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'None' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'On kerb' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Other' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Suspend' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'In Duct Chamber' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = '' AND \"ug_route_sub_type\" IS NOT NULL )\nOR \"ug_route_sub_type\" IS NULL OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\"\nIS NOT NULL AND \"ug_route_sub_type\" = 'Own-Intercity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND\n\"sw_uid22\" IS NOT NULL AND \"ug_route_sub_type\" = 'Own-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND\n\"sw_uid22\" IS NOT NULL AND \"ug_route_sub_type\" =\n'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL AND\n\"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' AND\n\"ug_route_sub_type\" IS NOT NULL )));\n\n Explan Plan:\n ============\n\n Index Scan using underground_route_the_geom_geo_idx on underground_route\n (cost=0.41..41.20 rows=7 
width=157) (actual time=0.158..1.010 rows=38\nloops=1)\n Index Cond: (the_geom &&\n'0103000020E610000001000000050000000000004C354D534022D3333EBDA03C400000004C354D53407BA9AC29FEA13C40000000B4904D53407BA9AC29FEA13C40000000B49\n04D534022D3333EBDA03C400000004C354D534022D3333EBDA03C40'::geometry)\n Filter: ((((ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'IP1-IRU-Intracity'::text) AN\nD (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'IRU-In\ntercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub\n_type)::text = 'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND\n(ug_route_sub_type IS NOT NUL\nL)) OR (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'Own-Intercity-Patch-replacement'::tex\nt) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_su\nb_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_rout\ne_sub_type)::text = 'On kerb'::text) AND (ug_route_sub_type IS NOT NULL))\nOR (((ug_route_sub_type)::text = 'Other'::text) AND (ug_route_sub_type IS\nNOT NULL)) OR (((ug_\nroute_sub_type)::text = 'Suspend'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_sub_type)::text = 'In Duct Chamber'::text) AND\n(ug_route_sub_type IS NOT NU\nLL)) OR (((ug_route_sub_type)::text = ''::text) AND (ug_route_sub_type IS\nNOT NULL)) OR (ug_route_sub_type IS NULL) OR (((sw_uid22)::text =\n'Overhead'::text) AND (sw_ui\nd22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity'::text)\nAND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text =\n'Overhead'::text) AND (sw_uid22 IS\n NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text)\nAND (sw_uid22 IS NOT N\nULL) AND ((ug_route_sub_type)::text =\n'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_ui\nd22 IS NOT NULL) AND ((ug_route_sub_type)::text =\n'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)))\n Planning time: 0.845 ms\n Execution time: 1.104 ms\n\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nNew Queries:\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nQuery No:1\n==========\n\n1. Issue while executing query ==> Taking long time 541.423 ms\n2. It is feteching: 71815 rows.\n\nQuery Changes:\n==============\n\na). Changed encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"),\n1.506687768824122E-5, true)),'base64') TO\n encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"),\n0.026540849041691673, true))\n\n2). Some geom changes.\n\n\n\nExplain Plan Observations:\n==========================\n\n1. 
Bitmap Scan instead of Index scan .\n=================================\n\n-> Bitmap Index Scan on underground_route_the_geom_geo_idx\n (cost=0.00..2382.03 rows=64216 width=0) (actual time=30.147..30.147\nrows=71847 loops=1)\n Index Cond: (the_geom &&\n'0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE\n7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)\n\n\n2. Again recheck cond on new query.\n===================================\n\nBitmap Heap Scan on underground_route (cost=2394.70..139217.49 rows=50676\nwidth=157) (actual time=50.335..535.617 rows=71847 loops=1)\n Recheck Cond: (the_geom &&\n'0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5F\nFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)\n\nExplain Plan for new query:\n==========================\n\n explain analyze SELECT\n\"underground_route_id\",\"ug_route_sub_type\",\"sw_uid22\",encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"),\n0.026540849041691673, true)),'base64') as \"the_geom\" FROM\n\"schema\".\"underground_route\" WHERE (\"the_geom\" && ST_GeomFromText('POLYGON\n((89.91210936248413 -0.0878905905185982, 89.91210936248413\n41.04621680978718, 135.0878906061956 41.04621680978718, 135.0878906061956\n-0.0878905905185982, 89.91210936248413 -0.0878905905185982))', 4326) AND\n((\"ug_route_sub_type\" = 'IP1-IRU-Intercity' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = 'IP1-IRU-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intracity'\nAND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'IP1-Own-Intercity' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity'\nAND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" =\n'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Clamping' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'None' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'On kerb' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Other' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'Suspend' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"ug_route_sub_type\" = 'In Duct Chamber' AND \"ug_route_sub_type\" IS NOT\nNULL ) OR (\"ug_route_sub_type\" = '' AND \"ug_route_sub_type\" IS NOT NULL )\nOR \"ug_route_sub_type\" IS NULL OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\"\nIS NOT NULL AND \"ug_route_sub_type\" = 'Own-Intercity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND\n\"sw_uid22\" IS NOT NULL AND \"ug_route_sub_type\" = 'Own-Intracity' AND\n\"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND\n\"sw_uid22\" IS NOT NULL AND \"ug_route_sub_type\" =\n'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR\n(\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL AND\n\"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' 
AND\n\"ug_route_sub_type\" IS NOT NULL )));\n\n\n Bitmap Heap Scan on underground_route (cost=2394.70..139217.49 rows=50676\nwidth=157) (actual time=50.335..535.617 rows=71847 loops=1)\n Recheck Cond: (the_geom &&\n'0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5F\nFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)\n Filter: ((((ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'IP1-IRU-Intracity'::text) AN\nD (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'IRU-In\ntercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub\n_type)::text = 'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND\n(ug_route_sub_type IS NOT NUL\nL)) OR (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'Own-Intercity-Patch-replacement'::tex\nt) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text =\n'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_su\nb_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR\n(((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_rout\ne_sub_type)::text = 'On kerb'::text) AND (ug_route_sub_type IS NOT NULL))\nOR (((ug_route_sub_type)::text = 'Other'::text) AND (ug_route_sub_type IS\nNOT NULL)) OR (((ug_\nroute_sub_type)::text = 'Suspend'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((ug_route_sub_type)::text = 'In Duct Chamber'::text) AND\n(ug_route_sub_type IS NOT NU\nLL)) OR (((ug_route_sub_type)::text = ''::text) AND (ug_route_sub_type IS\nNOT NULL)) OR (ug_route_sub_type IS NULL) OR (((sw_uid22)::text =\n'Overhead'::text) AND (sw_ui\nd22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity'::text)\nAND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text =\n'Overhead'::text) AND (sw_uid22 IS\n NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity'::text) AND\n(ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text)\nAND (sw_uid22 IS NOT N\nULL) AND ((ug_route_sub_type)::text =\n'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_ui\nd22 IS NOT NULL) AND ((ug_route_sub_type)::text =\n'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT\nNULL)))\n Heap Blocks: exact=45957\n -> Bitmap Index Scan on underground_route_the_geom_geo_idx\n (cost=0.00..2382.03 rows=64216 width=0) (actual time=30.147..30.147\nrows=71847 loops=1)\n Index Cond: (the_geom &&\n'0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE\n7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)\n Planning time: 0.906 ms\n Execution time: 541.423 ms\n(8 rows)\n\n\n************************************************************************************************************************\n\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nOld 
Queriy:\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nQuery No:2\n\n1. No issue while executing query.\n2. It is feteching: None.\n\n EXPLAIN ANALYZE SELECT\n\"building_id\",\"color\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64')\nas \"the_geom\" FROM \"schema\".\"building\" WHERE (\"the_geom\" &&\nST_GeomFromText('POLYGON ((55.94238281250001 21.657428197370628,\n55.94238281250001 32.212801068015175, 67.80761718750001 32.212801068015175,\n67.80761718750001 21.657428197370628, 55.94238281250001\n21.657428197370628))', 4326) AND ((\"color\" = 'RED' AND \"color\" IS NOT NULL\n) OR (\"color\" = 'GREEN' AND \"color\" IS NOT NULL ) OR (\"color\" = 'AMBER' AND\n\"color\" IS NOT NULL ) OR (\"color\" = 'YELLOW' AND \"color\" IS NOT NULL ) OR\n(\"color\" = 'BLACK' AND \"color\" IS NOT NULL ) OR (\"color\" = 'BLUE' AND\n\"color\" IS NOT NULL ) OR (\"color\" = 'LIGHTGREEN' AND \"color\" IS NOT NULL\n)));\n\n Index Scan using building_the_geom_geo_idx on building (cost=0.28..8.32\nrows=1 width=47) (actual time=0.014..0.014 rows=0 loops=1)\n Index Cond: (the_geom &&\n'0103000020E6100000010000000500000001000000A0F84B40D22CDF364DA8354001000000A0F84B40EBD6BD103D1B404001000000B0F35040EBD6BD103D1B404001000000B\n0F35040D22CDF364DA8354001000000A0F84B40D22CDF364DA83540'::geometry)\n Filter: ((color IS NOT NULL) AND (((color)::text = 'RED'::text) OR\n((color)::text = 'GREEN'::text) OR ((color)::text = 'AMBER'::text) OR\n((color)::text = 'YELLOW'::t\next) OR ((color)::text = 'BLACK'::text) OR ((color)::text = 'BLUE'::text)\nOR ((color)::text = 'LIGHTGREEN'::text)))\n Planning time: 12.002 ms\n Execution time: 0.099 ms\n(5 rows)\n\n\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nNew Queries:\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\n1. issue while executing query ==>\n2. It is feteching: None. ==> --1462 rows returned\n\nQuery Changes:\n==============\n\n1). Some geom changes.\n\nExplain Plan Observations:\n==========================\n\n1. Bitmap Scan instead of Index scan .\n=================================\n\n-> Bitmap Index Scan on building_the_geom_geo_idx (cost=0.00..49.53\nrows=1234 width=0) (actual time=0.560..0.560 rows=1445 loops=1)\n Index Cond: (building.the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B019\n9B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)\n\n\n2. 
Again recheck cond on new query.\n===================================\n\nBitmap Heap Scan on schema.building (cost=49.75..2775.12 rows=860\nwidth=47) (actual time=0.923..11.523 rows=1444 loops=1)\n Output: building_id, color, encode(st_asbinary(st_force2d(the_geom)),\n'base64'::text)\n Recheck Cond: (building.the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B44\n4057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)\n\nExplain Plan:\n-------------\n\nexplain analyze verbose SELECT\n\"building_id\",\"color\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64')\nas \"the_geom\" FROM \"schema\".\"building\" WHERE (\"the_geom\" &&\nST_GeomFromText('POLYGON ((89.69238280001471 -0.3076157096010902,\n89.69238280001471 41.211721505803354, 135.30761716866502\n41.211721505803354, 135.30761716866502 -0.3076157096010902,\n89.69238280001471 -0.3076157096010902))', 4326) AND ((\"color\" = 'RED' AND\n\"color\" IS NOT NULL ) OR (\"color\" = 'GREEN' AND \"color\" IS NOT NULL ) OR\n(\"color\" = 'AMBER' AND \"color\" IS NOT NULL ) OR (\"color\" = 'YELLOW' AND\n\"color\" IS NOT NULL ) OR (\"color\" = 'BLACK' AND \"color\" IS NOT NULL ) OR\n(\"color\" = 'BLUE' AND \"color\" IS NOT NULL ) OR (\"color\" = 'LIGHTGREEN' AND\n\"color\" IS NOT NULL )));\n\nBitmap Heap Scan on schema.building (cost=49.75..2775.12 rows=860\nwidth=47) (actual time=0.923..11.523 rows=1444 loops=1)\n Output: building_id, color, encode(st_asbinary(st_force2d(the_geom)),\n'base64'::text)\n Recheck Cond: (building.the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B44\n4057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)\n Filter: ((building.color IS NOT NULL) AND (((building.color)::text =\n'RED'::text) OR ((building.color)::text = 'GREEN'::text) OR\n((building.color)::text = 'AMBER'::t\next) OR ((building.color)::text = 'YELLOW'::text) OR\n((building.color)::text = 'BLACK'::text) OR ((building.color)::text =\n'BLUE'::text) OR ((building.color)::text = 'L\nIGHTGREEN'::text)))\n Rows Removed by Filter: 1\n Heap Blocks: exact=1148\n -> Bitmap Index Scan on building_the_geom_geo_idx (cost=0.00..49.53\nrows=1234 width=0) (actual time=0.560..0.560 rows=1445 loops=1)\n Index Cond: (building.the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B019\n9B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)\n Planning time: 0.813 ms\n Execution time: 11.785 ms\n(10 rows)\n\n\n******************************************************************************************************************\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nOld Queriy:\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nQuery No:3\n\n1. No issue while executing query.\n2. 
It is feteching: 13 rows.\n\n\nEXPLAIN ANALYZE SELECT\n\"mahole_id\",\"color\",\"u_id15\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64')\nas \"the_geom\" FROM \"schema\".\"manhole\" WHERE \"the_geom\" &&\nST_GeomFromText('POLYGON ((77.22275018692015 28.632614963963334,\n77.22275018692015 28.637699918367634, 77.22854375839233 28.637699918367634,\n77.22854375839233 28.632614963963334, 77.22275018692015\n28.632614963963334))', 4326);\n\n\n'Index Scan using manhole_the_geom_geo_idx on manhole (cost=0.28..20.38\nrows=4 width=51) (actual time=0.056..0.103 rows=13 loops=1)'\n' Index Cond: (the_geom &&\n'0103000020E61000000100000005000000FFFFFF89414E5340C82EE50DF3A13C40FFFFFF89414E5340050D464D40A33C4000000076A04E5340050D464D40A33C4000000076A04E5340C82EE50DF3A13C40FFFFFF89414E5340C82EE50DF3A13C40'::geometry)'\n'Planning time: 0.266 ms'\n'Execution time: 0.132 ms'\n\n============================================\n\nexplain analyze SELECT\n\"mahole_id\",\"color\",\"u_id15\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64')\nas \"the_geom\" FROM \"schema\".\"manhole\" WHERE \"the_geom\" &&\nST_GeomFromText('POLYGON ((89.69238280001471 -0.3076157096010902,\n89.69238280001471 41.211721505803354, 135.30761716866502\n41.211721505803354, 135.30761716866502 -0.3076157096010902,\n89.69238280001471 -0.3076157096010902))', 4326);\n\n'Bitmap Heap Scan on manhole (cost=272.70..14311.39 rows=7280 width=51)\n(actual time=1.956..74.734 rows=7537 loops=1)'\n' Recheck Cond: (the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)'\n' Heap Blocks: exact=5512'\n' -> Bitmap Index Scan on manhole_the_geom_geo_idx (cost=0.00..270.88\nrows=7280 width=0) (actual time=1.181..1.181 rows=7537 loops=1)'\n' Index Cond: (the_geom &&\n'0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)'\n'Planning time: 0.287 ms'\n'Execution time: 75.180 ms'\n\n--7537 rows returned.\n\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n\nPlease provide some suggestion on this.\n\nThanks & Regards,\nPostgAnn.\n\nHi Team,Thanks for your support.We are using below environment: Application :Programming Language : JAVAGeoserverDatabase Stack:PostgreSQL : 9.5.15PostgisWe have 3 geoserver queries and are getting some performance issues after changing the GeoServer queries.I have posted the queries and explain the plans of both the old and new queries.The same type of issues found for 3 queries:1. Changed index scan to Bitmap scan.2. All New Queries, again condition checked. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Old Queriy:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Query No:11. No issue while executing query.2. 
It is feteching: 38 rows only.===EXPLAIN ANALYZE SELECT \"underground_route_id\",\"ug_route_sub_type\",\"sw_uid22\",encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"), 1.506687768824122E-5, true)),'base64') as \"the_geom\" FROM \"schema\".\"underground_route\" WHERE  (\"the_geom\" && ST_GeomFromText('POLYGON ((77.20637798309326 28.627887618687176, 77.20637798309326 28.632784466413323, 77.21195697784424 28.632784466413323, 77.21195697784424 28.627887618687176, 77.20637798309326 28.627887618687176))', 4326) AND ((\"ug_route_sub_type\" = 'IP1-IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-IRU-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Clamping' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'None' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'On kerb' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Other' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Suspend' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'In Duct Chamber' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = '' AND \"ug_route_sub_type\" IS NOT NULL ) OR \"ug_route_sub_type\" IS NULL  OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL )));   Explan Plan: ============  Index Scan using underground_route_the_geom_geo_idx on underground_route  (cost=0.41..41.20 rows=7 width=157) (actual time=0.158..1.010 rows=38 loops=1)   Index Cond: (the_geom && '0103000020E610000001000000050000000000004C354D534022D3333EBDA03C400000004C354D53407BA9AC29FEA13C40000000B4904D53407BA9AC29FEA13C40000000B4904D534022D3333EBDA03C400000004C354D534022D3333EBDA03C40'::geometry)   Filter: ((((ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IP1-IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 
'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'On kerb'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Other'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Suspend'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'In Duct Chamber'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = ''::text) AND (ug_route_sub_type IS NOT NULL)) OR (ug_route_sub_type IS NULL) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL))) Planning time: 0.845 ms Execution time: 1.104 ms>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>New Queries:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Query No:1==========1. Issue while executing query ==> Taking long time 541.423 ms2. It is feteching: 71815 rows.Query Changes:==============a). Changed encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"), 1.506687768824122E-5, true)),'base64')  TO  encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"), 0.026540849041691673, true)) 2). Some geom changes.Explain Plan Observations:==========================1. Bitmap Scan instead of Index scan .=================================->  Bitmap Index Scan on underground_route_the_geom_geo_idx  (cost=0.00..2382.03 rows=64216 width=0) (actual time=30.147..30.147 rows=71847 loops=1)         Index Cond: (the_geom && '0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)2. 
Again recheck cond on new query.===================================Bitmap Heap Scan on underground_route  (cost=2394.70..139217.49 rows=50676 width=157) (actual time=50.335..535.617 rows=71847 loops=1)   Recheck Cond: (the_geom && '0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)Explain Plan for new query:==========================  explain analyze SELECT \"underground_route_id\",\"ug_route_sub_type\",\"sw_uid22\",encode(ST_AsBinary(ST_Simplify(ST_Force2D(\"the_geom\"), 0.026540849041691673, true)),'base64') as \"the_geom\" FROM \"schema\".\"underground_route\" WHERE  (\"the_geom\" && ST_GeomFromText('POLYGON ((89.91210936248413 -0.0878905905185982, 89.91210936248413 41.04621680978718, 135.0878906061956 41.04621680978718, 135.0878906061956 -0.0878905905185982, 89.91210936248413 -0.0878905905185982))', 4326) AND ((\"ug_route_sub_type\" = 'IP1-IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-IRU-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IRU-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'IP1-Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Clamping' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'None' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'On kerb' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Other' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'Suspend' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = 'In Duct Chamber' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"ug_route_sub_type\" = '' AND \"ug_route_sub_type\" IS NOT NULL ) OR \"ug_route_sub_type\" IS NULL  OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intercity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intracity' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intercity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL ) OR (\"sw_uid22\" = 'Overhead' AND \"sw_uid22\" IS NOT NULL  AND \"ug_route_sub_type\" = 'Own-Intracity-Patch-replacement' AND \"ug_route_sub_type\" IS NOT NULL )));     Bitmap Heap Scan on underground_route  (cost=2394.70..139217.49 rows=50676 width=157) (actual time=50.335..535.617 rows=71847 loops=1)   Recheck Cond: (the_geom && '0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry)   Filter: ((((ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR 
(((ug_route_sub_type)::text = 'IP1-IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'On kerb'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Other'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'Suspend'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = 'In Duct Chamber'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((ug_route_sub_type)::text = ''::text) AND (ug_route_sub_type IS NOT NULL)) OR (ug_route_sub_type IS NULL) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR (((sw_uid22)::text = 'Overhead'::text) AND (sw_uid22 IS NOT NULL) AND ((ug_route_sub_type)::text = 'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)))   Heap Blocks: exact=45957   ->  Bitmap Index Scan on underground_route_the_geom_geo_idx  (cost=0.00..2382.03 rows=64216 width=0) (actual time=30.147..30.147 rows=71847 loops=1)         Index Cond: (the_geom && '0103000020E61000000100000005000000AA8FF2FF5F7A56403B4CE76BFF7FB6BFAA8FF2FF5F7A5640DC47B36EEA8544408BE7F5FFCFE26040DC47B36EEA8544408BE7F5FFCFE260403B4CE76BFF7FB6BFAA8FF2FF5F7A56403B4CE76BFF7FB6BF'::geometry) Planning time: 0.906 ms Execution time: 541.423 ms(8 rows)************************************************************************************************************************>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Old Queriy:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Query No:21. No issue while executing query.2. It is feteching: None. 
EXPLAIN ANALYZE SELECT \"building_id\",\"color\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"schema\".\"building\" WHERE  (\"the_geom\" && ST_GeomFromText('POLYGON ((55.94238281250001 21.657428197370628, 55.94238281250001 32.212801068015175, 67.80761718750001 32.212801068015175, 67.80761718750001 21.657428197370628, 55.94238281250001 21.657428197370628))', 4326) AND ((\"color\" = 'RED' AND \"color\" IS NOT NULL ) OR (\"color\" = 'GREEN' AND \"color\" IS NOT NULL ) OR (\"color\" = 'AMBER' AND \"color\" IS NOT NULL ) OR (\"color\" = 'YELLOW' AND \"color\" IS NOT NULL ) OR (\"color\" = 'BLACK' AND \"color\" IS NOT NULL ) OR (\"color\" = 'BLUE' AND \"color\" IS NOT NULL ) OR (\"color\" = 'LIGHTGREEN' AND \"color\" IS NOT NULL ))); Index Scan using building_the_geom_geo_idx on building  (cost=0.28..8.32 rows=1 width=47) (actual time=0.014..0.014 rows=0 loops=1)   Index Cond: (the_geom && '0103000020E6100000010000000500000001000000A0F84B40D22CDF364DA8354001000000A0F84B40EBD6BD103D1B404001000000B0F35040EBD6BD103D1B404001000000B0F35040D22CDF364DA8354001000000A0F84B40D22CDF364DA83540'::geometry)   Filter: ((color IS NOT NULL) AND (((color)::text = 'RED'::text) OR ((color)::text = 'GREEN'::text) OR ((color)::text = 'AMBER'::text) OR ((color)::text = 'YELLOW'::text) OR ((color)::text = 'BLACK'::text) OR ((color)::text = 'BLUE'::text) OR ((color)::text = 'LIGHTGREEN'::text))) Planning time: 12.002 ms Execution time: 0.099 ms(5 rows)>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>New Queries:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>1. issue while executing query ==> 2. It is feteching: None.\t\t==> --1462 rows returnedQuery Changes:==============1). Some geom changes.Explain Plan Observations:==========================1. Bitmap Scan instead of Index scan .=================================->  Bitmap Index Scan on building_the_geom_geo_idx  (cost=0.00..49.53 rows=1234 width=0) (actual time=0.560..0.560 rows=1445 loops=1)         Index Cond: (building.the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)2. 
Again recheck cond on new query.===================================Bitmap Heap Scan on schema.building  (cost=49.75..2775.12 rows=860 width=47) (actual time=0.923..11.523 rows=1444 loops=1)   Output: building_id, color, encode(st_asbinary(st_force2d(the_geom)), 'base64'::text)   Recheck Cond: (building.the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)Explain Plan:-------------explain analyze verbose SELECT \"building_id\",\"color\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"schema\".\"building\" WHERE  (\"the_geom\" && ST_GeomFromText('POLYGON ((89.69238280001471 -0.3076157096010902, 89.69238280001471 41.211721505803354, 135.30761716866502 41.211721505803354, 135.30761716866502 -0.3076157096010902, 89.69238280001471 -0.3076157096010902))', 4326) AND ((\"color\" = 'RED' AND \"color\" IS NOT NULL ) OR (\"color\" = 'GREEN' AND \"color\" IS NOT NULL ) OR (\"color\" = 'AMBER' AND \"color\" IS NOT NULL ) OR (\"color\" = 'YELLOW' AND \"color\" IS NOT NULL ) OR (\"color\" = 'BLACK' AND \"color\" IS NOT NULL ) OR (\"color\" = 'BLUE' AND \"color\" IS NOT NULL ) OR (\"color\" = 'LIGHTGREEN' AND \"color\" IS NOT NULL )));Bitmap Heap Scan on schema.building  (cost=49.75..2775.12 rows=860 width=47) (actual time=0.923..11.523 rows=1444 loops=1)   Output: building_id, color, encode(st_asbinary(st_force2d(the_geom)), 'base64'::text)   Recheck Cond: (building.the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)   Filter: ((building.color IS NOT NULL) AND (((building.color)::text = 'RED'::text) OR ((building.color)::text = 'GREEN'::text) OR ((building.color)::text = 'AMBER'::text) OR ((building.color)::text = 'YELLOW'::text) OR ((building.color)::text = 'BLACK'::text) OR ((building.color)::text = 'BLUE'::text) OR ((building.color)::text = 'LIGHTGREEN'::text)))   Rows Removed by Filter: 1   Heap Blocks: exact=1148   ->  Bitmap Index Scan on building_the_geom_geo_idx  (cost=0.00..49.53 rows=1234 width=0) (actual time=0.560..0.560 rows=1445 loops=1)         Index Cond: (building.the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry) Planning time: 0.813 ms Execution time: 11.785 ms(10 rows)******************************************************************************************************************>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Old Queriy:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Query No:31. No issue while executing query.2. 
It is feteching: 13 rows.EXPLAIN ANALYZE SELECT \"mahole_id\",\"color\",\"u_id15\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"schema\".\"manhole\" WHERE  \"the_geom\" && ST_GeomFromText('POLYGON ((77.22275018692015 28.632614963963334, 77.22275018692015 28.637699918367634, 77.22854375839233 28.637699918367634, 77.22854375839233 28.632614963963334, 77.22275018692015 28.632614963963334))', 4326);'Index Scan using manhole_the_geom_geo_idx on manhole  (cost=0.28..20.38 rows=4 width=51) (actual time=0.056..0.103 rows=13 loops=1)''  Index Cond: (the_geom && '0103000020E61000000100000005000000FFFFFF89414E5340C82EE50DF3A13C40FFFFFF89414E5340050D464D40A33C4000000076A04E5340050D464D40A33C4000000076A04E5340C82EE50DF3A13C40FFFFFF89414E5340C82EE50DF3A13C40'::geometry)''Planning time: 0.266 ms''Execution time: 0.132 ms'============================================explain analyze SELECT \"mahole_id\",\"color\",\"u_id15\",encode(ST_AsBinary(ST_Force2D(\"the_geom\")),'base64') as \"the_geom\" FROM \"schema\".\"manhole\" WHERE  \"the_geom\" && ST_GeomFromText('POLYGON ((89.69238280001471 -0.3076157096010902, 89.69238280001471 41.211721505803354, 135.30761716866502 41.211721505803354, 135.30761716866502 -0.3076157096010902, 89.69238280001471 -0.3076157096010902))', 4326);'Bitmap Heap Scan on manhole  (cost=272.70..14311.39 rows=7280 width=51) (actual time=1.956..74.734 rows=7537 loops=1)''  Recheck Cond: (the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)''  Heap Blocks: exact=5512''  ->  Bitmap Index Scan on manhole_the_geom_geo_idx  (cost=0.00..270.88 rows=7280 width=0) (actual time=1.181..1.181 rows=7537 loops=1)''        Index Cond: (the_geom && '0103000020E610000001000000050000001298F2FF4F6C5640B23D1ECDF9AFD3BF1298F2FF4F6C564084A4B7B0199B444057E3F5FFD7E9604084A4B7B0199B444057E3F5FFD7E96040B23D1ECDF9AFD3BF1298F2FF4F6C5640B23D1ECDF9AFD3BF'::geometry)''Planning time: 0.287 ms''Execution time: 75.180 ms'--7537 rows returned.>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Please provide some suggestion on this.Thanks & Regards,PostgAnn.", "msg_date": "Wed, 20 May 2020 23:50:53 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion to improve query performance." } ]
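A note on reading the plans above: the old request box matches only 38 rows and runs in about 1 ms, while the new, far larger box matches about 71,847 rows and takes about 541 ms, so the switch from an index scan to a bitmap index/heap scan (with its Recheck Cond line) appears to follow from how many rows the box selects rather than from a problem with the index itself. One way to confirm how much of the layer each box covers is to count the matches per envelope; a sketch, with the table name taken from the queries above and the coordinates rounded:

    -- old, small request box
    SELECT count(*) FROM "schema"."underground_route"
    WHERE the_geom && ST_MakeEnvelope(77.2064, 28.6279, 77.2120, 28.6328, 4326);

    -- new, sub-continent sized request box
    SELECT count(*) FROM "schema"."underground_route"
    WHERE the_geom && ST_MakeEnvelope(89.9121, -0.0879, 135.0879, 41.0462, 4326);

If the second count is a large share of the table, most of the 541 ms is simply fetching, simplifying and encoding roughly 72k geometries, and no choice of scan type will change that by much.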
[ { "msg_contents": "Hi Team,\n\nThanks for your support.\n\nCould you please suggest on below query.\n\nEnvironment\nPostgreSQL: 9.5.15\nPostgis: 2.2.7\n\nMostly table contain GIS data.\n\nWhile analyzing the table getting below NOTICE. It seems is pretty\nunderstanding, but needs help on the below points.\n\n1 . What might be the reason for getting the NOTICE?.\n2. Is this lead to any problems in the future?.\n\nANALYZE SCHEMA.TABLE;\n\nNOTICE: no non-null/empty features, unable to compute statistics\nNOTICE: no non-null/empty features, unable to compute statistics\nQuery returned successfully with no result in 1.1 secs.\n\nThanks for your support.\n\nRegards,\nPostgAnn.\n\nHi Team,Thanks for your support.Could you please suggest on below query.EnvironmentPostgreSQL: 9.5.15Postgis: 2.2.7Mostly table contain GIS data.While analyzing the table getting below NOTICE. It seems is pretty understanding, but needs help on the below points.1 . What might be the reason for getting the NOTICE?.2. Is this lead to any problems in the future?.ANALYZE SCHEMA.TABLE;NOTICE:  no non-null/empty features, unable to compute statisticsNOTICE:  no non-null/empty features, unable to compute statisticsQuery returned successfully with no result in 1.1 secs.Thanks for your support.Regards,PostgAnn.", "msg_date": "Thu, 21 May 2020 19:48:03 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion on table analyze" }, { "msg_contents": "On 5/21/20 7:18 AM, postgann2020 s wrote:\n> Hi Team,\n> \n> Thanks for your support.\n> \n> Could you please suggest on below query.\n> \n> Environment\n> PostgreSQL: 9.5.15\n> Postgis: 2.2.7\n> \n> Mostly table contain GIS data.\n> \n> While analyzing the table getting below NOTICE. It seems is pretty \n> understanding, but needs help on the below points.\n> \n> 1 . What might be the reason for getting the NOTICE?.\n> 2. Is this lead to any problems in the future?.\n> \n> ANALYZE SCHEMA.TABLE;\n> \n> NOTICE:  no non-null/empty features, unable to compute statistics\n> NOTICE:  no non-null/empty features, unable to compute statistics\n> Query returned successfully with no result in 1.1 secs.\n\nThis is coming from PostGIS:\n\npostgis/gserialized_estimate.c:\n/* If there's no useful features, we can't work out stats */\n if ( ! notnull_cnt )\n {\n elog(NOTICE, \"no non-null/empty features, unable to \ncompute statistics\");\n stats->stats_valid = false;\n return;\n }\n\n\n\nYou might find more information from here:\n\nhttps://postgis.net/support/\n\nThough FYI PostGIS 2.2.7 is past EOL:\n\nhttps://postgis.net/source/\n\n> \n> Thanks for your support.\n> \n> Regards,\n> PostgAnn.\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Thu, 21 May 2020 07:41:37 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion on table analyze" }, { "msg_contents": "Hi Adrian,\n\nThanks, I'll check it out.\n\nRegards,\nPostgAnn.\n\nOn Thu, May 21, 2020 at 8:11 PM Adrian Klaver <[email protected]>\nwrote:\n\n> On 5/21/20 7:18 AM, postgann2020 s wrote:\n> > Hi Team,\n> >\n> > Thanks for your support.\n> >\n> > Could you please suggest on below query.\n> >\n> > Environment\n> > PostgreSQL: 9.5.15\n> > Postgis: 2.2.7\n> >\n> > Mostly table contain GIS data.\n> >\n> > While analyzing the table getting below NOTICE. It seems is pretty\n> > understanding, but needs help on the below points.\n> >\n> > 1 . What might be the reason for getting the NOTICE?.\n> > 2. 
Is this lead to any problems in the future?.\n> >\n> > ANALYZE SCHEMA.TABLE;\n> >\n> > NOTICE: no non-null/empty features, unable to compute statistics\n> > NOTICE: no non-null/empty features, unable to compute statistics\n> > Query returned successfully with no result in 1.1 secs.\n>\n> This is coming from PostGIS:\n>\n> postgis/gserialized_estimate.c:\n> /* If there's no useful features, we can't work out stats */\n> if ( ! notnull_cnt )\n> {\n> elog(NOTICE, \"no non-null/empty features, unable to\n> compute statistics\");\n> stats->stats_valid = false;\n> return;\n> }\n>\n>\n>\n> You might find more information from here:\n>\n> https://postgis.net/support/\n>\n> Though FYI PostGIS 2.2.7 is past EOL:\n>\n> https://postgis.net/source/\n>\n> >\n> > Thanks for your support.\n> >\n> > Regards,\n> > PostgAnn.\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n\nHi Adrian,Thanks, I'll check it out. Regards,PostgAnn.On Thu, May 21, 2020 at 8:11 PM Adrian Klaver <[email protected]> wrote:On 5/21/20 7:18 AM, postgann2020 s wrote:\n> Hi Team,\n> \n> Thanks for your support.\n> \n> Could you please suggest on below query.\n> \n> Environment\n> PostgreSQL: 9.5.15\n> Postgis: 2.2.7\n> \n> Mostly table contain GIS data.\n> \n> While analyzing the table getting below NOTICE. It seems is pretty \n> understanding, but needs help on the below points.\n> \n> 1 . What might be the reason for getting the NOTICE?.\n> 2. Is this lead to any problems in the future?.\n> \n> ANALYZE SCHEMA.TABLE;\n> \n> NOTICE:  no non-null/empty features, unable to compute statistics\n> NOTICE:  no non-null/empty features, unable to compute statistics\n> Query returned successfully with no result in 1.1 secs.\n\nThis is coming from PostGIS:\n\npostgis/gserialized_estimate.c:\n/* If there's no useful features, we can't work out stats */\n         if ( ! notnull_cnt )\n         {\n                 elog(NOTICE, \"no non-null/empty features, unable to \ncompute statistics\");\n                 stats->stats_valid = false;\n                 return;\n         }\n\n\n\nYou might find more information from here:\n\nhttps://postgis.net/support/\n\nThough FYI PostGIS 2.2.7 is past EOL:\n\nhttps://postgis.net/source/\n\n> \n> Thanks for your support.\n> \n> Regards,\n> PostgAnn.\n\n\n-- \nAdrian Klaver\[email protected]", "msg_date": "Thu, 21 May 2020 20:18:29 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestion on table analyze" } ]
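The NOTICE above means that every geometry PostGIS sampled in that column was NULL or empty, so it could not build spatial statistics for it; that is harmless by itself, but it usually points at a geometry column that never gets populated. A quick check along these lines can confirm which it is (a sketch only; the geom column and table names are placeholders to substitute):

    SELECT count(*)                                  AS total_rows,
           count(*) FILTER (WHERE geom IS NULL)      AS null_geometries,
           count(*) FILTER (WHERE ST_IsEmpty(geom))  AS empty_geometries
    FROM schema.table_name;

If the column is genuinely unused, the NOTICE can simply be ignored; queries will still run, the planner just has no spatial statistics for that column.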
[ { "msg_contents": "Hi Team,\nThanks for your support.\nCould you please suggest on below query.\n\nEnvironment\n\nPostgreSQL: 9.5.15\nPostgis: 2.2.7\nMostly table contains GIS data and we are trying to creating an index on\nthe column which is having an avg width of 149bytes.\n\n CREATE INDEX index_idx\n ON SCHEMA.TABLE\n USING btree\n (column);\n\nERROR: index row size 2976 exceeds maximum 2712 for index \"index_idx\"\nHINT: Values larger than 1/3 of a buffer page cannot be indexed.\nConsider a function index of an MD5 hash of the value, or use full-text\nindexing.\n\nCould you please suggest on below queries.\n1. How to solve the issue?.\n2. What type of index is the best suited for this type of data?.\n\nThanks for your support.\n\nRegards,\nPostgAnn.\n\nHi Team,Thanks for your support.Could you please suggest on below query.EnvironmentPostgreSQL: 9.5.15Postgis: 2.2.7Mostly table contains GIS data and we are trying to creating an index on the column which is having an avg width of 149bytes. CREATE INDEX index_idx  ON SCHEMA.TABLE  USING btree  (column);ERROR:  index row size 2976 exceeds maximum 2712 for index \"index_idx\"HINT:  Values larger than 1/3 of a buffer page cannot be indexed.Consider a function index of an MD5 hash of the value, or use full-text indexing.Could you please suggest on below queries.1. How to solve the issue?.2. What type of index is the best suited for this type of data?.Thanks for your support.Regards,PostgAnn.", "msg_date": "Thu, 21 May 2020 19:57:44 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion on index creation for TEXT data field" }, { "msg_contents": "On Thu, May 21, 2020 at 7:28 AM postgann2020 s <[email protected]>\nwrote:\n\n> which is having an avg width of 149bytes.\n>\n\nThe average is meaningless if your maximum value exceeds a limit.\n\n2. What type of index is the best suited for this type of data?.\n>\n\nAnd what type of data exactly are we talking about. \"TEXT\" is not a useful\nanswer.\n\nIf the raw data is too large no index is going to be \"best\" - as the hint\nsuggests you either need to drop the idea of indexing the column altogether\nor apply some function to the raw data and then index the result.\n\nDavid J.\n\nOn Thu, May 21, 2020 at 7:28 AM postgann2020 s <[email protected]> wrote:which is having an avg width of 149bytes.The average is meaningless if your maximum value exceeds a limit.2. What type of index is the best suited for this type of data?.And what type of data exactly are we talking about.  \"TEXT\" is not a useful answer.If the raw data is too large no index is going to be \"best\" -  as the hint suggests you either need to drop the idea of indexing the column altogether or apply some function to the raw data and then index the result.David J.", "msg_date": "Thu, 21 May 2020 07:36:39 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion on index creation for TEXT data field" }, { "msg_contents": "Hi David,\n\nThanks for your email.\n\n>And what type of data exactly are we talking about. ==> Column is stroing\nGIS data.\n\nRegards,\nPostgAnn.\n\nOn Thu, May 21, 2020 at 8:06 PM David G. Johnston <\[email protected]> wrote:\n\n> On Thu, May 21, 2020 at 7:28 AM postgann2020 s <[email protected]>\n> wrote:\n>\n>> which is having an avg width of 149bytes.\n>>\n>\n> The average is meaningless if your maximum value exceeds a limit.\n>\n> 2. 
What type of index is the best suited for this type of data?.\n>>\n>\n> And what type of data exactly are we talking about. \"TEXT\" is not a\n> useful answer.\n>\n> If the raw data is too large no index is going to be \"best\" - as the hint\n> suggests you either need to drop the idea of indexing the column altogether\n> or apply some function to the raw data and then index the result.\n>\n> David J.\n>\n>\n\nHi David,Thanks for your email.>And what type of data exactly are we talking about.  ==> Column is stroing GIS data.Regards,PostgAnn.On Thu, May 21, 2020 at 8:06 PM David G. Johnston <[email protected]> wrote:On Thu, May 21, 2020 at 7:28 AM postgann2020 s <[email protected]> wrote:which is having an avg width of 149bytes.The average is meaningless if your maximum value exceeds a limit.2. What type of index is the best suited for this type of data?.And what type of data exactly are we talking about.  \"TEXT\" is not a useful answer.If the raw data is too large no index is going to be \"best\" -  as the hint suggests you either need to drop the idea of indexing the column altogether or apply some function to the raw data and then index the result.David J.", "msg_date": "Thu, 21 May 2020 20:14:37 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestion on index creation for TEXT data field" }, { "msg_contents": "On 5/21/20 7:27 AM, postgann2020 s wrote:\n> Hi Team,\n> Thanks for your support.\n> Could you please suggest on below query.\n> \n> Environment\n> \n> PostgreSQL: 9.5.15\n> Postgis: 2.2.7\n> Mostly table contains GIS data and we are trying to creating an index on \n> the column which is having an avg width of 149bytes.\n> \n>  CREATE INDEX index_idx\n>   ON SCHEMA.TABLE\n>   USING btree\n>   (column);\n> \n> ERROR:  index row size 2976 exceeds maximum 2712 for index \"index_idx\"\n> HINT:  Values larger than 1/3 of a buffer page cannot be indexed.\n> Consider a function index of an MD5 hash of the value, or use full-text \n> indexing.^^^^^^^^^^^^^^^^^^^^^^\nHint supplies answer to 1) and 2) below.\n\n> \n> Could you please suggest on below queries.\n> 1. How to solve the issue?.\n> 2. What type of index is the best suited for this type of data?.\n> \n> Thanks for your support.\n> \n> Regards,\n> PostgAnn.\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n", "msg_date": "Thu, 21 May 2020 07:45:26 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion on index creation for TEXT data field" }, { "msg_contents": "On Thu, May 21, 2020 at 7:45 AM postgann2020 s <[email protected]>\nwrote:\n\n> >And what type of data exactly are we talking about. ==> Column is\n> stroing GIS data.\n>\n\nGIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this\nis maybe better posted to the PostGIS community directly...\n\nDavid J.\n\nOn Thu, May 21, 2020 at 7:45 AM postgann2020 s <[email protected]> wrote:>And what type of data exactly are we talking about.  ==> Column is stroing GIS data.GIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this is maybe better posted to the PostGIS community directly...David J.", "msg_date": "Thu, 21 May 2020 07:51:30 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion on index creation for TEXT data field" }, { "msg_contents": "Hi David, Adrian,\n\nThanks for the information.\nSure, will post on PostGIS community.\n\nRegards,\nPostgAnn.\n\nOn Thu, May 21, 2020 at 8:21 PM David G. Johnston <\[email protected]> wrote:\n\n> On Thu, May 21, 2020 at 7:45 AM postgann2020 s <[email protected]>\n> wrote:\n>\n>> >And what type of data exactly are we talking about. ==> Column is\n>> stroing GIS data.\n>>\n>\n> GIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this\n> is maybe better posted to the PostGIS community directly...\n>\n> David J.\n>\n>\n\nHi David, Adrian,Thanks for the information. Sure, will post on PostGIS community.Regards,PostgAnn.On Thu, May 21, 2020 at 8:21 PM David G. Johnston <[email protected]> wrote:On Thu, May 21, 2020 at 7:45 AM postgann2020 s <[email protected]> wrote:>And what type of data exactly are we talking about.  ==> Column is stroing GIS data.GIS data isn't really TEXT and isn't a core datatype of PostgreSQL so this is maybe better posted to the PostGIS community directly...David J.", "msg_date": "Thu, 21 May 2020 20:23:18 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestion on index creation for TEXT data field" } ]
[ { "msg_contents": "Hi Team,\nThanks for your support.\n\nCould you please suggest on below query.\n\nWe have multiple long procs that are having 100s of data validations and\ncurrently we have written as below.\n\n***********\n\nif (SELECT 1 FROM SCHEMA.TABLE WHERE column=data AND column=data) then\nstatements\netc..\n\n***********\n\nAre there any other ways to validate the data, which will help us to\nimprove the performance of the query?.\n\nThanks for your support.\n\nRegards,\nPostgAnn.\n\nHi Team,Thanks for your support.Could you please suggest on below query. We have multiple long procs that are having 100s of data validations and currently we have written as below.***********if (SELECT 1  FROM SCHEMA.TABLE WHERE column=data AND column=data) then\tstatements\tetc..***********Are there any other ways to validate the data, which will help us to improve the performance of the query?.Thanks for your support.Regards,PostgAnn.", "msg_date": "Fri, 22 May 2020 12:11:10 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion to improve query performance of data validation in proc." }, { "msg_contents": "You should read through the and the contained linked FAQ - note especially\nthe concept and recommendation for “cross-posting”.\n\nhttps://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n\nOn Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:\n\n>\n> We have multiple long procs that are having 100s of data validations and\n> currently we have written as below.\n>\n> ***********\n>\n> if (SELECT 1 FROM SCHEMA.TABLE WHERE column=data AND column=data) then\n> statements\n> etc..\n>\n> ***********\n>\n> Are there any other ways to validate the data, which will help us to\n> improve the performance of the query?\n>\n\nI have no idea what your are trying to get at here. You should try\nproviding SQL that actually runs. Though at first glance it seems quite\nprobable your are doing useless work anyway.\n\nDavid J.\n\nYou should read through the and the contained linked FAQ - note especially the concept and recommendation for “cross-posting”.https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanicsOn Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:We have multiple long procs that are having 100s of data validations and currently we have written as below.***********if (SELECT 1  FROM SCHEMA.TABLE WHERE column=data AND column=data) then\tstatements\tetc..***********Are there any other ways to validate the data, which will help us to improve the performance of the query?I have no idea what your are trying to get at here.  You should try providing SQL that actually runs.  Though at first glance it seems quite probable your are doing useless work anyway.David J.", "msg_date": "Fri, 22 May 2020 00:06:05 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion to improve query performance of data validation in\n proc." 
}, { "msg_contents": "Hi David,\n\nThanks for your feedback.\n\nWe are using the below kind of validation throughout the proc in multiple\nlocations and for validation we are using the below statements.\n\n--check Data available or not for structure_id1\n IF EXISTS(SELECT 1 FROM schema.table_name WHERE\ncolumn1=structure_id1) THEN\n is_exists1 :=true;\nEND IF;\n\nWe are looking for a better query than \"*SELECT 1 FROM schema.table_name\nWHERE column1=structure_id1*\" this query for data validation.\n\nPlease suggest is there any other ways to validate this kind of queries\nwhich will improve the overall performance.\n\nRegards,\nPostgann.\n\nOn Fri, May 22, 2020 at 12:36 PM David G. Johnston <\[email protected]> wrote:\n\n> You should read through the and the contained linked FAQ - note especially\n> the concept and recommendation for “cross-posting”.\n>\n> https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics\n>\n> On Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:\n>\n>>\n>> We have multiple long procs that are having 100s of data validations and\n>> currently we have written as below.\n>>\n>> ***********\n>>\n>> if (SELECT 1 FROM SCHEMA.TABLE WHERE column=data AND column=data) then\n>> statements\n>> etc..\n>>\n>> ***********\n>>\n>> Are there any other ways to validate the data, which will help us to\n>> improve the performance of the query?\n>>\n>\n> I have no idea what your are trying to get at here. You should try\n> providing SQL that actually runs. Though at first glance it seems quite\n> probable your are doing useless work anyway.\n>\n> David J.\n>\n\nHi David,Thanks for your feedback. We are using the below kind of validation throughout the proc in multiple locations and for validation we are using the below statements.--check Data available or not for structure_id1       IF EXISTS(SELECT 1  FROM schema.table_name WHERE column1=structure_id1)  THEN \t      is_exists1 :=true;\t END IF;We are looking for a better query than \"SELECT 1  FROM schema.table_name WHERE column1=structure_id1\" this query for data validation.Please suggest is there any other ways to validate this kind of queries which will improve the overall performance.Regards,Postgann.On Fri, May 22, 2020 at 12:36 PM David G. Johnston <[email protected]> wrote:You should read through the and the contained linked FAQ - note especially the concept and recommendation for “cross-posting”.https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanicsOn Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:We have multiple long procs that are having 100s of data validations and currently we have written as below.***********if (SELECT 1  FROM SCHEMA.TABLE WHERE column=data AND column=data) then\tstatements\tetc..***********Are there any other ways to validate the data, which will help us to improve the performance of the query?I have no idea what your are trying to get at here.  You should try providing SQL that actually runs.  Though at first glance it seems quite probable your are doing useless work anyway.David J.", "msg_date": "Fri, 22 May 2020 13:14:27 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestion to improve query performance of data validation in\n proc." 
}, { "msg_contents": "On Friday, May 22, 2020, postgann2020 s <[email protected]> wrote:\n\n\n>\n> We are looking for a better query than \"*SELECT 1 FROM schema.table_name\n> WHERE column1=structure_id1*\" this query for data validation.\n>\n\n There is no more simple a query that involve records on a single,table.\n\nPlease suggest is there any other ways to validate this kind of queries\n> which will improve the overall performance.\n>\n\nAbandon procedural logic and embrace the declarative set oriented nature of\nSQL.\n\nDavid J.\n\nOn Friday, May 22, 2020, postgann2020 s <[email protected]> wrote: We are looking for a better query than \"SELECT 1  FROM schema.table_name WHERE column1=structure_id1\" this query for data validation. There is no more simple a query that involve records on a single,table.Please suggest is there any other ways to validate this kind of queries which will improve the overall performance.Abandon procedural logic and embrace the declarative set oriented nature of SQL.David J.", "msg_date": "Fri, 22 May 2020 01:09:40 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion to improve query performance of data validation in\n proc." }, { "msg_contents": "On Fri, May 22, 2020 at 2:09 AM David G. Johnston <\[email protected]> wrote:\n\n> On Friday, May 22, 2020, postgann2020 s <[email protected]> wrote:\n>\n>\n>>\n>> We are looking for a better query than \"*SELECT 1 FROM\n>> schema.table_name WHERE column1=structure_id1*\" this query for data\n>> validation.\n>>\n>\n If many rows match potentially, then wrapping the query with select\nexists(old_query) would allow the execution to bail asap.\n\nOn Fri, May 22, 2020 at 2:09 AM David G. Johnston <[email protected]> wrote:On Friday, May 22, 2020, postgann2020 s <[email protected]> wrote: We are looking for a better query than \"SELECT 1  FROM schema.table_name WHERE column1=structure_id1\" this query for data validation. If many rows match potentially, then wrapping the query with select exists(old_query) would allow the execution to bail asap.", "msg_date": "Fri, 22 May 2020 08:26:05 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion to improve query performance of data validation in\n proc." } ]
[ { "msg_contents": "Hi Team,\n\nThanks for your support.\n\nCould you please suggest on below query.\n\nEnvironmentPostgreSQL: 9.5.15\nPostgis: 2.2.7\n\nThe table contains GIS data which is fiber data(underground routes).\n\nWe are using the below query inside the proc which is taking a long time to\ncomplete.\n\n*************************************************************\n\nSELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like\n'%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id\n||',%' or Column1 like '%,sheath--'||cable_seq_id or\nColumn1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;\n\n****************************************************************\n\nWe have created an index on parental_path Column1 still it is taking 4secs\nto get the results.\n\nCould you please suggest a better way to execute the query.\n\nThanks for your support.\n\nRegards,\nPostgAnn.\n\nHi Team,Thanks for your support.Could you please suggest on below query.EnvironmentPostgreSQL: 9.5.15Postgis: 2.2.7The table contains GIS data which is fiber data(underground routes).We are using the below query inside the proc which is taking a long time to complete.*************************************************************SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id ||',%' or Column1 like '%,sheath--'||cable_seq_id  or Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;****************************************************************We have created an index on parental_path Column1 still it is taking 4secs to get the results.Could you please suggest a better way to execute the query.Thanks for your support.Regards,PostgAnn.", "msg_date": "Fri, 22 May 2020 12:29:16 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestion to improve query performance for GIS query." }, { "msg_contents": "On Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:\n\n>\n> SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like\n> '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id\n> ||',%' or Column1 like '%,sheath--'||cable_seq_id or\n> Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;\n>\n>\n> Could you please suggest a better way to execute the query\n>\n\nAdd a trigger to the table to normalize the contents of column1 upon insert\nand then rewrite your query to reference the newly created normalized\nfields.\n\nDavid J.\n\nOn Thursday, May 21, 2020, postgann2020 s <[email protected]> wrote:SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id ||',%' or Column1 like '%,sheath--'||cable_seq_id  or Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;Could you please suggest a better way to execute the queryAdd a trigger to the table to normalize the contents of column1 upon insert and then rewrite your query to reference the newly created normalized fields.David J.", "msg_date": "Fri, 22 May 2020 00:14:57 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion to improve query performance for GIS query." 
}, { "msg_contents": "Dear team,\n\nKindly try to execute the vacuum analyzer on that particular table and\nrefresh the session and execute the query.\n\nVACUUM (VERBOSE, ANALYZE) tablename;\n\nRegards,\nMohammed Afsar\nDatabase engineer\n\nOn Fri, May 22, 2020, 12:30 PM postgann2020 s <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> Thanks for your support.\n>\n> Could you please suggest on below query.\n>\n> EnvironmentPostgreSQL: 9.5.15\n> Postgis: 2.2.7\n>\n> The table contains GIS data which is fiber data(underground routes).\n>\n> We are using the below query inside the proc which is taking a long time\n> to complete.\n>\n> *************************************************************\n>\n> SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like\n> '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id\n> ||',%' or Column1 like '%,sheath--'||cable_seq_id or\n> Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;\n>\n> ****************************************************************\n>\n> We have created an index on parental_path Column1 still it is taking 4secs\n> to get the results.\n>\n> Could you please suggest a better way to execute the query.\n>\n> Thanks for your support.\n>\n> Regards,\n> PostgAnn.\n>\n\nDear team,Kindly try to execute the vacuum analyzer on that particular table and refresh the session and execute the query.VACUUM (VERBOSE, ANALYZE) tablename;Regards,Mohammed AfsarDatabase engineerOn Fri, May 22, 2020, 12:30 PM postgann2020 s <[email protected]> wrote:Hi Team,Thanks for your support.Could you please suggest on below query.EnvironmentPostgreSQL: 9.5.15Postgis: 2.2.7The table contains GIS data which is fiber data(underground routes).We are using the below query inside the proc which is taking a long time to complete.*************************************************************SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id ||',%' or Column1 like '%,sheath--'||cable_seq_id  or Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;****************************************************************We have created an index on parental_path Column1 still it is taking 4secs to get the results.Could you please suggest a better way to execute the query.Thanks for your support.Regards,PostgAnn.", "msg_date": "Fri, 22 May 2020 12:46:36 +0530", "msg_from": "Mohammed Afsar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestion to improve query performance for GIS query." }, { "msg_contents": "Thanks for your support David and Afsar.\n\nHi David,\n\nCould you please suggest the resource link to \"Add a trigger to the table\nto normalize the contents of column1 upon insert and then rewrite your\nquery to reference the newly created normalized fields.\" if anything\navailable. 
So that it will help me to get into issues.\n\nThanks for your support.\n\nRegards,\nPostgann.\n\n\nOn Fri, May 22, 2020 at 12:46 PM Mohammed Afsar <[email protected]> wrote:\n\n> Dear team,\n>\n> Kindly try to execute the vacuum analyzer on that particular table and\n> refresh the session and execute the query.\n>\n> VACUUM (VERBOSE, ANALYZE) tablename;\n>\n> Regards,\n> Mohammed Afsar\n> Database engineer\n>\n> On Fri, May 22, 2020, 12:30 PM postgann2020 s <[email protected]>\n> wrote:\n>\n>> Hi Team,\n>>\n>> Thanks for your support.\n>>\n>> Could you please suggest on below query.\n>>\n>> EnvironmentPostgreSQL: 9.5.15\n>> Postgis: 2.2.7\n>>\n>> The table contains GIS data which is fiber data(underground routes).\n>>\n>> We are using the below query inside the proc which is taking a long time\n>> to complete.\n>>\n>> *************************************************************\n>>\n>> SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like\n>> '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id\n>> ||',%' or Column1 like '%,sheath--'||cable_seq_id or\n>> Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;\n>>\n>> ****************************************************************\n>>\n>> We have created an index on parental_path Column1 still it is taking\n>> 4secs to get the results.\n>>\n>> Could you please suggest a better way to execute the query.\n>>\n>> Thanks for your support.\n>>\n>> Regards,\n>> PostgAnn.\n>>\n>\n\nThanks for your support David and Afsar.Hi David,Could you please suggest the resource link to  \"Add a trigger to the table to normalize the contents of column1 upon insert and then rewrite your query to reference the newly created normalized fields.\" if anything available. So that it will help me to get into issues.Thanks for your support.Regards,Postgann.On Fri, May 22, 2020 at 12:46 PM Mohammed Afsar <[email protected]> wrote:Dear team,Kindly try to execute the vacuum analyzer on that particular table and refresh the session and execute the query.VACUUM (VERBOSE, ANALYZE) tablename;Regards,Mohammed AfsarDatabase engineerOn Fri, May 22, 2020, 12:30 PM postgann2020 s <[email protected]> wrote:Hi Team,Thanks for your support.Could you please suggest on below query.EnvironmentPostgreSQL: 9.5.15Postgis: 2.2.7The table contains GIS data which is fiber data(underground routes).We are using the below query inside the proc which is taking a long time to complete.*************************************************************SELECT seq_no+1 INTO pair_seq_no FROM SCHEMA.TABLE WHERE (Column1 like '%,sheath--'||cable_seq_id ||',%' or Column1 like 'sheath--'||cable_seq_id ||',%' or Column1 like '%,sheath--'||cable_seq_id  or Column1='sheath--'||cable_seq_id) order by seq_no desc limit 1 ;****************************************************************We have created an index on parental_path Column1 still it is taking 4secs to get the results.Could you please suggest a better way to execute the query.Thanks for your support.Regards,PostgAnn.", "msg_date": "Fri, 22 May 2020 13:04:21 +0530", "msg_from": "postgann2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestion to improve query performance for GIS query." } ]
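One way to act on the normalisation advice above is to pull the sheath number out of parental_path into its own indexed integer column, kept current by a trigger on insert/update, so the lookup no longer needs leading-wildcard LIKE at all. A rough sketch under those assumptions; the column, index and backfill are made up for illustration and the trigger body is omitted:

    ALTER TABLE schema.table_name ADD COLUMN sheath_id integer;

    -- one-off backfill; an insert/update trigger would maintain it afterwards
    UPDATE schema.table_name
    SET sheath_id = substring(parental_path from 'sheath--([0-9]+)')::integer
    WHERE parental_path ~ 'sheath--[0-9]+';

    CREATE INDEX ON schema.table_name (sheath_id);

    -- the search in the procedure then becomes a plain indexable equality
    -- (cable_seq_id is the procedure's variable):
    SELECT seq_no + 1
    FROM schema.table_name
    WHERE sheath_id = cable_seq_id
    ORDER BY seq_no DESC
    LIMIT 1;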
[ { "msg_contents": "Hi Team,\n\nThanks for your support.\n\nCould someone please suggest on the below query.\n\nOne of the query which was created on GIS data is taking a long time and\neven it is not taking the index as well. I have included all the required\ndetails for reference.\n\nDatabase Stack:\n===============\nPostgreSQL : 9.5.15\nPostgis: 2.2.7\n\nTable Structure:\n===================\n\nALTER TABLE SCHEMA.TABLE_NAME ADD COLUMN parental_path text;\n\nCreated Indexes on column parental_path:\n=================================\n\nCREATE INDEX cable_pair_parental_path_idx\n ON SCHEMA.TABLE_NAME\n USING btree\n (md5(parental_path) COLLATE pg_catalog.\"default\");\n\nCREATE INDEX cable_pair_parental_path_idx_fulltext\n ON SCHEMA.TABLE_NAME\n USING gist\n (parental_path COLLATE pg_catalog.\"default\");\n\nSample data in \"parental_path\" column:\n======================================\n\n'route--2309421/2951584/3373649/2511322/1915187/2696397/2623291/2420708/2144348/2294454,circuit--88458/88460,sheath--8874'\n\nActual Query:\n=============\n\nSELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE\n'%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' ||\ncable_seq_id || ',%' OR parental_path LIKE '%,sheath--' || cable_seq_id OR\nparental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;\n\nExplain Plan:\n=============\n\nLimit (cost=108111.60..108111.61 rows=1 width=4) (actual\ntime=4597.605..4597.605 rows=0 loops=1)\n Output: ((seq_no + 1)), seq_no\n Buffers: shared hit=2967 read=69606 dirtied=1\n -> Sort (cost=108111.60..108113.09 rows=595 width=4) (actual\ntime=4597.603..4597.603 rows=0 loops=1)\n Output: ((seq_no + 1)), seq_no\n Sort Key: TABLE_NAME.seq_no DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=2967 read=69606 dirtied=1\n -> *Seq Scan on SCHEMA.TABLE_NAME (cost=0.00..108108.63 rows=595\nwidth=4) (actual time=4597.595..4597.595 rows=0 loops=1)*\n Output: (seq_no + 1), seq_no\n Filter: ((TABLE_NAME.parental_path ~~\n'%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n'%,sheath--64690'::text) OR (TABLE_NAME.parental_path =\n'sheath--64690'::text))\n Rows Removed by Filter: 1930188\n Buffers: shared hit=2967 read=69606 dirtied=1\n\nPlease share your suggestion.\n\nThanks & Regards,\nDevchef.\n\nHi Team,Thanks for your support.Could someone please suggest on the below query.One of the query which was created on GIS data is taking a long time and even it is not taking the index as well. 
I have included all the required details for reference.Database Stack:===============PostgreSQL : 9.5.15Postgis: 2.2.7Table Structure:===================ALTER TABLE SCHEMA.TABLE_NAME ADD COLUMN parental_path text;Created Indexes on column parental_path:=================================CREATE INDEX cable_pair_parental_path_idx  ON SCHEMA.TABLE_NAME  USING btree  (md5(parental_path) COLLATE pg_catalog.\"default\");  CREATE INDEX cable_pair_parental_path_idx_fulltext  ON SCHEMA.TABLE_NAME  USING gist  (parental_path COLLATE pg_catalog.\"default\");  Sample data in \"parental_path\" column:======================================  'route--2309421/2951584/3373649/2511322/1915187/2696397/2623291/2420708/2144348/2294454,circuit--88458/88460,sheath--8874'Actual Query:=============SELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE '%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' || cable_seq_id || ',%' OR parental_path LIKE '%,sheath--' || cable_seq_id OR parental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;Explain Plan:=============Limit  (cost=108111.60..108111.61 rows=1 width=4) (actual time=4597.605..4597.605 rows=0 loops=1)\t  Output: ((seq_no + 1)), seq_no\t  Buffers: shared hit=2967 read=69606 dirtied=1\t  ->  Sort  (cost=108111.60..108113.09 rows=595 width=4) (actual time=4597.603..4597.603 rows=0 loops=1)\t        Output: ((seq_no + 1)), seq_no\t        Sort Key: TABLE_NAME.seq_no DESC\t        Sort Method: quicksort  Memory: 25kB\t        Buffers: shared hit=2967 read=69606 dirtied=1\t        ->  Seq Scan on SCHEMA.TABLE_NAME  (cost=0.00..108108.63 rows=595 width=4) (actual time=4597.595..4597.595 rows=0 loops=1)\t              Output: (seq_no + 1), seq_no\t              Filter: ((TABLE_NAME.parental_path ~~ '%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ 'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ '%,sheath--64690'::text) OR (TABLE_NAME.parental_path = 'sheath--64690'::text))\t              Rows Removed by Filter: 1930188\t              Buffers: shared hit=2967 read=69606 dirtied=1Please share your suggestion.Thanks & Regards,Devchef.", "msg_date": "Fri, 22 May 2020 16:15:03 +0530", "msg_from": "devchef2020 d <[email protected]>", "msg_from_op": true, "msg_subject": "Request to help on Query improvement suggestion." 
}, { "msg_contents": "On Fri, 2020-05-22 at 16:15 +0530, devchef2020 d wrote:\n> PostgreSQL : 9.5.15\n\n> Created Indexes on column parental_path:\n> =================================\n> \n> CREATE INDEX cable_pair_parental_path_idx\n> ON SCHEMA.TABLE_NAME\n> USING btree\n> (md5(parental_path) COLLATE pg_catalog.\"default\");\n> \n> CREATE INDEX cable_pair_parental_path_idx_fulltext\n> ON SCHEMA.TABLE_NAME\n> USING gist\n> (parental_path COLLATE pg_catalog.\"default\");\n\n> SELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE '%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' || cable_seq_id || ',%' OR parental_path LIKE '%,sheath--' ||\n> cable_seq_id OR parental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;\n> \n> Explain Plan:\n> =============\n> \n> Limit (cost=108111.60..108111.61 rows=1 width=4) (actual time=4597.605..4597.605 rows=0 loops=1)\n> Output: ((seq_no + 1)), seq_no\n> Buffers: shared hit=2967 read=69606 dirtied=1\n> -> Sort (cost=108111.60..108113.09 rows=595 width=4) (actual time=4597.603..4597.603 rows=0 loops=1)\n> Output: ((seq_no + 1)), seq_no\n> Sort Key: TABLE_NAME.seq_no DESC\n> Sort Method: quicksort Memory: 25kB\n> Buffers: shared hit=2967 read=69606 dirtied=1\n> -> Seq Scan on SCHEMA.TABLE_NAME (cost=0.00..108108.63 rows=595 width=4) (actual time=4597.595..4597.595 rows=0 loops=1)\n> Output: (seq_no + 1), seq_no\n> Filter: ((TABLE_NAME.parental_path ~~ '%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ 'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ '%,sheath--64690'::text) OR\n> (TABLE_NAME.parental_path = 'sheath--64690'::text))\n> Rows Removed by Filter: 1930188\n> Buffers: shared hit=2967 read=69606 dirtied=1\n\nAn index on an expression can only be used if the expression is exactly the same as on one\nside of an operator in a WHERE condition.\n\nSo your only chance with that query is to hope for a bitmap OR with an index on \"parental path\".\n\nTwo things to try:\n\n1) CREATE INDEX ON table_name (parental_path text_pattern_ops);\n\n2) CREATE EXTENSION pg_trgm;\n CREATE INDEX ON table_name USING GIN (parental_path gin_trgm_ops);\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Mon, 25 May 2020 08:47:59 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request to help on Query improvement suggestion." 
}, { "msg_contents": "On Sun, May 24, 2020, 11:48 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Fri, 2020-05-22 at 16:15 +0530, devchef2020 d wrote:\n> > PostgreSQL : 9.5.15\n>\n> > Created Indexes on column parental_path:\n> > =================================\n> >\n> > CREATE INDEX cable_pair_parental_path_idx\n> > ON SCHEMA.TABLE_NAME\n> > USING btree\n> > (md5(parental_path) COLLATE pg_catalog.\"default\");\n> >\n> > CREATE INDEX cable_pair_parental_path_idx_fulltext\n> > ON SCHEMA.TABLE_NAME\n> > USING gist\n> > (parental_path COLLATE pg_catalog.\"default\");\n>\n> > SELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE\n> '%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' ||\n> cable_seq_id || ',%' OR parental_path LIKE '%,sheath--' ||\n> > cable_seq_id OR parental_path = 'sheath--' || cable_seq_id) ORDER BY\n> seq_no DESC LIMIT 1;\n> >\n> > Explain Plan:\n> > =============\n> >\n> > Limit (cost=108111.60..108111.61 rows=1 width=4) (actual\n> time=4597.605..4597.605 rows=0 loops=1)\n> > Output: ((seq_no + 1)), seq_no\n> > Buffers: shared hit=2967 read=69606 dirtied=1\n> > -> Sort (cost=108111.60..108113.09 rows=595 width=4) (actual\n> time=4597.603..4597.603 rows=0 loops=1)\n> > Output: ((seq_no + 1)), seq_no\n> > Sort Key: TABLE_NAME.seq_no DESC\n> > Sort Method: quicksort Memory: 25kB\n> > Buffers: shared hit=2967 read=69606 dirtied=1\n> > -> Seq Scan on SCHEMA.TABLE_NAME (cost=0.00..108108.63 rows=595\n> width=4) (actual time=4597.595..4597.595 rows=0 loops=1)\n> > Output: (seq_no + 1), seq_no\n> > Filter: ((TABLE_NAME.parental_path ~~\n> '%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n> 'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n> '%,sheath--64690'::text) OR\n> > (TABLE_NAME.parental_path = 'sheath--64690'::text))\n> > Rows Removed by Filter: 1930188\n> > Buffers: shared hit=2967 read=69606 dirtied=1\n>\n> An index on an expression can only be used if the expression is exactly\n> the same as on one\n> side of an operator in a WHERE condition.\n>\n> So your only chance with that query is to hope for a bitmap OR with an\n> index on \"parental path\".\n>\n> Two things to try:\n>\n> 1) CREATE INDEX ON table_name (parental_path text_pattern_ops);\n>\n> 2) CREATE EXTENSION pg_trgm;\n> CREATE INDEX ON table_name USING GIN (parental_path gin_trgm_ops);\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n>\n>\n\nOn Sun, May 24, 2020, 11:48 PM Laurenz Albe <[email protected]> wrote:On Fri, 2020-05-22 at 16:15 +0530, devchef2020 d wrote:\n> PostgreSQL : 9.5.15\n\n> Created Indexes on column parental_path:\n> =================================\n> \n> CREATE INDEX cable_pair_parental_path_idx\n>   ON SCHEMA.TABLE_NAME\n>   USING btree\n>   (md5(parental_path) COLLATE pg_catalog.\"default\");\n>   \n> CREATE INDEX cable_pair_parental_path_idx_fulltext\n>   ON SCHEMA.TABLE_NAME\n>   USING gist\n>   (parental_path COLLATE pg_catalog.\"default\");\n\n> SELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE '%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' || cable_seq_id || ',%' OR parental_path LIKE '%,sheath--' ||\n> cable_seq_id OR parental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;\n> \n> Explain Plan:\n> =============\n> \n> Limit  (cost=108111.60..108111.61 rows=1 width=4) (actual time=4597.605..4597.605 rows=0 loops=1)\n>  Output: ((seq_no + 1)), seq_no\n>  Buffers: shared hit=2967 read=69606 dirtied=1\n>  ->  Sort  
(cost=108111.60..108113.09 rows=595 width=4) (actual time=4597.603..4597.603 rows=0 loops=1)\n>        Output: ((seq_no + 1)), seq_no\n>        Sort Key: TABLE_NAME.seq_no DESC\n>        Sort Method: quicksort  Memory: 25kB\n>        Buffers: shared hit=2967 read=69606 dirtied=1\n>        ->  Seq Scan on SCHEMA.TABLE_NAME  (cost=0.00..108108.63 rows=595 width=4) (actual time=4597.595..4597.595 rows=0 loops=1)\n>              Output: (seq_no + 1), seq_no\n>              Filter: ((TABLE_NAME.parental_path ~~ '%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ 'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ '%,sheath--64690'::text) OR\n> (TABLE_NAME.parental_path = 'sheath--64690'::text))\n>              Rows Removed by Filter: 1930188\n>              Buffers: shared hit=2967 read=69606 dirtied=1\n\nAn index on an expression can only be used if the expression is exactly the same as on one\nside of an operator in a WHERE condition.\n\nSo your only chance with that query is to hope for a bitmap OR with an index on \"parental path\".\n\nTwo things to try:\n\n1)  CREATE INDEX ON table_name (parental_path text_pattern_ops);\n\n2)  CREATE EXTENSION pg_trgm;\n    CREATE INDEX ON table_name USING GIN (parental_path gin_trgm_ops);\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Sat, 11 Jul 2020 10:15:27 -0700", "msg_from": "Marlene Villanueva <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request to help on Query improvement suggestion." } ]
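Of the two suggestions above, the trigram index is the one that can serve the leading-wildcard branches of the posted WHERE clause. A sketch of trying it and re-checking the plan; the index name is made up, and pg_trgm must be available as a contrib extension in this database:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE INDEX cable_pair_parental_path_trgm_idx
        ON schema.table_name USING gin (parental_path gin_trgm_ops);

    ANALYZE schema.table_name;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT seq_no + 1
    FROM schema.table_name
    WHERE parental_path LIKE '%,sheath--64690,%'
       OR parental_path LIKE 'sheath--64690,%'
       OR parental_path LIKE '%,sheath--64690'
       OR parental_path = 'sheath--64690'
    ORDER BY seq_no DESC
    LIMIT 1;

If the plan switches from the sequential scan to a bitmap scan on the new index, the multi-second runtime should drop sharply, though the exact gain depends on how selective the pattern is.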
[ { "msg_contents": "Hi Team,\n\nThanks for your support.\n\nCould someone please suggest on the below query.\n\nOne of the query which was created on GIS data is taking a long time and\neven it is not taking the index as well. I have included all the required\ndetails for reference.\n\nDatabase Stack:\n===============\nPostgreSQL : 9.5.15\nPostgis: 2.2.7\n\nTable Structure:\n===================\n\nALTER TABLE SCHEMA.TABLE_NAME ADD COLUMN parental_path text;\n\nCreated Indexes on column parental_path:\n=================================\n\nCREATE INDEX cable_pair_parental_path_idx\n ON SCHEMA.TABLE_NAME\n USING btree\n (md5(parental_path) COLLATE pg_catalog.\"default\");\n\nCREATE INDEX cable_pair_parental_path_idx_fulltext\n ON SCHEMA.TABLE_NAME\n USING gist\n (parental_path COLLATE pg_catalog.\"default\");\n\nSample data in \"parental_path\" column:\n======================================\n\n'route--2309421/2951584/3373649/2511322/1915187/2696397/2623291/2420708/2144348/2294454,circuit--88458/88460,sheath--8874'\n\nActual Query:\n=============\n\nSELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE\n'%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' ||\ncable_seq_id || ',%' OR parental_path LIKE '%,sheath--' || cable_seq_id OR\nparental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;\n\nExplain Plan:\n=============\n\nLimit (cost=108111.60..108111.61 rows=1 width=4) (actual\ntime=4597.605..4597.605 rows=0 loops=1)\n Output: ((seq_no + 1)), seq_no\n Buffers: shared hit=2967 read=69606 dirtied=1\n -> Sort (cost=108111.60..108113.09 rows=595 width=4) (actual\ntime=4597.603..4597.603 rows=0 loops=1)\n Output: ((seq_no + 1)), seq_no\n Sort Key: TABLE_NAME.seq_no DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=2967 read=69606 dirtied=1\n -> *Seq Scan on SCHEMA.TABLE_NAME (cost=0.00..108108.63 rows=595\nwidth=4) (actual time=4597.595..4597.595 rows=0 loops=1)*\n Output: (seq_no + 1), seq_no\n Filter: ((TABLE_NAME.parental_path ~~\n'%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~\n'%,sheath--64690'::text) OR (TABLE_NAME.parental_path =\n'sheath--64690'::text))\n Rows Removed by Filter: 1930188\n Buffers: shared hit=2967 read=69606 dirtied=1\n\nPlease share your suggestion if I have to change or add new objects to the\ntable etc..\n\n\nThanks & Regards,\nPostgAnn.\n\nHi Team,Thanks for your support.Could someone please suggest on the below query.One of the query which was created on GIS data is taking a long time and even it is not taking the index as well. 
I have included all the required details for reference.Database Stack:===============PostgreSQL : 9.5.15Postgis: 2.2.7Table Structure:===================ALTER TABLE SCHEMA.TABLE_NAME ADD COLUMN parental_path text;Created Indexes on column parental_path:=================================CREATE INDEX cable_pair_parental_path_idx  ON SCHEMA.TABLE_NAME  USING btree  (md5(parental_path) COLLATE pg_catalog.\"default\"); CREATE INDEX cable_pair_parental_path_idx_fulltext  ON SCHEMA.TABLE_NAME  USING gist  (parental_path COLLATE pg_catalog.\"default\"); Sample data in \"parental_path\" column:====================================== 'route--2309421/2951584/3373649/2511322/1915187/2696397/2623291/2420708/2144348/2294454,circuit--88458/88460,sheath--8874'Actual Query:=============SELECT seq_no + 1 FROM SCHEMA.TABLE_NAME WHERE (parental_path LIKE '%,sheath--' || cable_seq_id || ',%' OR parental_path LIKE 'sheath--' || cable_seq_id || ',%' OR parental_path LIKE '%,sheath--' || cable_seq_id OR parental_path = 'sheath--' || cable_seq_id) ORDER BY seq_no DESC LIMIT 1;Explain Plan:=============Limit  (cost=108111.60..108111.61 rows=1 width=4) (actual time=4597.605..4597.605 rows=0 loops=1) Output: ((seq_no + 1)), seq_no Buffers: shared hit=2967 read=69606 dirtied=1 ->  Sort  (cost=108111.60..108113.09 rows=595 width=4) (actual time=4597.603..4597.603 rows=0 loops=1)       Output: ((seq_no + 1)), seq_no       Sort Key: TABLE_NAME.seq_no DESC       Sort Method: quicksort  Memory: 25kB       Buffers: shared hit=2967 read=69606 dirtied=1       ->  Seq Scan on SCHEMA.TABLE_NAME  (cost=0.00..108108.63 rows=595 width=4) (actual time=4597.595..4597.595 rows=0 loops=1)             Output: (seq_no + 1), seq_no             Filter: ((TABLE_NAME.parental_path ~~ '%,sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ 'sheath--64690,%'::text) OR (TABLE_NAME.parental_path ~~ '%,sheath--64690'::text) OR (TABLE_NAME.parental_path = 'sheath--64690'::text))             Rows Removed by Filter: 1930188             Buffers: shared hit=2967 read=69606 dirtied=1Please share your suggestion if I have to change or add new objects to the table etc..Thanks & Regards,PostgAnn.", "msg_date": "Fri, 22 May 2020 16:23:16 +0530", "msg_from": "postggen2020 s <[email protected]>", "msg_from_op": true, "msg_subject": "Request to help on GIS Query improvement suggestion." }, { "msg_contents": "Your indexes and operators are not compatible. You have added a btree index\non md5 function result and are not using md5 in your query, and also using\nLIKE operator not one of the supported ones. I believe it might use a btree\noperator (plain value, not md5 result) if you are always searching for\n\"string starts with ____ but I don't know what it ends with\" but you can't\npossibly use a btree index where you are putting a wild card at the front.\n\nhttps://www.postgresql.org/docs/9.5/indexes-types.html\n\na gist index operators supported-\nhttps://www.postgresql.org/docs/9.5/gist-builtin-opclasses.html\n\nHere's a whole page on full text search, it would be worth a read-\nhttps://www.postgresql.org/docs/9.5/textsearch-tables.html#TEXTSEARCH-TABLES-INDEX\n\nYour indexes and operators are not compatible. You have added a btree index on md5 function result and are not using md5 in your query, and also using LIKE operator not one of the supported ones. 
I believe it might use a btree operator (plain value, not md5 result) if you are always searching for \"string starts with ____ but I don't know what it ends with\" but you can't possibly use a btree index where you are putting a wild card at the front.https://www.postgresql.org/docs/9.5/indexes-types.htmla gist index operators supported-https://www.postgresql.org/docs/9.5/gist-builtin-opclasses.htmlHere's a whole page on full text search, it would be worth a read-https://www.postgresql.org/docs/9.5/textsearch-tables.html#TEXTSEARCH-TABLES-INDEX", "msg_date": "Fri, 22 May 2020 08:14:29 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Request to help on GIS Query improvement suggestion." } ]
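To make the operator point concrete: a btree index (including one declared with text_pattern_ops) can only help LIKE patterns anchored at the start of the string, so of the four branches in the posted query only the 'sheath--…,%' one could ever use it; the branches that begin with '%' need the trigram or full-text approaches from the linked pages. A small sketch of the prefix-only case, with placeholder names:

    CREATE INDEX parental_path_prefix_idx
        ON schema.table_name (parental_path text_pattern_ops);

    -- indexable: pattern anchored at the start of the value
    SELECT seq_no
    FROM schema.table_name
    WHERE parental_path LIKE 'sheath--64690,%';

    -- not indexable with btree: leading wildcard
    -- WHERE parental_path LIKE '%,sheath--64690,%'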
[ { "msg_contents": "Apologies for the cross-post to the general list.\n\nI'm keen to know if there are any good reasons apart from disk space and\npossible replication connection overhead to avoid the strategy proposed\nbelow.\n\nWe have quite a few databases of type a and many of type b in a cluster.\nBoth a and b types are fairly complex and are different solutions to a\nsimilar problem domain. All the databases are very read-centric, and all\ndatabase interaction is currently through plpgsql with no materialised\ndata.\n\nSome organisations have several type a and many type b databases, and\nneed to query these in a homogeneous manner. We presently do this with\nmany middleware requests or pl/proxy. An a or b type database belongs to\n0 or 1 organisations.\n\nMaking a and b generally the same would be a very big project.\nConsequently I'm discussing materialising a subset of data in a common\nformat between the two database types and shipping that data to\norganisation databases. This would have the benefit of providing a\ncommon data interface and speeding up queries for all database types.\nUsers would have faster queries, and it would be a big time saver for\nour development team, who presently have to deal with three quite\ndifferent data APIs.\n\nPresently I've been thinking of using triggers or materialized views in\neach database to materialise data into a \"matview\" schema which is then\nshipped via logical replication to an organisation database when\nrequired. New columns in the matview schema tables would ensure replica\nidentity uniqueness and allow the data to be safely stored in common\ntables in the organisation database.\n\nA few issues I foresee with this approach include:\n\n* requiring two to three times current storage for materialisation\n (the cluster is currently ~250GB)\n\n* having to have many logical replication slots\n (we sometimes suffer from pl/proxy connection storms)\n\nCommentary gratefully received,\nRory\n\n\n\n\n\n", "msg_date": "Fri, 22 May 2020 15:48:20 +0100", "msg_from": "Rory Campbell-Lange <[email protected]>", "msg_from_op": true, "msg_subject": "Strategy for materialisation and centralisation of data" } ]
[ { "msg_contents": "Just in case someone is interested enough to answer this. Does anyone know\nif the performance for a date column vs a timestamp column as a partition\nkey is large? What i mean with large is that say you have 6 partitions with\n10GB each. Would it be a 10 second+ difference? An explanation of how this\ninternally works would be appreciated.\n\nJust in case someone is interested enough to answer this. Does anyone know if the performance for a date column vs a timestamp column as a partition key is large? What i mean with large is that say you have 6 partitions with 10GB each. Would it be a 10 second+ difference? An explanation of how this internally works would be appreciated.", "msg_date": "Sun, 24 May 2020 23:50:05 -0400", "msg_from": "Cedric Leong <[email protected]>", "msg_from_op": true, "msg_subject": "Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "Cedric Leong <[email protected]> writes:\n> Just in case someone is interested enough to answer this. Does anyone know\n> if the performance for a date column vs a timestamp column as a partition\n> key is large?\n\nI doubt it's even measurable, at least on 64-bit machines. You're\nbasically talking about 32-bit integer comparisons vs 64-bit integer\ncomparisons.\n\nOn a 32-bit machine it's possible that an index on a date column\nwill be physically smaller, so you could get some wins from reduced\nI/O. But on (most?) 64-bit machines that difference goes away too,\nbecause of alignment restrictions.\n\nAs always, YMMV; it never hurts to do your own testing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 25 May 2020 00:48:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "Somewhat unrelated but note to anyone who wants to swap out partition keys.\nDon't create a clone of the table with the new partition key and insert\ndata. It messes up the query planner massively and makes everything much\nslower.\n\nOn Mon, May 25, 2020 at 12:48 AM Tom Lane <[email protected]> wrote:\n\n> Cedric Leong <[email protected]> writes:\n> > Just in case someone is interested enough to answer this. Does anyone\n> know\n> > if the performance for a date column vs a timestamp column as a partition\n> > key is large?\n>\n> I doubt it's even measurable, at least on 64-bit machines. You're\n> basically talking about 32-bit integer comparisons vs 64-bit integer\n> comparisons.\n>\n> On a 32-bit machine it's possible that an index on a date column\n> will be physically smaller, so you could get some wins from reduced\n> I/O. But on (most?) 64-bit machines that difference goes away too,\n> because of alignment restrictions.\n>\n> As always, YMMV; it never hurts to do your own testing.\n>\n> regards, tom lane\n>\n\nSomewhat unrelated but note to anyone who wants to swap out partition keys. Don't create a clone of the table with the new partition key and insert data. It messes up the query planner massively and makes everything much slower.On Mon, May 25, 2020 at 12:48 AM Tom Lane <[email protected]> wrote:Cedric Leong <[email protected]> writes:\n> Just in case someone is interested enough to answer this. Does anyone know\n> if the performance for a date column vs a timestamp column as a partition\n> key is large?\n\nI doubt it's even measurable, at least on 64-bit machines.  
You're\nbasically talking about 32-bit integer comparisons vs 64-bit integer\ncomparisons.\n\nOn a 32-bit machine it's possible that an index on a date column\nwill be physically smaller, so you could get some wins from reduced\nI/O.  But on (most?) 64-bit machines that difference goes away too,\nbecause of alignment restrictions.\n\nAs always, YMMV; it never hurts to do your own testing.\n\n                        regards, tom lane", "msg_date": "Fri, 5 Jun 2020 22:12:26 -0400", "msg_from": "Cedric Leong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "On Sat, 6 Jun 2020 at 14:12, Cedric Leong <[email protected]> wrote:\n> Somewhat unrelated but note to anyone who wants to swap out partition keys. Don't create a clone of the table with the new partition key and insert data. It messes up the query planner massively and makes everything much slower.\n\nThat complaint would have more meaning if you'd mentioned which\nversion of PostgreSQL you're using. The performance of partitioning in\nPostgreSQL has changed significantly over the past 3 releases. Also\nwould be useful to know what you've actually done (actual commands).\nI can't imagine it makes *everything* slower, so it might be good to\nmention what is actually slower.\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jun 2020 14:16:50 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "It's less of a complaint rather than just a warning not to do what I did.\n\nVersion:\nPostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit\n\nThe actual command list would probably be impractical to put in here just\nbecause the majority of it would just be creating a large amount of\npartition tables. But in summary what i've done is basically this:\nExisting database has a partitioned fact table\n1. Create an exact clone of that partitioned fact table which includes all\nthe same indexes, columns, and partitioned tables\n2. Change the partitioned table's partition key from an indexed date column\nto an indexed timestamp without timezone column\n3. Do an insert into from the old partitioned fact table to the new\npartitioned fact table which includes all the same rows (insert into since\ni wanted the timestamp without timezone column to be in a new timezone)\n4. Switch the names of the tables so the new one will be the one that's used\n5. VACUUM FULL; ANALYZE;\n\nFor my use case which is a data warehouse star schema, this fact table is\nbasically the base table of every report. To be more specific, the reports\nI've tested on varied from 2x slower to 4x slower. From what I see so far\nthat's because the query plan is drastically different for both. An example\nof this test would look like this: https://explain.depesz.com/s/6rP8 and\nhttps://explain.depesz.com/s/cLUY\nThese tests are running the exact same query on two different tables with\nthe exception that they use their respective partition keys.\n\n\nOn Fri, Jun 5, 2020 at 10:17 PM David Rowley <[email protected]> wrote:\n\n> On Sat, 6 Jun 2020 at 14:12, Cedric Leong <[email protected]> wrote:\n> > Somewhat unrelated but note to anyone who wants to swap out partition\n> keys. Don't create a clone of the table with the new partition key and\n> insert data. 
It messes up the query planner massively and makes everything\n> much slower.\n>\n> That complaint would have more meaning if you'd mentioned which\n> version of PostgreSQL you're using. The performance of partitioning in\n> PostgreSQL has changed significantly over the past 3 releases. Also\n> would be useful to know what you've actually done (actual commands).\n> I can't imagine it makes *everything* slower, so it might be good to\n> mention what is actually slower.\n>\n> David\n>\n\nIt's less of a complaint rather than just a warning not to do what I did.Version:PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bitThe actual command list would probably be impractical to put in here just because the majority of it would just be creating a large amount of partition tables. But in summary what i've done is basically this:Existing database has a partitioned fact table1.  Create an exact clone of that partitioned fact table which includes all the same indexes, columns, and partitioned tables 2. Change the partitioned table's partition key from an indexed date column to an indexed timestamp without timezone column3. Do an insert into from the old partitioned fact table to the new partitioned fact table which includes all the same rows (insert into since i wanted the timestamp without timezone column to be in a new timezone)4. Switch the names of the tables so the new one will be the one that's used5. VACUUM FULL; ANALYZE;For my use case which is a data warehouse star schema, this fact table is basically the base table of every report. To be more specific, the reports I've tested on varied from 2x slower to 4x slower. From what I see so far that's because the query plan is drastically different for both. An example of this test would look like this: https://explain.depesz.com/s/6rP8 and https://explain.depesz.com/s/cLUYThese tests are running the exact same query on two different tables with the exception that they use their respective partition keys.On Fri, Jun 5, 2020 at 10:17 PM David Rowley <[email protected]> wrote:On Sat, 6 Jun 2020 at 14:12, Cedric Leong <[email protected]> wrote:\n> Somewhat unrelated but note to anyone who wants to swap out partition keys. Don't create a clone of the table with the new partition key and insert data. It messes up the query planner massively and makes everything much slower.\n\nThat complaint would have more meaning if you'd mentioned which\nversion of PostgreSQL you're using. The performance of partitioning in\nPostgreSQL has changed significantly over the past 3 releases. Also\nwould be useful to know what you've actually done (actual commands).\nI can't imagine it makes *everything* slower, so it might be good to\nmention what is actually slower.\n\nDavid", "msg_date": "Fri, 5 Jun 2020 22:49:46 -0400", "msg_from": "Cedric Leong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "On Sat, 6 Jun 2020 at 14:49, Cedric Leong <[email protected]> wrote:\n> It's less of a complaint rather than just a warning not to do what I did.\n\nMy point was really that nobody really knew what you did or what you\ndid it on. So it didn't seem like a worthwhile warning as it\ncompletely lacked detail.\n\n> These tests are running the exact same query on two different tables with the exception that they use their respective partition keys.\n\nAre you sure? 
It looks like the old one does WHERE date =\n((now())::date - '7 days'::interval) and the new version does\n(date(created_at) = ((now())::date - '7 days'::interval). I guess you\nrenamed date to \"created_at\" and changed the query to use date(). If\nthat expression is not indexed then I imagine that would be a good\nreason for the planner to have moved away from using the index on that\ncolumn. Also having date(created_at) will also not allow run-time\npruning to work since your partition key is \"created_at\".\n\nYou might be able to change the query to query a range of value on the\nnew timestamp column. This will allow you to get rid of the date()\nfunction. For example:\n\nwhere created_at >= date_trunc('day', now() - '7 days'::interval) and\ncreated_at < date_trunc('day', now() - '6 days'::interval)\n\nDavid\n\n\n", "msg_date": "Sat, 6 Jun 2020 15:13:40 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" }, { "msg_contents": "I can confirm that was the issue, after removing the expression and using\nonly what was indexed it definitely fixed the query plan. I appreciate all\nthe help you've given me, I didn't really think to look there but it makes\na ton of sense that a filter on the database would only work well if it's\nindexed.\n\nThanks again,\n\nOn Fri, Jun 5, 2020 at 11:13 PM David Rowley <[email protected]> wrote:\n\n> On Sat, 6 Jun 2020 at 14:49, Cedric Leong <[email protected]> wrote:\n> > It's less of a complaint rather than just a warning not to do what I did.\n>\n> My point was really that nobody really knew what you did or what you\n> did it on. So it didn't seem like a worthwhile warning as it\n> completely lacked detail.\n>\n> > These tests are running the exact same query on two different tables\n> with the exception that they use their respective partition keys.\n>\n> Are you sure? It looks like the old one does WHERE date =\n> ((now())::date - '7 days'::interval) and the new version does\n> (date(created_at) = ((now())::date - '7 days'::interval). I guess you\n> renamed date to \"created_at\" and changed the query to use date(). If\n> that expression is not indexed then I imagine that would be a good\n> reason for the planner to have moved away from using the index on that\n> column. Also having date(created_at) will also not allow run-time\n> pruning to work since your partition key is \"created_at\".\n>\n> You might be able to change the query to query a range of value on the\n> new timestamp column. This will allow you to get rid of the date()\n> function. For example:\n>\n> where created_at >= date_trunc('day', now() - '7 days'::interval) and\n> created_at < date_trunc('day', now() - '6 days'::interval)\n>\n> David\n>\n\nI can confirm that was the issue, after removing the expression and using only what was indexed it definitely fixed the query plan. I appreciate all the help you've given me, I didn't really think to look there but it makes a ton of sense that a filter on the database would only work well if it's indexed.Thanks again,On Fri, Jun 5, 2020 at 11:13 PM David Rowley <[email protected]> wrote:On Sat, 6 Jun 2020 at 14:49, Cedric Leong <[email protected]> wrote:\n> It's less of a complaint rather than just a warning not to do what I did.\n\nMy point was really that nobody really knew what you did or what you\ndid it on. 
So it didn't seem like a worthwhile warning as it\ncompletely lacked detail.\n\n> These tests are running the exact same query on two different tables with the exception that they use their respective partition keys.\n\nAre you sure?  It looks like the old one does WHERE date =\n((now())::date - '7 days'::interval) and the new version does\n(date(created_at) = ((now())::date - '7 days'::interval). I guess you\nrenamed date to \"created_at\" and changed the query to use date(). If\nthat expression is not indexed then I imagine that would be a good\nreason for the planner to have moved away from using the index on that\ncolumn. Also having date(created_at) will also not allow run-time\npruning to work since your partition key is \"created_at\".\n\nYou might be able to change the query to query a range of value on the\nnew timestamp column. This will allow you to get rid of the date()\nfunction. For example:\n\nwhere created_at >= date_trunc('day', now() - '7 days'::interval) and\ncreated_at < date_trunc('day', now() - '6 days'::interval)\n\nDavid", "msg_date": "Fri, 5 Jun 2020 23:56:18 -0400", "msg_from": "Cedric Leong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Date vs Timestamp without timezone Partition Key" } ]
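A minimal sketch of the two query shapes discussed in this thread, assuming a partitioned table named fact_table with a timestamp-without-time-zone partition key created_at (the real table and query were not posted; the names are placeholders). Wrapping the partition key in date() hides it from both the btree index and run-time partition pruning; comparing the raw column against a half-open range, as David Rowley suggests, keeps both available:

-- Slow shape: the key is buried inside an expression, so the index on
-- created_at and run-time partition pruning cannot be used (unless an
-- expression index on date(created_at) is added).
SELECT count(*)
FROM fact_table
WHERE date(created_at) = now()::date - interval '7 days';

-- Faster shape: a half-open range on the raw partition key.
SELECT count(*)
FROM fact_table
WHERE created_at >= date_trunc('day', now() - interval '7 days')
  AND created_at <  date_trunc('day', now() - interval '6 days');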
[ { "msg_contents": "Hi PostgreSQL community,\n\nI have a system that was running version 9.6.17 running on a system with\n48gb of memory and spinning disks front-ed by a HW RAID controller with\nNVRAM cache. We moved to a new box running version 12.3 on a system with\n64gb of memory and NVME SSD drives. Here are the system config options:\n\nOLD:\nshared_buffers = 2048MB # min 128kB\nwork_mem = 128MB # min 64kB\nmaintenance_work_mem = 1024MB # min 1MB\neffective_io_concurrency = 8 # 1-1000; 0 disables prefetching\nmax_parallel_workers_per_gather = 0 # taken from max_worker_processes\neffective_cache_size = 24GB\ndefault_statistics_target = 500 # range 1-10000\nfrom_collapse_limit = 30\njoin_collapse_limit = 30 # 1 disables collapsing of explicit\nseq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\nrandom_page_cost = 4.0\t\t\t# same scale as above\n\nNEW:\nshared_buffers = 12GB # min 128kB\nwork_mem = 128MB # min 64kB\nmaintenance_work_mem = 2GB # min 1MB\neffective_io_concurrency = 200 # 1-1000; 0 disables prefetching\nmax_worker_processes = 24 # (change requires restart)\nmax_parallel_workers_per_gather = 4 # taken from max_parallel_workers\nmax_parallel_workers = 24 # maximum number of max_worker_processes that\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 1.1 # same scale as above for SSDs\neffective_cache_size = 36GB\ndefault_statistics_target = 500 # range 1-10000\nfrom_collapse_limit = 30\njoin_collapse_limit = 30 # 1 disables collapsing of explicit\n\nAs far as the schema goes, it uses an id field populated by a sequence as\nthe primary key for everything. Here are the definitions for the tables\ninvolved in the query:\n\n Table \"public.users\"\n Column | Type | Modifiers \n---------------------+-----------------------------+------------------------------------------------------------\n id | integer | not null default nextval(('users_id_seq'::text)::regclass)\n name | character varying(200) | not null\n password | character varying(256) | \n comments | text | \n signature | text | \n emailaddress | character varying(120) | \n freeformcontactinfo | text | \n organization | character varying(200) | \n realname | character varying(120) | \n nickname | character varying(16) | \n lang | character varying(16) | \n gecos | character varying(16) | \n homephone | character varying(30) | \n workphone | character varying(30) | \n mobilephone | character varying(30) | \n pagerphone | character varying(30) | \n address1 | character varying(200) | \n address2 | character varying(200) | \n city | character varying(100) | \n state | character varying(100) | \n zip | character varying(16) | \n country | character varying(50) | \n timezone | character varying(50) | \n creator | integer | not null default 0\n created | timestamp without time zone | \n lastupdatedby | integer | not null default 0\n lastupdated | timestamp without time zone | \n authtoken | character varying(16) | \n smimecertificate | text | \nIndexes:\n \"users_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"users1\" UNIQUE, btree (lower(name::text))\n \"users2\" btree (lower(emailaddress::text))\n \"users_email_trgm\" gin (emailaddress gin_trgm_ops)\n\n Table \"public.principals\"\n Column | Type | Modifiers \n---------------+-----------------------+-----------------------------------------------------------------\n id | integer | not null default nextval(('principals_id_seq'::text)::regclass)\n principaltype | character varying(16) | not null\n disabled | smallint | not null default 0\nIndexes:\n 
\"principals_pkey\" PRIMARY KEY, btree (id) CLUSTER\n\n Table \"public.cachedgroupmembers\"\n Column | Type | Modifiers \n-------------------+----------+-------------------------------------------------------------------------\n id | integer | not null default nextval(('cachedgroupmembers_id_seq'::text)::regclass)\n groupid | integer | \n memberid | integer | \n via | integer | \n immediateparentid | integer | \n disabled | smallint | not null default 0\nIndexes:\n \"cachedgroupmembers_pkey\" PRIMARY KEY, btree (id)\n \"cachedgroupmembers1\" btree (memberid, immediateparentid)\n \"cachedgroupmembers4\" btree (memberid, groupid, disabled)\n \"disgroumem\" btree (groupid, memberid, disabled)\n \"shredder_cgm2\" btree (immediateparentid, memberid)\n \"shredder_cgm3\" btree (via, id)\n\n Table \"public.acl\"\n Column | Type | Modifiers \n---------------+-----------------------------+----------------------------------------------------------\n id | integer | not null default nextval(('acl_id_seq'::text)::regclass)\n principaltype | character varying(25) | not null\n principalid | integer | not null\n rightname | character varying(25) | not null\n objecttype | character varying(25) | not null\n objectid | integer | not null default 0\n creator | integer | not null default 0\n created | timestamp without time zone | \n lastupdatedby | integer | not null default 0\n lastupdated | timestamp without time zone | \nIndexes:\n \"acl_pkey\" PRIMARY KEY, btree (id)\n \"acl1\" btree (rightname, objecttype, objectid, principaltype, principalid) CLUSTER\n\n\nAll of the tables have been analyzed and frozen. It looks like a problem with using\na nested loop based on poor estimates. If I disable nested loops, the query only takes\n2s and not the 69s with them enabled. Of course, both of those are a far cry from the\n0.025s on the old system. I know that the old system is chosing the plan based on\nstatistics but at least the times were okay. Is there anyway to provide the system\nwith the statistics to make a better choice on the new system? 
Here are the EXPLAIN\nANALYZE results for the old system and two for the new system, one with and one\nwithout nested loops:\n\nOLD:\nEXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT main.* FROM Users main CROSS JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id = main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON ( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN CachedGroupMembers CachedGroupMembers_4 ON ( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE ((ACL_3.ObjectType = 'RT::Ticket' AND ACL_3.ObjectId = 950423) OR (ACL_3.ObjectType = 'RT::Queue' AND ACL_3.ObjectId = 1) OR (ACL_3.ObjectType = 'RT::System' AND ACL_3.ObjectId = 1)) AND (ACL_3.PrincipalId = CachedGroupMembers_4.GroupId) AND (ACL_3.PrincipalType = 'Group') AND (ACL_3.RightName = 'OwnTicket') AND (CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId = '4') AND (CachedGroupMembers_4.Disabled = '0') AND (Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User') AND (Principals_1.id != '1') ORDER BY main.Name ASC;\n \n QUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------\n Unique (cost=4190.59..4190.66 rows=1 width=1268) (actual time=18.279..18.864 rows=324 loops=1)\n Buffers: shared hit=5389\n -> Sort (cost=4190.59..4190.59 rows=1 width=1268) (actual time=18.279..18.354 rows=560 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.gecos, main.home\nphone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.creator, main.created, main.lastupdatedby, main.lastupda\nted, main.authtoken, main.smimecertificate\n Sort Method: quicksort Memory: 238kB\n Buffers: shared hit=5373\n -> Nested Loop (cost=3889.42..4190.58 rows=1 width=1268) (actual time=7.653..15.122 rows=560 loops=1)\n Join Filter: (main.id = principals_1.id)\n Buffers: shared hit=5329\n -> Hash Join (cost=3888.99..4175.39 rows=31 width=1276) (actual time=7.643..9.681 rows=560 loops=1)\n Hash Cond: (cachedgroupmembers_4.memberid = main.id)\n Buffers: shared hit=3086\n -> Nested Loop (cost=0.72..117.66 rows=45103 width=4) (actual time=0.102..1.693 rows=674 loops=1)\n Buffers: shared hit=615\n -> Index Only Scan using acl1 on acl acl_3 (cost=0.29..53.93 rows=14 width=4) (actual time=0.054..0.427 rows=3 loops=1)\n Index Cond: ((rightname = 'OwnTicket'::text) AND (principaltype = 'Group'::text))\n Filter: ((((objecttype)::text = 'RT::Ticket'::text) AND (objectid = 950423)) OR (((objecttype)::text = 'RT::Queue'::text) AND (objectid = 1)) OR (((objecttype)::text = 'RT::Syste\nm'::text) AND (objectid = 1)))\n Rows Removed by Filter: 487\n Heap Fetches: 126\n Buffers: shared hit=37\n -> Index Only Scan using disgroumem on cachedgroupmembers cachedgroupmembers_4 (cost=0.43..4.51 rows=4 width=8) (actual time=0.024..0.382 rows=225 loops=3)\n Index Cond: ((groupid = acl_3.principalid) AND (disabled = '0'::smallint))\n Heap Fetches: 446\n Buffers: shared hit=578\n 
-> Hash (cost=3885.58..3885.58 rows=216 width=1272) (actual time=7.526..7.526 rows=520 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 144kB\n Buffers: shared hit=2471\n -> Nested Loop (cost=0.85..3885.58 rows=216 width=1272) (actual time=0.041..6.940 rows=520 loops=1)\n Buffers: shared hit=2471\n -> Index Only Scan using disgroumem on cachedgroupmembers cachedgroupmembers_2 (cost=0.43..23.49 rows=553 width=4) (actual time=0.018..1.164 rows=522 loops=1)\n Index Cond: ((groupid = 4) AND (disabled = '0'::smallint))\n Heap Fetches: 276\n Buffers: shared hit=383\n -> Index Scan using users_pkey on users main (cost=0.42..6.97 rows=1 width=1268) (actual time=0.010..0.010 rows=1 loops=522)\n Index Cond: (id = cachedgroupmembers_2.memberid)\n Buffers: shared hit=2088\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.43..0.48 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=560)\n Index Cond: (id = cachedgroupmembers_4.memberid)\n Filter: ((id <> 1) AND (disabled = '0'::smallint) AND ((principaltype)::text = 'User'::text))\n Buffers: shared hit=2243\n Planning time: 5.409 ms\n Execution time: 19.080 ms\n(42 rows)\n\nNEW:\nEXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT main.* FROM Users main CROSS JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id = main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON ( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN CachedGroupMembers CachedGroupMembers_4 ON ( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE ((ACL_3.ObjectType = 'RT::Ticket' AND ACL_3.ObjectId = 950423) OR (ACL_3.ObjectType = 'RT::Queue' AND ACL_3.ObjectId = 1) OR (ACL_3.ObjectType = 'RT::System' AND ACL_3.ObjectId = 1)) AND (ACL_3.PrincipalId = CachedGroupMembers_4.GroupId) AND (ACL_3.PrincipalType = 'Group') AND (ACL_3.RightName = 'OwnTicket') AND (CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId = '4') AND (CachedGroupMembers_4.Disabled = '0') AND (Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User') AND (Principals_1.id != '1') ORDER BY main.Name ASC;\n \n QUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------\n Unique (cost=1276.71..1276.78 rows=1 width=1298) (actual time=69483.412..69483.990 rows=324 loops=1)\n Buffers: shared hit=5327437 dirtied=2\n -> Sort (cost=1276.71..1276.71 rows=1 width=1298) (actual time=69483.409..69483.449 rows=560 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.gecos, main.home\nphone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.creator, main.created, main.lastupdatedby, main.lastupda\nted, main.authtoken, main.smimecertificate\n Sort Method: quicksort Memory: 238kB\n Buffers: shared hit=5327421 dirtied=2\n -> Nested Loop (cost=2.00..1276.70 rows=1 width=1298) (actual time=0.458..69480.206 rows=560 loops=1)\n Buffers: shared hit=5327405 dirtied=2\n -> Nested Loop (cost=1.71..1263.36 rows=2 
width=1302) (actual time=0.075..413.525 rows=886318 loops=1)\n Buffers: shared hit=9496 dirtied=2\n -> Nested Loop (cost=1.28..1262.07 rows=1 width=1306) (actual time=0.053..10.123 rows=519 loops=1)\n Buffers: shared hit=4179\n -> Nested Loop (cost=0.85..1108.38 rows=208 width=1302) (actual time=0.043..5.135 rows=520 loops=1)\n Buffers: shared hit=2099\n -> Index Only Scan using disgroumem on cachedgroupmembers cachedgroupmembers_2 (cost=0.43..15.43 rows=530 width=4) (actual time=0.020..0.258 rows=522 loops=1)\n Index Cond: ((groupid = 4) AND (disabled = '0'::smallint))\n Heap Fetches: 7\n Buffers: shared hit=13\n -> Index Scan using users_pkey on users main (cost=0.42..2.06 rows=1 width=1298) (actual time=0.008..0.008 rows=1 loops=522)\n Index Cond: (id = cachedgroupmembers_2.memberid)\n Buffers: shared hit=2086\n -> Index Scan using principals_pkey on principals principals_1 (cost=0.43..0.74 rows=1 width=4) (actual time=0.009..0.009 rows=1 loops=520)\n Index Cond: (id = main.id)\n Filter: ((id <> 1) AND (disabled = '0'::smallint) AND ((principaltype)::text = 'User'::text))\n Rows Removed by Filter: 0\n Buffers: shared hit=2080\n -> Index Only Scan using cachedgroupmembers4 on cachedgroupmembers cachedgroupmembers_4 (cost=0.43..1.08 rows=21 width=8) (actual time=0.010..0.384 rows=1708 loops=519)\n Index Cond: ((memberid = principals_1.id) AND (disabled = '0'::smallint))\n Heap Fetches: 2309\n Buffers: shared hit=5317 dirtied=2\n -> Index Only Scan using acl1 on acl acl_3 (cost=0.29..6.66 rows=1 width=4) (actual time=0.078..0.078 rows=0 loops=886318)\n Index Cond: ((rightname = 'OwnTicket'::text) AND (principaltype = 'Group'::text) AND (principalid = cachedgroupmembers_4.groupid))\n Filter: ((((objecttype)::text = 'RT::Ticket'::text) AND (objectid = 950423)) OR (((objecttype)::text = 'RT::Queue'::text) AND (objectid = 1)) OR (((objecttype)::text = 'RT::System'::text) AN\nD (objectid = 1)))\n Rows Removed by Filter: 0\n Heap Fetches: 0\n Buffers: shared hit=5317909\n Planning Time: 3.099 ms\n Execution Time: 69484.104 ms\n(38 rows)\n\nTime: 69488.511 ms (01:09.489)\n\nNEW (no nested):\nEXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT main.* FROM Users main CROSS JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id = main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON ( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN CachedGroupMembers CachedGroupMembers_4 ON ( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE ((ACL_3.ObjectType = 'RT::Ticket' AND ACL_3.ObjectId = 950423) OR (ACL_3.ObjectType = 'RT::Queue' AND ACL_3.ObjectId = 1) OR (ACL_3.ObjectType = 'RT::System' AND ACL_3.ObjectId = 1)) AND (ACL_3.PrincipalId = CachedGroupMembers_4.GroupId) AND (ACL_3.PrincipalType = 'Group') AND (ACL_3.RightName = 'OwnTicket') AND (CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId = '4') AND (CachedGroupMembers_4.Disabled = '0') AND (Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User') AND (Principals_1.id != '1') ORDER BY main.Name ASC;\n \n QUERY PLAN \n 
\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------\n Unique (cost=117008.39..117008.47 rows=1 width=1298) (actual time=2334.366..2334.913 rows=324 loops=1)\n Buffers: shared hit=66381 dirtied=3\n -> Sort (cost=117008.39..117008.39 rows=1 width=1298) (actual time=2334.364..2334.398 rows=560 loops=1)\n Sort Key: main.name, main.id, main.password, main.comments, main.signature, main.emailaddress, main.freeformcontactinfo, main.organization, main.realname, main.nickname, main.lang, main.gecos, main.home\nphone, main.workphone, main.mobilephone, main.pagerphone, main.address1, main.address2, main.city, main.state, main.zip, main.country, main.timezone, main.creator, main.created, main.lastupdatedby, main.lastupda\nted, main.authtoken, main.smimecertificate\n Sort Method: quicksort Memory: 238kB\n Buffers: shared hit=66365 dirtied=3\n -> Hash Join (cost=113207.91..117008.38 rows=1 width=1298) (actual time=1943.567..2331.572 rows=560 loops=1)\n Hash Cond: (principals_1.id = cachedgroupmembers_2.memberid)\n Buffers: shared hit=66319 dirtied=3\n -> Gather (cost=113185.86..116953.73 rows=49 width=1306) (actual time=1903.765..2323.358 rows=564 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n Buffers: shared hit=66306 dirtied=3\n -> Hash Join (cost=112185.86..115948.83 rows=29 width=1306) (actual time=1894.728..2274.198 rows=282 loops=2)\n Hash Cond: (cachedgroupmembers_4.groupid = acl_3.principalid)\n Buffers: shared hit=66306 dirtied=3\n -> Parallel Hash Join (cost=112165.88..115850.62 rows=3897 width=1310) (actual time=1879.642..2158.184 rows=1294258 loops=2)\n Hash Cond: (main.id = principals_1.id)\n Buffers: shared hit=66262 dirtied=3\n -> Parallel Seq Scan on users main (cost=0.00..3399.00 rows=73600 width=1298) (actual time=0.014..8.917 rows=62564 loops=2)\n Buffers: shared hit=2663\n -> Parallel Hash (cost=111510.39..111510.39 rows=52439 width=12) (actual time=1878.946..1878.946 rows=1294262 loops=2)\n Buckets: 4194304 (originally 262144) Batches: 1 (originally 1) Memory Usage: 184960kB\n Buffers: shared hit=63599 dirtied=3\n -> Parallel Hash Join (cost=44295.31..111510.39 rows=52439 width=12) (actual time=232.801..1399.686 rows=1294262 loops=2)\n Hash Cond: (cachedgroupmembers_4.memberid = principals_1.id)\n Buffers: shared hit=63599 dirtied=3\n -> Parallel Seq Scan on cachedgroupmembers cachedgroupmembers_4 (cost=0.00..62869.68 rows=1655392 width=8) (actual time=0.023..557.151 rows=3309488 loops=2)\n Filter: (disabled = '0'::smallint)\n Rows Removed by Filter: 26\n Buffers: shared hit=42177\n -> Parallel Hash (cost=43789.21..43789.21 rows=40488 width=4) (actual time=231.914..231.914 rows=61984 loops=2)\n Buckets: 131072 Batches: 1 Memory Usage: 5920kB\n Buffers: shared hit=21422 dirtied=3\n -> Parallel Seq Scan on principals principals_1 (cost=0.00..43789.21 rows=40488 width=4) (actual time=0.021..212.001 rows=61984 loops=2)\n Filter: ((id <> 1) AND (disabled = '0'::smallint) AND ((principaltype)::text = 'User'::text))\n Rows Removed by Filter: 1919263\n Buffers: shared hit=21422 dirtied=3\n -> Hash (cost=19.80..19.80 
rows=14 width=4) (actual time=14.786..14.786 rows=3 loops=2)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=17\n -> Bitmap Heap Scan on acl acl_3 (cost=4.41..19.80 rows=14 width=4) (actual time=14.766..14.778 rows=3 loops=2)\n Recheck Cond: ((((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Ticket'::text) AND (objectid = 950423) AND ((principaltype)::text = 'Group'::text)) O\nR (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 1) AND ((principaltype)::text = 'Group'::text)) OR (((rightname)::text = 'OwnTicket'::text) AND ((objecttyp\ne)::text = 'RT::System'::text) AND (objectid = 1) AND ((principaltype)::text = 'Group'::text)))\n Heap Blocks: exact=2\n Buffers: shared hit=17\n -> BitmapOr (cost=4.41..4.41 rows=14 width=0) (actual time=0.072..0.072 rows=0 loops=2)\n Buffers: shared hit=13\n -> Bitmap Index Scan on acl1 (cost=0.00..1.40 rows=1 width=0) (actual time=0.044..0.044 rows=0 loops=2)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Ticket'::text) AND (objectid = 950423) AND ((principaltype)::text = 'Group':\n:text))\n Buffers: shared hit=5\n -> Bitmap Index Scan on acl1 (cost=0.00..1.59 rows=14 width=0) (actual time=0.016..0.016 rows=2 loops=2)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::Queue'::text) AND (objectid = 1) AND ((principaltype)::text = 'Group'::text)\n)\n Buffers: shared hit=4\n -> Bitmap Index Scan on acl1 (cost=0.00..1.40 rows=1 width=0) (actual time=0.009..0.010 rows=1 loops=2)\n Index Cond: (((rightname)::text = 'OwnTicket'::text) AND ((objecttype)::text = 'RT::System'::text) AND (objectid = 1) AND ((principaltype)::text = 'Group'::text\n))\n Buffers: shared hit=4\n -> Hash (cost=15.43..15.43 rows=530 width=4) (actual time=39.769..39.769 rows=522 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n Buffers: shared hit=13\n -> Index Only Scan using disgroumem on cachedgroupmembers cachedgroupmembers_2 (cost=0.43..15.43 rows=530 width=4) (actual time=39.504..39.670 rows=522 loops=1)\n Index Cond: ((groupid = 4) AND (disabled = '0'::smallint))\n Heap Fetches: 7\n Buffers: shared hit=13\n Planning Time: 5.112 ms\n JIT:\n Functions: 74\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 10.510 ms, Inlining 0.000 ms, Optimization 2.889 ms, Emission 65.088 ms, Total 78.487 ms\n Execution Time: 2383.883 ms\n(69 rows)\n\nTime: 2391.552 ms (00:02.392)\n\nAny suggestions? I have a workaround to avoid the problem query but it loses some functionality.\n\nRegards,\nKen\n\n\n\n", "msg_date": "Thu, 28 May 2020 10:56:59 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance problem moving from 9.6.17 to 12.3" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> I have a system that was running version 9.6.17 running on a system with\n> 48gb of memory and spinning disks front-ed by a HW RAID controller with\n> NVRAM cache. We moved to a new box running version 12.3 on a system with\n> 64gb of memory and NVME SSD drives. 
Here are the system config options:\n\n> OLD:\n> shared_buffers = 2048MB # min 128kB\n> work_mem = 128MB # min 64kB\n> maintenance_work_mem = 1024MB # min 1MB\n> effective_io_concurrency = 8 # 1-1000; 0 disables prefetching\n> max_parallel_workers_per_gather = 0 # taken from max_worker_processes\n> effective_cache_size = 24GB\n> default_statistics_target = 500 # range 1-10000\n> from_collapse_limit = 30\n> join_collapse_limit = 30 # 1 disables collapsing of explicit\n> seq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\n> random_page_cost = 4.0\t\t\t# same scale as above\n\n> NEW:\n> shared_buffers = 12GB # min 128kB\n> work_mem = 128MB # min 64kB\n> maintenance_work_mem = 2GB # min 1MB\n> effective_io_concurrency = 200 # 1-1000; 0 disables prefetching\n> max_worker_processes = 24 # (change requires restart)\n> max_parallel_workers_per_gather = 4 # taken from max_parallel_workers\n> max_parallel_workers = 24 # maximum number of max_worker_processes that\n> seq_page_cost = 1.0 # measured on an arbitrary scale\n> random_page_cost = 1.1 # same scale as above for SSDs\n> effective_cache_size = 36GB\n> default_statistics_target = 500 # range 1-10000\n> from_collapse_limit = 30\n> join_collapse_limit = 30 # 1 disables collapsing of explicit\n\nMaybe you should be changing fewer variables at one time ...\n\nIn particular, decreasing random_page_cost as you've done here is\ngoing to encourage the planner to rely on nestloop-with-inner-indexscan\njoins. Does undoing that change improve matters?\n\nI personally think that v12 is way too enthusiastic about invoking\nJIT compilation, too. You might want to play with the parameters\nfor that as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 28 May 2020 12:42:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem moving from 9.6.17 to 12.3" } ]
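Following Tom Lane's advice to change one variable at a time, the settings in question can be tried per session before touching postgresql.conf. The commands below are only a sketch of that workflow; the values are illustrative, not recommendations:

-- Re-run the problem query with the 9.6 cost model for random I/O:
SET random_page_cost = 4.0;
EXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT main.* FROM Users main ... ;  -- the query above

-- Separately, rule out JIT compilation overhead (JIT is on by default in v12):
SET jit = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT DISTINCT main.* FROM Users main ... ;

-- Once a value that helps has been found, persist it:
ALTER SYSTEM SET random_page_cost = 4.0;
SELECT pg_reload_conf();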
[ { "msg_contents": "Hi ,\n Can you help to tune the below plan\nLimit (cost=0.87..336777.92 rows=100 width=57) (actual time=599302.173..599481.552 rows=100 loops=1) Buffers: shared hit=78496066 -> Nested Loop (cost=0.87..11005874.67 rows=3268 width=57) (actual time=599302.170..599481.506 rows=100 loops=1) Buffers: shared hit=78496066 -> Index Scan using inx_callprocessingstatus_modifieddate on callprocessingstatus contactsta1_ (cost=0.44..2008486.89 rows=15673696 width=16) (actual time=0.356..66774.105 rows=15651059 loops=1) Index Cond: (modifieddate < now()) Filter: ((overallstatus)::text = 'COMPLETED'::text) Rows Removed by Filter: 275880 Buffers: shared hit=15803632 -> Index Scan using \"INX_callinfo_Callid\" on callinfo contact0_ (cost=0.43..0.57 rows=1 width=49) (actual time=0.033..0.033 rows=0 loops=15651059) Index Cond: (callid = contactsta1_.callid) Filter: ((combinationkey IS NULL) AND (mod(callid, '2'::bigint) = 0)) Rows Removed by Filter: 1 Buffers: shared hit=62692434Planning Time: 1.039 msExecution Time: 599481.758 ms\n\nHi , Can you help to tune the below planLimit (cost=0.87..336777.92 rows=100 width=57) (actual time=599302.173..599481.552 rows=100 loops=1)\n Buffers: shared hit=78496066\n -> Nested Loop (cost=0.87..11005874.67 rows=3268 width=57) (actual time=599302.170..599481.506 rows=100 loops=1)\n Buffers: shared hit=78496066\n -> Index Scan using inx_callprocessingstatus_modifieddate on callprocessingstatus contactsta1_ (cost=0.44..2008486.89 rows=15673696 width=16) (actual time=0.356..66774.105 rows=15651059 loops=1)\n Index Cond: (modifieddate < now())\n Filter: ((overallstatus)::text = 'COMPLETED'::text)\n Rows Removed by Filter: 275880\n Buffers: shared hit=15803632\n -> Index Scan using \"INX_callinfo_Callid\" on callinfo contact0_ (cost=0.43..0.57 rows=1 width=49) (actual time=0.033..0.033 rows=0 loops=15651059)\n Index Cond: (callid = contactsta1_.callid)\n Filter: ((combinationkey IS NULL) AND (mod(callid, '2'::bigint) = 0))\n Rows Removed by Filter: 1\n Buffers: shared hit=62692434\nPlanning Time: 1.039 ms\nExecution Time: 599481.758 ms", "msg_date": "Sat, 30 May 2020 07:36:49 +0000 (UTC)", "msg_from": "sugnathi hai <[email protected]>", "msg_from_op": true, "msg_subject": "Performance tunning" }, { "msg_contents": "Hi\n\nso 30. 5. 2020 v 9:37 odesílatel sugnathi hai <[email protected]> napsal:\n\n> Hi ,\n>\n> Can you help to tune the below plan\n>\n> Limit (cost=0.87..336777.92 rows=100 width=57) (actual\n> time=599302.173..599481.552 rows=100 loops=1) Buffers: shared hit=78496066\n> -> Nested Loop (cost=0.87..11005874.67 rows=3268 width=57) (actual\n> time=599302.170..599481.506 rows=100 loops=1) Buffers: shared hit=78496066\n> -> Index Scan using inx_callprocessingstatus_modifieddate on\n> callprocessingstatus contactsta1_ (cost=0.44..2008486.89 rows=15673696\n> width=16) (actual time=0.356..66774.105 rows=15651059 loops=1) Index Cond:\n> (modifieddate < now()) Filter: ((overallstatus)::text = 'COMPLETED'::text)\n> Rows Removed by Filter: 275880 Buffers: shared hit=15803632 -> Index Scan\n> using \"INX_callinfo_Callid\" on callinfo contact0_ (cost=0.43..0.57 rows=1\n> width=49) (actual time=0.033..0.033 rows=0 loops=15651059) Index Cond:\n> (callid = contactsta1_.callid) Filter: ((combinationkey IS NULL) AND\n> (mod(callid, '2'::bigint) = 0)) Rows Removed by Filter: 1 Buffers: shared\n> hit=62692434 Planning Time: 1.039 ms Execution Time: 599481.758 ms\n>\n\nCan you show a query related to this plan?\n\nHiso 30. 5. 
2020 v 9:37 odesílatel sugnathi hai <[email protected]> napsal:Hi , Can you help to tune the below planLimit (cost=0.87..336777.92 rows=100 width=57) (actual time=599302.173..599481.552 rows=100 loops=1)\n Buffers: shared hit=78496066\n -> Nested Loop (cost=0.87..11005874.67 rows=3268 width=57) (actual time=599302.170..599481.506 rows=100 loops=1)\n Buffers: shared hit=78496066\n -> Index Scan using inx_callprocessingstatus_modifieddate on callprocessingstatus contactsta1_ (cost=0.44..2008486.89 rows=15673696 width=16) (actual time=0.356..66774.105 rows=15651059 loops=1)\n Index Cond: (modifieddate < now())\n Filter: ((overallstatus)::text = 'COMPLETED'::text)\n Rows Removed by Filter: 275880\n Buffers: shared hit=15803632\n -> Index Scan using \"INX_callinfo_Callid\" on callinfo contact0_ (cost=0.43..0.57 rows=1 width=49) (actual time=0.033..0.033 rows=0 loops=15651059)\n Index Cond: (callid = contactsta1_.callid)\n Filter: ((combinationkey IS NULL) AND (mod(callid, '2'::bigint) = 0))\n Rows Removed by Filter: 1\n Buffers: shared hit=62692434\nPlanning Time: 1.039 ms\nExecution Time: 599481.758 msCan you show a query related to this plan?", "msg_date": "Sat, 30 May 2020 09:43:43 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tunning" }, { "msg_contents": "On Sat, May 30, 2020 at 09:43:43AM +0200, Pavel Stehule wrote:\n> so 30. 5. 2020 v 9:37 odes�latel sugnathi hai <[email protected]> napsal:\n> > Can you help to tune the below plan\n\nCould you also send it so line breaks aren't lost, as seen here:\nhttps://www.postgresql.org/message-id/975278223.51863.1590824209351%40mail.yahoo.com\n\nProbably best to send a link to the plan at https://explain.depesz.com/\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 30 May 2020 09:19:51 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tunning" }, { "msg_contents": "On Sat, May 30, 2020 at 3:37 AM sugnathi hai <[email protected]> wrote:\n\n> Hi ,\n>\n> Can you help to tune the below plan\n>\n\n\nIt looks like your query (which you should show us) has something like\n\n ORDER BY modifieddate LIMIT 100\n\nIt thinks it can walk the index in order, then stop once it collects 100\nqualifying rows. But since almost all rows are removed by the join\nconditions, it ends up walking a large chunk of the index before finding\n100 of them which qualify.\n\nYou could try forcing it out of this plan by doing:\n\n ORDER BY modifieddate + interval '0 second' LIMIT 100\n\n Cheers,\n\nJeff\n\nOn Sat, May 30, 2020 at 3:37 AM sugnathi hai <[email protected]> wrote:Hi , Can you help to tune the below planIt looks like your query (which you should show us) has something like  ORDER BY modifieddate LIMIT 100It thinks it can walk the index in order, then stop once it collects 100 qualifying rows.  But since almost all rows are removed by the join conditions, it ends up walking a large chunk of the index before finding 100 of them which qualify.You could try forcing it out of this plan by doing:  ORDER BY modifieddate + interval '0 second' LIMIT 100 Cheers,Jeff", "msg_date": "Sat, 30 May 2020 10:55:36 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance tunning" } ]
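The plan above walks inx_callprocessingstatus_modifieddate in modifieddate order hoping to find 100 rows that survive the join filter early, which is why it reads roughly 15.6 million index entries. Below is a sketch of Jeff Janes' suggestion, with the query reconstructed from the plan since the original statement was never posted (the select list and aliases are assumptions):

SELECT contact0_.*
FROM callinfo contact0_
JOIN callprocessingstatus contactsta1_
  ON contactsta1_.callid = contact0_.callid
WHERE contactsta1_.overallstatus = 'COMPLETED'
  AND contactsta1_.modifieddate < now()
  AND contact0_.combinationkey IS NULL
  AND mod(contact0_.callid, 2) = 0
-- The no-op expression keeps the ordering semantics but stops the planner
-- from relying on the index's sort order, pushing it toward a plan that
-- filters first and sorts the much smaller qualifying set:
ORDER BY contactsta1_.modifieddate + interval '0 second'
LIMIT 100;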
[ { "msg_contents": "Hi ,\nIn PGtune I can able to get configuration changes based on RAM and Disk and No of Connection.\nbut if we want to recommend RAM, DISK, and No of connection based on DB size. any calculation is there\n\nFor Example, 1 TB Database how much RAM and DISK Space required for better performance\nthe DB size will increase per day 20 GBFrequent delete and insert will happen\n\nThanks in Advance\n\n\nHi ,In PGtune I can able to get configuration changes based on RAM and Disk and No of Connection.but if we want to recommend RAM, DISK, and No of connection based on DB size. any calculation is thereFor Example, 1 TB Database how much RAM and DISK Space required for better performancethe DB size will increase per day 20 GBFrequent delete and insert will happenThanks in Advance", "msg_date": "Mon, 1 Jun 2020 12:09:33 +0000 (UTC)", "msg_from": "sugnathi hai <[email protected]>", "msg_from_op": true, "msg_subject": "Configuration" }, { "msg_contents": "On Mon, Jun 1, 2020 at 2:09 PM sugnathi hai <[email protected]> wrote:\n\n> In PGtune I can able to get configuration changes based on RAM and Disk and No of Connection.\n>\n> but if we want to recommend RAM, DISK, and No of connection based on DB size. any calculation is there\n>\n> For Example, 1 TB Database how much RAM and DISK Space required for better performance\n>\n> the DB size will increase per day 20 GB\n> Frequent delete and insert will happen\n\nThe database size on itself is not enough to provide any sensible\nrecommendation for RAM. The connection count and usage patterns are\ncritical.\n\nThere are 1TB databases which could work really fine with as little as\n40GB RAM, if connection number is limited, all queries are\nindex-based, and the active data set is fairly small.\n\nOn the other hand, if you have many connections and non-indexed\naccess, you might need 10x or 20x more RAM for a sustained\nperformance.\n\nThat's why PgTune configurator requires you enter RAM, connection\ncount and DB access pattern class (OLTP/Web/DWH)\n\nAnyway, what PgTune gives is just and approximated \"blind guess\"\nrecommendation. If auto-configuration was easy, we would have it in\ncore postgres long time ago. It could be nice to have a configuration\nadvisor based on active data set size... but I doubt it will be\ncreated - for several reasons, 1st - it would be still a \"blind\nguess\". 2nd - current version pgtune is not-that-nice for contributors\n(fairly ugly JS code).\n\n\n", "msg_date": "Mon, 1 Jun 2020 15:24:42 +0200", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration" } ]
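As Filip notes, the working-set size and concurrency matter far more than raw database size, and both can be measured on a running instance instead of guessed. A sketch using standard statistics views follows; the second query needs the contrib extension pg_buffercache installed and assumes the default 8 kB block size:

-- Cache hit ratio per database; a persistently low ratio under normal load
-- suggests the hot data set does not fit in shared_buffers plus OS cache:
SELECT datname,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
WHERE blks_hit + blks_read > 0
ORDER BY blks_hit + blks_read DESC;

-- Which relations currently occupy shared_buffers, i.e. a rough picture of
-- the hot data set for the current database:
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
SELECT c.relname,
       count(*) * 8192 / 1024 / 1024 AS cached_mb
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY cached_mb DESC
LIMIT 20;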
[ { "msg_contents": "Hi!\n\nI was reading up on declarative partitioning[1] and I'm not sure what could\nbe a possible application of Hash partitioning.\n\nIs anyone actually using it? What are typical use cases? What benefits\ndoes such a partitioning scheme provide?\n\nOn its face, it seems that it can only give you a number of tables which\nare smaller than the un-partitioned one, but I fail to see how it would\nprovide any of the potential advantages listed in the documentation.\n\nWith a reasonable hash function, the distribution of rows across partitions\nshould be more or less equal, so I wouldn't expect any of the following to\nhold true:\n- \"...most of the heavily accessed rows of the table are in a single\npartition or a small number of partitions.\"\n- \"Bulk loads and deletes can be accomplished by adding or removing\npartitions...\",\netc.\n\nThat *might* turn out to be the case with a small number of distinct values\nin the partitioning column(s), but then why rely on hash assignment instead\nof using PARTITION BY LIST in the first place?\n\nRegards,\n-- \nAlex\n\n[1] https://www.postgresql.org/docs/12/ddl-partitioning.html\n\nHi!I was reading up on declarative partitioning[1] and I'm not sure what could be a possible application of Hash partitioning.Is anyone actually using it?  What are typical use cases?  What benefits does such a partitioning scheme provide?On its face, it seems that it can only give you a number of tables which are smaller than the un-partitioned one, but I fail to see how it would provide any of the potential advantages listed in the documentation.With a reasonable hash function, the distribution of rows across partitions should be more or less equal, so I wouldn't expect any of the following to hold true:- \"...most of the heavily accessed rows of the table are in a single partition or a small number of partitions.\"- \"Bulk loads and deletes can be accomplished by adding or removing partitions...\",etc.That *might* turn out to be the case with a small number of distinct values in the partitioning column(s), but then why rely on hash assignment instead of using PARTITION BY LIST in the first place?Regards,-- Alex[1] https://www.postgresql.org/docs/12/ddl-partitioning.html", "msg_date": "Tue, 2 Jun 2020 19:17:11 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "When to use PARTITION BY HASH?" }, { "msg_contents": "> To: [email protected], [email protected]\n\nPlease don't cross post to multiple lists.\n\nOn Tue, Jun 02, 2020 at 07:17:11PM +0200, Oleksandr Shulgin wrote:\n> I was reading up on declarative partitioning[1] and I'm not sure what could\n> be a possible application of Hash partitioning.\n\nIt's a good question. See Tom's complaint here.\nhttps://www.postgresql.org/message-id/31605.1586112900%40sss.pgh.pa.us\n\nIt *does* provide the benefit of smaller indexes and smaller tables, which\nmight allow seq scans to outpeform index scans.\n\nIt's maybe only useful for equality conditions on the partition key, and not\nfor ranges. 
Here, it scans a single partition:\n\npostgres=# CREATE TABLE t(i int) PARTITION BY HASH(i); CREATE TABLE t1 PARTITION OF t FOR VALUES WITH (REMAINDER 0, MODULUS 3);\npostgres=# CREATE TABLE t2 PARTITION OF t FOR VALUES WITH (MODULUS 3, REMAINDER 1);\npostgres=# CREATE TABLE t3 PARTITION OF t FOR VALUES WITH (MODULUS 3, REMAINDER 2);\npostgres=# INSERT INTO t SELECT i%9 FROM generate_series(1,9999)i; ANALYZE t;\npostgres=# explain analyze SELECT * FROM t WHERE i=3;\n Seq Scan on t2 (cost=0.00..75.55 rows=2222 width=4) (actual time=0.021..0.518 rows=2222 loops=1)\n Filter: (i = 3)\n Rows Removed by Filter: 2222\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 2 Jun 2020 12:33:54 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "Hi,\n\nI use it quite often, since I'm dealing with partitioning keys that have \nhigh cardinality, ie, high number of different values.  If your \ncardinality is very high, but your spacing between values is not \nuniform, HASH will balance your partitioned tables naturally.  If your \nspacing between values is consistent, perhaps RANGE partitioning would \nbe better.\n\nRegards,\nMichael Vitale\n\nOleksandr Shulgin wrote on 6/2/2020 1:17 PM:\n> Hi!\n>\n> I was reading up on declarative partitioning[1] and I'm not sure what \n> could be a possible application of Hash partitioning.\n>\n> Is anyone actually using it? What are typical use cases?  What \n> benefits does such a partitioning scheme provide?\n>\n> On its face, it seems that it can only give you a number of tables \n> which are smaller than the un-partitioned one, but I fail to see how \n> it would provide any of the potential advantages listed in the \n> documentation.\n>\n> With a reasonable hash function, the distribution of rows across \n> partitions should be more or less equal, so I wouldn't expect any of \n> the following to hold true:\n> - \"...most of the heavily accessed rows of the table are in a single \n> partition or a small number of partitions.\"\n> - \"Bulk loads and deletes can be accomplished by adding or removing \n> partitions...\",\n> etc.\n>\n> That *might* turn out to be the case with a small number of distinct \n> values in the partitioning column(s), but then why rely on hash \n> assignment instead of using PARTITION BY LIST in the first place?\n>\n> Regards,\n> -- \n> Alex\n>\n> [1] https://www.postgresql.org/docs/12/ddl-partitioning.html\n>\n\n\n\n", "msg_date": "Tue, 2 Jun 2020 13:39:40 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <\[email protected]> wrote:\n\n> That *might* turn out to be the case with a small number of distinct\n> values in the partitioning column(s), but then why rely on hash\n> assignment instead of using PARTITION BY LIST in the first place?\n>\n> [1] https://www.postgresql.org/docs/12/ddl-partitioning.html\n>\n\nWhy the cross-posting? (-performance is oriented toward problem solving,\nnot theory, so -general is the one and only PostgreSQL list this should\nhave been sent to)\n\nAnyway, quoting the documentation you linked to:\n\n\"When choosing how to partition your table, it's also important to consider\nwhat changes may occur in the future. 
For example, if you choose to have\none partition per customer and you currently have a small number of large\ncustomers, consider the implications if in several years you instead find\nyourself with a large number of small customers. In this case, it may be\nbetter to choose to partition by HASH and choose a reasonable number of\npartitions rather than trying to partition by LIST and hoping that the\nnumber of customers does not increase beyond what it is practical to\npartition the data by.\"\n\nHashing does indeed preclude some of the benefits and introduces others.\n\nI suspect that having a hash function that turns its input into a different\noutput and checking for equality on the output would be better than trying\nto \"OR\" a partition list together in order to combine multiple inputs onto\nthe same table.\n\nDavid J.\n\nOn Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <[email protected]> wrote:That *might* turn out to be the case with a small number of distinct values in the partitioning column(s), but then why rely on hash assignment instead of using PARTITION BY LIST in the first place?[1] https://www.postgresql.org/docs/12/ddl-partitioning.htmlWhy the cross-posting? (-performance is oriented toward problem solving, not theory, so -general is the one and only PostgreSQL list this should have been sent to)Anyway, quoting the documentation you linked to:\"When choosing how to partition your table, it's also important to consider what changes may occur in the future. For example, if you choose to have one partition per customer and you currently have a small number of large customers, consider the implications if in several years you instead find yourself with a large number of small customers. In this case, it may be better to choose to partition by HASH and choose a reasonable number of partitions rather than trying to partition by LIST and hoping that the number of customers does not increase beyond what it is practical to partition the data by.\"Hashing does indeed preclude some of the benefits and introduces others.I suspect that having a hash function that turns its input into a different output and checking for equality on the output would be better than trying to \"OR\" a partition list together in order to combine multiple inputs onto the same table.David J.", "msg_date": "Tue, 2 Jun 2020 10:43:02 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <\[email protected]> wrote:\n\n> Hi!\n>\n> I was reading up on declarative partitioning[1] and I'm not sure what\n> could be a possible application of Hash partitioning.\n>\n> Is anyone actually using it? What are typical use cases? What benefits\n> does such a partitioning scheme provide?\n>\n> On its face, it seems that it can only give you a number of tables which\n> are smaller than the un-partitioned one, but I fail to see how it would\n> provide any of the potential advantages listed in the documentation.\n>\n\nI'm sure there will be many delightful answers to your question, and I look\nforward to them! From my point of view, hash partitioning is very useful\nfor spreading out high insert/update load. Yes its' true you end up with\nmore smaller tables than one big large one, but remember the indexes are\n(often) tree data structures. Smaller trees are faster than bigger trees.\nBy making the indexes smaller they are faster. 
Since the planner can knows\nto only examine the specific index it needs, this ends up being a lot\nfaster.\n\nPostgres can also parallelize queries on partitions. This is different\nfrom a parallel sequential scan, which can also happen per-partition, so\nthere are multiple levels of parallel opportunity.\n\nAnd last that I can think of, you can put the different partitions in\ndifferent tablespaces, improving the total IO bandwidth.\n\n-Michel\n\n\n\n> With a reasonable hash function, the distribution of rows across\n> partitions should be more or less equal, so I wouldn't expect any of the\n> following to hold true:\n> - \"...most of the heavily accessed rows of the table are in a single\n> partition or a small number of partitions.\"\n> - \"Bulk loads and deletes can be accomplished by adding or removing\n> partitions...\",\n> etc.\n>\n> That *might* turn out to be the case with a small number of distinct\n> values in the partitioning column(s), but then why rely on hash\n> assignment instead of using PARTITION BY LIST in the first place?\n>\n> Regards,\n> --\n> Alex\n>\n> [1] https://www.postgresql.org/docs/12/ddl-partitioning.html\n>\n>\n\nOn Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <[email protected]> wrote:Hi!I was reading up on declarative partitioning[1] and I'm not sure what could be a possible application of Hash partitioning.Is anyone actually using it?  What are typical use cases?  What benefits does such a partitioning scheme provide?On its face, it seems that it can only give you a number of tables which are smaller than the un-partitioned one, but I fail to see how it would provide any of the potential advantages listed in the documentation.I'm sure there will be many delightful answers to your question, and I look forward to them!  From my point of view, hash partitioning is very useful for spreading out high insert/update load.  Yes its' true you end up with more smaller tables than one big large one, but remember the indexes are (often) tree data structures.  Smaller trees are faster than bigger trees.  By making the indexes smaller they are faster.  Since the planner can knows to only examine the specific index it needs, this ends up being a lot faster.Postgres can also parallelize queries on partitions.  This is different from a parallel sequential scan, which can also happen per-partition, so there are multiple levels of parallel opportunity.And last that I can think of, you can put the different partitions in different tablespaces, improving the total IO bandwidth.-Michel With a reasonable hash function, the distribution of rows across partitions should be more or less equal, so I wouldn't expect any of the following to hold true:- \"...most of the heavily accessed rows of the table are in a single partition or a small number of partitions.\"- \"Bulk loads and deletes can be accomplished by adding or removing partitions...\",etc.That *might* turn out to be the case with a small number of distinct values in the partitioning column(s), but then why rely on hash assignment instead of using PARTITION BY LIST in the first place?Regards,-- Alex[1] https://www.postgresql.org/docs/12/ddl-partitioning.html", "msg_date": "Tue, 2 Jun 2020 10:45:12 -0700", "msg_from": "Michel Pelletier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" 
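The tablespace point in the message above can be illustrated with the declarative syntax. This is only a sketch: the table, column and tablespace names are made up, the tablespaces disk_a and disk_b must already exist, and whether spreading partitions this way helps depends entirely on the underlying storage layout:

CREATE TABLE measurements (sensor_id bigint, reading double precision)
    PARTITION BY HASH (sensor_id);
CREATE TABLE measurements_p0 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 0) TABLESPACE disk_a;
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 1) TABLESPACE disk_b;
CREATE TABLE measurements_p2 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 2) TABLESPACE disk_a;
CREATE TABLE measurements_p3 PARTITION OF measurements
    FOR VALUES WITH (MODULUS 4, REMAINDER 3) TABLESPACE disk_b;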
}, { "msg_contents": "Greetings,\n\nPlease don't cross post to multiple lists without any particular reason\nfor doing so- pick whichever list makes sense and post to that.\n\n* Oleksandr Shulgin ([email protected]) wrote:\n> I was reading up on declarative partitioning[1] and I'm not sure what could\n> be a possible application of Hash partitioning.\n\nYeah, I tend to agree with this.\n\n> Is anyone actually using it? What are typical use cases? What benefits\n> does such a partitioning scheme provide?\n\nI'm sure folks are using it but that doesn't make it a good solution.\n\n> On its face, it seems that it can only give you a number of tables which\n> are smaller than the un-partitioned one, but I fail to see how it would\n> provide any of the potential advantages listed in the documentation.\n\nHaving smaller tables can be helpful when it comes to dealing with\nthings like VACUUM (particularly since, even though we can avoid having\nto scan the entire heap, we have to go through the indexes in order to\nclean them up and generally larger tables have larger indexes),\nhowever..\n\n> With a reasonable hash function, the distribution of rows across partitions\n> should be more or less equal, so I wouldn't expect any of the following to\n> hold true:\n> - \"...most of the heavily accessed rows of the table are in a single\n> partition or a small number of partitions.\"\n> - \"Bulk loads and deletes can be accomplished by adding or removing\n> partitions...\",\n> etc.\n> \n> That *might* turn out to be the case with a small number of distinct values\n> in the partitioning column(s), but then why rely on hash assignment instead\n> of using PARTITION BY LIST in the first place?\n\nYou're entirely correct with this- there's certainly no small number of\nsituations where you end up with a 'hot' partition when using hashing\n(which is true in other RDBMS's too, of course...) and that ends up\nbeing pretty painful to deal with.\n\nAlso, you're right that you don't get to do bulk load/drop when using\nhash partitioning, which is absolutely one of the largest benefits to\npartitioning in the first place, so, yeah, their usefullness is.. rather\nlimited. Better to do your own partitioning based on actual usage\npatterns that you know and the database's hash function certainly\ndoesn't.\n\nThanks,\n\nStephen", "msg_date": "Tue, 2 Jun 2020 13:47:12 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" 
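A small sketch of the bulk-delete difference mentioned above, with hypothetical table and partition names. With range (or list) partitioning on a date key, ageing out old data is a metadata-only operation, while a hash-partitioned table spreads the same rows across every partition and leaves row-by-row DELETE as the only option:

-- Range-partitioned by month: removing a month is effectively instant.
ALTER TABLE events DETACH PARTITION events_2019_01;
DROP TABLE events_2019_01;

-- Hash-partitioned on an id column: the old rows live in every partition,
-- so they must be deleted (and later vacuumed) the slow way.
DELETE FROM events_by_hash WHERE created < date '2019-02-01';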
}, { "msg_contents": "On Tue, Jun 2, 2020 at 7:47 PM Stephen Frost <[email protected]> wrote:\n\n>\n> Please don't cross post to multiple lists without any particular reason\n> for doing so- pick whichever list makes sense and post to that.\n>\n\nSorry for the trouble, I should've checked it more carefully.\nWhen posting I did think it may be relevant to the performance list as well.\n\nAt the same time, wouldn't it make sense to document this policy explicitly?\n/me resists the urge of cross-posting to pgsql-www ;)\n\nCheers,\n--\nAlex\n\nOn Tue, Jun 2, 2020 at 7:47 PM Stephen Frost <[email protected]> wrote:\nPlease don't cross post to multiple lists without any particular reason\nfor doing so- pick whichever list makes sense and post to that.Sorry for the trouble, I should've checked it more carefully.When posting I did think it may be relevant to the performance list as well.At the same time, wouldn't it make sense to document this policy explicitly?/me resists the urge of cross-posting to pgsql-www ;)Cheers,--Alex", "msg_date": "Wed, 3 Jun 2020 09:38:49 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Tue, Jun 2, 2020 at 7:33 PM Justin Pryzby <[email protected]> wrote:\n\n> > To: [email protected],\n> [email protected]\n>\n> Please don't cross post to multiple lists.\n>\n> On Tue, Jun 02, 2020 at 07:17:11PM +0200, Oleksandr Shulgin wrote:\n> > I was reading up on declarative partitioning[1] and I'm not sure what\n> could\n> > be a possible application of Hash partitioning.\n>\n> It's a good question. See Tom's complaint here.\n> https://www.postgresql.org/message-id/31605.1586112900%40sss.pgh.pa.us\n>\n> It *does* provide the benefit of smaller indexes and smaller tables, which\n> might allow seq scans to outpeform index scans.\n>\n> It's maybe only useful for equality conditions on the partition key, and\n> not\n> for ranges. Here, it scans a single partition:\n>\n> postgres=# CREATE TABLE t(i int) PARTITION BY HASH(i); CREATE TABLE t1\n> PARTITION OF t FOR VALUES WITH (REMAINDER 0, MODULUS 3);\n> postgres=# CREATE TABLE t2 PARTITION OF t FOR VALUES WITH (MODULUS 3,\n> REMAINDER 1);\n> postgres=# CREATE TABLE t3 PARTITION OF t FOR VALUES WITH (MODULUS 3,\n> REMAINDER 2);\n> postgres=# INSERT INTO t SELECT i%9 FROM generate_series(1,9999)i; ANALYZE\n> t;\n> postgres=# explain analyze SELECT * FROM t WHERE i=3;\n> Seq Scan on t2 (cost=0.00..75.55 rows=2222 width=4) (actual\n> time=0.021..0.518 rows=2222 loops=1)\n> Filter: (i = 3)\n> Rows Removed by Filter: 2222\n>\n\nI see. So it works with low cardinality in the partitioned column. With\nhigh cardinality an index scan on an unpartitioned table would be\npreferable I guess.\n\nThe documentation page I've linked only contains examples around\npartitioning BY RANGE. I believe it'd be helpful to extend it with some\nmeaningful examples for LIST and HASH partitioning.\n\nRegards,\n-- \nAlex\n\nOn Tue, Jun 2, 2020 at 7:33 PM Justin Pryzby <[email protected]> wrote:> To: [email protected], [email protected]\n\nPlease don't cross post to multiple lists.\n\nOn Tue, Jun 02, 2020 at 07:17:11PM +0200, Oleksandr Shulgin wrote:\n> I was reading up on declarative partitioning[1] and I'm not sure what could\n> be a possible application of Hash partitioning.\n\nIt's a good question.  
See Tom's complaint here.\nhttps://www.postgresql.org/message-id/31605.1586112900%40sss.pgh.pa.us\n\nIt *does* provide the benefit of smaller indexes and smaller tables, which\nmight allow seq scans to outpeform index scans.\n\nIt's maybe only useful for equality conditions on the partition key, and not\nfor ranges.  Here, it scans a single partition:\n\npostgres=# CREATE TABLE t(i int) PARTITION BY HASH(i); CREATE TABLE t1 PARTITION OF t FOR VALUES WITH (REMAINDER 0, MODULUS 3);\npostgres=# CREATE TABLE t2 PARTITION OF t FOR VALUES WITH (MODULUS 3, REMAINDER 1);\npostgres=# CREATE TABLE t3 PARTITION OF t FOR VALUES WITH (MODULUS 3, REMAINDER 2);\npostgres=# INSERT INTO t SELECT i%9 FROM generate_series(1,9999)i; ANALYZE t;\npostgres=# explain analyze SELECT * FROM t WHERE i=3;\n Seq Scan on t2  (cost=0.00..75.55 rows=2222 width=4) (actual time=0.021..0.518 rows=2222 loops=1)\n   Filter: (i = 3)\n   Rows Removed by Filter: 2222I see.  So it works with low cardinality in the partitioned column.  With high cardinality an index scan on an unpartitioned table would be preferable I guess.The documentation page I've linked only contains examples around partitioning BY RANGE.  I believe it'd be helpful to extend it with some meaningful examples for LIST and HASH partitioning.Regards,-- Alex", "msg_date": "Wed, 3 Jun 2020 09:45:48 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "(sticking to pgsql-general)\n\nOn Tue, Jun 2, 2020 at 7:45 PM Michel Pelletier <[email protected]>\nwrote:\n\n>\n> On Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <\n> [email protected]> wrote:\n>\n>>\n>> I was reading up on declarative partitioning[1] and I'm not sure what\n>> could be a possible application of Hash partitioning.\n>>\n>> Is anyone actually using it? What are typical use cases? What benefits\n>> does such a partitioning scheme provide?\n>>\n>> On its face, it seems that it can only give you a number of tables which\n>> are smaller than the un-partitioned one, but I fail to see how it would\n>> provide any of the potential advantages listed in the documentation.\n>>\n>\n>\n\n> From my point of view, hash partitioning is very useful for spreading out\n> high insert/update load.\n>\n\nDo you also assign the partitions to different tablespaces as you've\nhinted below or do you see performance improvement from partitioning\nalone? How does that work? Does it give better results than using a RAID\nto spread the disk IO, for example?\n\nYes its' true you end up with more smaller tables than one big large one,\n> but remember the indexes are (often) tree data structures. Smaller trees\n> are faster than bigger trees. By making the indexes smaller they are\n> faster. Since the planner can knows to only examine the specific index it\n> needs, this ends up being a lot faster.\n>\n\nThat sounds logical, but can it be demonstrated? If the index(es) fit in\nmemory fully, it doesn't make a measurable difference, I guess?\n\nWith hash partitioning you are not expected, in general, to end up with a\nsmall number of partitions being accessed more heavily than the rest. 
So\nyour indexes will also not fit into memory.\n\nI have the feeling that using a hash function to distribute rows simply\ncontradicts the basic assumption of when you would think of partitioning\nyour table at all: that is to make sure the most active part of the table\nand indexes is small enough to be cached in memory.\n\nRegards,\n--\nAlex\n\n(sticking to pgsql-general)On Tue, Jun 2, 2020 at 7:45 PM Michel Pelletier <[email protected]> wrote:On Tue, Jun 2, 2020 at 10:17 AM Oleksandr Shulgin <[email protected]> wrote:I was reading up on declarative partitioning[1] and I'm not sure what could be a possible application of Hash partitioning.Is anyone actually using it?  What are typical use cases?  What benefits does such a partitioning scheme provide?On its face, it seems that it can only give you a number of tables which are smaller than the un-partitioned one, but I fail to see how it would provide any of the potential advantages listed in the documentation. From my point of view, hash partitioning is very useful for spreading out high insert/update load.Do you also assign the partitions to different tablespaces as you've hinted below or do you see performance improvement from partitioning alone?  How does that work?  Does it give better  results than using a RAID to spread the disk IO, for example?Yes its' true you end up with more smaller tables than one big large one, but remember the indexes are (often) tree data structures.  Smaller trees are faster than bigger trees.  By making the indexes smaller they are faster.  Since the planner can knows to only examine the specific index it needs, this ends up being a lot faster.That sounds logical, but can it be demonstrated?  If the index(es) fit in memory fully, it doesn't make a measurable difference, I guess?With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.Regards,--Alex", "msg_date": "Wed, 3 Jun 2020 13:55:25 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Wed, Jun 03, 2020 at 09:45:48AM +0200, Oleksandr Shulgin wrote:\n> I see. So it works with low cardinality in the partitioned column. With\n> high cardinality an index scan on an unpartitioned table would be\n> preferable I guess.\n> \n> The documentation page I've linked only contains examples around\n> partitioning BY RANGE. I believe it'd be helpful to extend it with some\n> meaningful examples for LIST and HASH partitioning.\n\nI agree. I think it would also be useful to mention the \"benefits\" which\naren't likely to apply to hash partitioning.\n\nWould you want to propose an example to include ?\nEventually it needs to be submitted as a patch to -hackers.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Jun 2020 10:09:59 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" 
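One possible shape for such a doc example, offered only as a sketch with invented table names: a LIST-partitioned table where each known key maps to an explicit partition, plus a reminder that on the hash-partitioned table t from the earlier example only an equality condition on the key can be pruned.

CREATE TABLE orders (
    region  text NOT NULL,
    id      bigint,
    total   numeric
) PARTITION BY LIST (region);

CREATE TABLE orders_emea PARTITION OF orders FOR VALUES IN ('EMEA');
CREATE TABLE orders_apac PARTITION OF orders FOR VALUES IN ('APAC');
CREATE TABLE orders_amer PARTITION OF orders FOR VALUES IN ('AMER');

-- On the hash-partitioned table t from the example above:
EXPLAIN SELECT * FROM t WHERE i = 3;              -- prunes to a single partition
EXPLAIN SELECT * FROM t WHERE i BETWEEN 3 AND 5;  -- typically scans every partition,
                                                  -- since hash values do not preserve order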
}, { "msg_contents": "On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <\[email protected]> wrote:\n\nWith hash partitioning you are not expected, in general, to end up with a\n> small number of partitions being accessed more heavily than the rest. So\n> your indexes will also not fit into memory.\n>\n> I have the feeling that using a hash function to distribute rows simply\n> contradicts the basic assumption of when you would think of partitioning\n> your table at all: that is to make sure the most active part of the table\n> and indexes is small enough to be cached in memory.\n>\n\nWhile hash partitioning doesn't appeal to me, I think this may be overly\npessimistic. It would not be all that unusual for your customers to take\nturns being highly active and less active. Especially if you do occasional\nbulk loads all with the same customer_id for any given load, for example.\nSo while you might not have a permanently hot partition, you could have\npartitions which are hot in turn. Of course you could get the same benefit\n(and probably better) with list or range partitioning rather than hash, but\nthen you have to maintain those lists or ranges when you add new customers.\n\nCheers,\n\nJeff\n\nOn Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <[email protected]> wrote:With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.While hash partitioning doesn't appeal to me, I think this may be overly pessimistic.  It would not be all that unusual for your customers to take turns being highly active and less active.  Especially if you do occasional bulk loads all with the same customer_id for any given load, for example.  So while you might not have a permanently hot partition, you could have partitions which are hot in turn.  Of course you could get the same benefit (and probably better) with list or range partitioning rather than hash, but then you have to maintain those lists or ranges when you add new customers.Cheers,Jeff", "msg_date": "Thu, 4 Jun 2020 10:32:42 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes <[email protected]> wrote:\n\n> On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <\n> [email protected]> wrote:\n>\n> With hash partitioning you are not expected, in general, to end up with a\n>> small number of partitions being accessed more heavily than the rest. So\n>> your indexes will also not fit into memory.\n>>\n>> I have the feeling that using a hash function to distribute rows simply\n>> contradicts the basic assumption of when you would think of partitioning\n>> your table at all: that is to make sure the most active part of the table\n>> and indexes is small enough to be cached in memory.\n>>\n>\n> While hash partitioning doesn't appeal to me, I think this may be overly\n> pessimistic. It would not be all that unusual for your customers to take\n> turns being highly active and less active. 
Especially if you do occasional\n> bulk loads all with the same customer_id for any given load, for example.\n>\n\nFor a bulk load you'd likely want to go with an empty partition w/o indexes\nand build them later, after loading the tuples. While it might not be\npossible with any given partitioning scheme either, using hash partitioning\nmost certainly precludes that.\n\n\n> So while you might not have a permanently hot partition, you could have\n> partitions which are hot in turn. Of course you could get the same benefit\n> (and probably better) with list or range partitioning rather than hash, but\n> then you have to maintain those lists or ranges when you add new customers.\n>\n\nWhy are LRU eviction from the shared buffers and OS disk cache not good\nenough to handle this?\n\nThis actually applies to any partitioning scheme: the hot dataset could be\nrecognized by these caching layers. Does it not happen in practice?\n\n--\nAlex\n\nOn Thu, Jun 4, 2020 at 4:32 PM Jeff Janes <[email protected]> wrote:On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <[email protected]> wrote:With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.While hash partitioning doesn't appeal to me, I think this may be overly pessimistic.  It would not be all that unusual for your customers to take turns being highly active and less active.  Especially if you do occasional bulk loads all with the same customer_id for any given load, for example.For a bulk load you'd likely want to go with an empty partition w/o indexes and build them later, after loading the tuples.  While it might not be possible with any given partitioning scheme either, using hash partitioning most certainly precludes that. So while you might not have a permanently hot partition, you could have partitions which are hot in turn.  Of course you could get the same benefit (and probably better) with list or range partitioning rather than hash, but then you have to maintain those lists or ranges when you add new customers.Why are LRU eviction from the shared buffers and OS disk cache not good enough to handle this?This actually applies to any partitioning scheme: the hot dataset could be recognized by these caching layers.  Does it not happen in practice?--Alex", "msg_date": "Fri, 5 Jun 2020 12:11:54 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "> \"Bulk loads ...\",\n\nAs I see - There is an interesting bulkload benchmark:\n\n\"How Bulkload performance is affected by table partitioning in PostgreSQL\"\nby Beena Emerson (Enterprisedb, December 4, 2019 )\n\n\n\n\n\n*SUMMARY: This article covers how benchmark tests can be used to\ndemonstrate the effect of table partitioning on performance. Tests using\nrange- and hash-partitioned tables are compared and the reasons for their\ndifferent results are explained: 1. Range partitions\n 2. Hash partitions 3. Combination graphs\n 4. Explaining the behavior 5. 
Conclusion*\n\n*\"For the hash-partitioned table, the first value is inserted in the first\npartition, the second number in the second partition and so on till all the\npartitions are reached before it loops back to the first partition again\nuntil all the data is exhausted. Thus it exhibits the worst-case scenario\nwhere the partition is repeatedly switched for every value inserted. As a\nresult, the number of times the partition is switched in a\nrange-partitioned table is equal to the number of partitions, while in a\nhash-partitioned table, the number of times the partition has switched is\nequal to the amount of data being inserted. This causes the massive\ndifference in timing for the two partition types.\"*\n\nhttps://www.enterprisedb.com/postgres-tutorials/how-bulkload-performance-affected-table-partitioning-postgresql\n\nRegards,\n Imre\n\n\nOleksandr Shulgin <[email protected]> ezt írta (időpont: 2020.\njún. 2., K, 19:17):\n\n> Hi!\n>\n> I was reading up on declarative partitioning[1] and I'm not sure what\n> could be a possible application of Hash partitioning.\n>\n> Is anyone actually using it? What are typical use cases? What benefits\n> does such a partitioning scheme provide?\n>\n> On its face, it seems that it can only give you a number of tables which\n> are smaller than the un-partitioned one, but I fail to see how it would\n> provide any of the potential advantages listed in the documentation.\n>\n> With a reasonable hash function, the distribution of rows across\n> partitions should be more or less equal, so I wouldn't expect any of the\n> following to hold true:\n> - \"...most of the heavily accessed rows of the table are in a single\n> partition or a small number of partitions.\"\n> - \"Bulk loads and deletes can be accomplished by adding or removing\n> partitions...\",\n> etc.\n>\n> That *might* turn out to be the case with a small number of distinct\n> values in the partitioning column(s), but then why rely on hash\n> assignment instead of using PARTITION BY LIST in the first place?\n>\n> Regards,\n> --\n> Alex\n>\n> [1] https://www.postgresql.org/docs/12/ddl-partitioning.html\n>\n>\n\n> \"Bulk loads ...\",As I see - There is an interesting bulkload benchmark:    \"How Bulkload performance is affected by table partitioning in PostgreSQL\" by Beena Emerson (Enterprisedb, December 4, 2019 )SUMMARY: This article covers how benchmark tests can be used to demonstrate the effect of table partitioning on performance. Tests using range- and hash-partitioned tables are compared and the reasons for their different results are explained:                  1. Range partitions                 2. Hash partitions                 3. Combination graphs                 4. Explaining the behavior                 5. Conclusion\"For the hash-partitioned table, the first value is inserted in the first partition, the second number in the second partition and so on till all the partitions are reached before it loops back to the first partition again until all the data is exhausted. Thus it exhibits the worst-case scenario where the partition is repeatedly switched for every value inserted. As a result, the number of times the partition is switched in a range-partitioned table is equal to the number of partitions, while in a hash-partitioned table, the number of times the partition has switched is equal to the amount of data being inserted. 
This causes the massive difference in timing for the two partition types.\"https://www.enterprisedb.com/postgres-tutorials/how-bulkload-performance-affected-table-partitioning-postgresqlRegards, ImreOleksandr Shulgin <[email protected]> ezt írta (időpont: 2020. jún. 2., K, 19:17):Hi!I was reading up on declarative partitioning[1] and I'm not sure what could be a possible application of Hash partitioning.Is anyone actually using it?  What are typical use cases?  What benefits does such a partitioning scheme provide?On its face, it seems that it can only give you a number of tables which are smaller than the un-partitioned one, but I fail to see how it would provide any of the potential advantages listed in the documentation.With a reasonable hash function, the distribution of rows across partitions should be more or less equal, so I wouldn't expect any of the following to hold true:- \"...most of the heavily accessed rows of the table are in a single partition or a small number of partitions.\"- \"Bulk loads and deletes can be accomplished by adding or removing partitions...\",etc.That *might* turn out to be the case with a small number of distinct values in the partitioning column(s), but then why rely on hash assignment instead of using PARTITION BY LIST in the first place?Regards,-- Alex[1] https://www.postgresql.org/docs/12/ddl-partitioning.html", "msg_date": "Fri, 5 Jun 2020 13:48:19 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin <\[email protected]> wrote:\n\n> On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <\n>> [email protected]> wrote:\n>>\n>> With hash partitioning you are not expected, in general, to end up with a\n>>> small number of partitions being accessed more heavily than the rest. So\n>>> your indexes will also not fit into memory.\n>>>\n>>> I have the feeling that using a hash function to distribute rows simply\n>>> contradicts the basic assumption of when you would think of partitioning\n>>> your table at all: that is to make sure the most active part of the table\n>>> and indexes is small enough to be cached in memory.\n>>>\n>>\n>> While hash partitioning doesn't appeal to me, I think this may be overly\n>> pessimistic. It would not be all that unusual for your customers to take\n>> turns being highly active and less active. Especially if you do occasional\n>> bulk loads all with the same customer_id for any given load, for example.\n>>\n>\n> For a bulk load you'd likely want to go with an empty partition w/o\n> indexes and build them later, after loading the tuples.\n>\n\nThat only works if the bulk load is starting from zero. If you are adding\na million rows to something that already has 100 million, you would\nprobably spend more time rebuilding the indexes than you saved by dropping\nthem. And of course to go with an empty partition, you have to be using\npartitioning of some kind to start with; and then you need to be futzing\naround creating/detaching and indexing and attaching. With hash\npartitioning, you might get much of the benefit with none of the futzing.\n\n\n> So while you might not have a permanently hot partition, you could have\n>> partitions which are hot in turn. 
Of course you could get the same benefit\n>> (and probably better) with list or range partitioning rather than hash, but\n>> then you have to maintain those lists or ranges when you add new customers.\n>>\n>\n> Why are LRU eviction from the shared buffers and OS disk cache not good\n> enough to handle this?\n>\n\nData density. If the rows are spread out randomly throughout the table,\nthe density of currently relevant tuples per MB of RAM is much lower than\nif they are in partitions which align with current relevance. Of course\nyou could CLUSTER the table on what would otherwise be the partition key,\nbut clustered tables don't stay clustered, while partitioned ones stay\npartitioned. Also, clustering the table wouldn't help with the relevant\ndata density in the indexes (other than the index being clustered on, or\nother ones highly correlated with that one). This can be particularly\nimportant for index maintenance and with HDD, as the OS disk cache is in my\nexperince pretty bad at deciding when to write dirty blocks which have been\nhanded to it, versus retain them in the hopes they will be re-dirtied soon,\nor have adjacent blocks dirtied and then combined into one write.\n\n\n>\n> This actually applies to any partitioning scheme: the hot dataset could be\n> recognized by these caching layers. Does it not happen in practice?\n>\n\nCaching only happens at the page level, not the tuple level. So if your\nhot tuples are interspersed with cold ones, you can get poor caching\neffectiveness.\n\nCheers,\n\nJeff\n\nOn Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin <[email protected]> wrote:On Thu, Jun 4, 2020 at 4:32 PM Jeff Janes <[email protected]> wrote:On Wed, Jun 3, 2020 at 7:55 AM Oleksandr Shulgin <[email protected]> wrote:With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.While hash partitioning doesn't appeal to me, I think this may be overly pessimistic.  It would not be all that unusual for your customers to take turns being highly active and less active.  Especially if you do occasional bulk loads all with the same customer_id for any given load, for example.For a bulk load you'd likely want to go with an empty partition w/o indexes and build them later, after loading the tuples.  That only works if the bulk load is starting from zero.  If you are adding a million rows to something that already has 100 million, you would probably spend more time rebuilding the indexes than you saved by dropping them.  And of course to go with an empty partition, you have to be using partitioning of some kind to start with; and then you need to be futzing around creating/detaching and indexing and attaching.  With hash partitioning, you might get much of the benefit with none of the futzing. So while you might not have a permanently hot partition, you could have partitions which are hot in turn.  Of course you could get the same benefit (and probably better) with list or range partitioning rather than hash, but then you have to maintain those lists or ranges when you add new customers.Why are LRU eviction from the shared buffers and OS disk cache not good enough to handle this?Data density.  
If the rows are spread out randomly throughout the table, the density of currently relevant tuples per MB of RAM is much lower than if they are in partitions which align with current relevance.  Of course you could CLUSTER the table on what would otherwise be the partition key, but clustered tables don't stay clustered, while partitioned ones stay partitioned.  Also, clustering the table wouldn't help with the relevant data density in the indexes (other than the index being clustered on, or other ones highly correlated with that one).  This can be particularly important for index maintenance and with HDD, as the OS disk cache is in my experince pretty bad at deciding when to write dirty blocks which have been handed to it, versus retain them in the hopes they will be re-dirtied soon, or have adjacent blocks dirtied and then combined into one write.   This actually applies to any partitioning scheme: the hot dataset could be recognized by these caching layers.  Does it not happen in practice?Caching only happens at the page level, not the tuple level.  So if your hot tuples are interspersed with cold ones, you can get poor caching effectiveness.Cheers,Jeff", "msg_date": "Fri, 5 Jun 2020 09:51:29 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Wed, Jun 3, 2020 at 4:55 AM Oleksandr Shulgin <\[email protected]> wrote:\n\n>\n> Do you also assign the partitions to different tablespaces as you've\n> hinted below or do you see performance improvement from partitioning\n> alone? How does that work? Does it give better results than using a RAID\n> to spread the disk IO, for example?\n>\n\nIn general you could find write throughput improvements from all three,\npartitioning, tablespacing, and disk striping. It depends on your problem.\n Hash partitioning is a common feature in other databases as well. The\nhash strategy works for many distributed access patterns.\n\n\n> Yes its' true you end up with more smaller tables than one big large one,\n>> but remember the indexes are (often) tree data structures. Smaller trees\n>> are faster than bigger trees. By making the indexes smaller they are\n>> faster. Since the planner can knows to only examine the specific index it\n>> needs, this ends up being a lot faster.\n>>\n>\n> That sounds logical, but can it be demonstrated? If the index(es) fit in\n> memory fully, it doesn't make a measurable difference, I guess?\n>\n\nWell lets take a step back here and look at the question, hash partitioning\nexists in Postgres, is it useful? While I appreciate the need to see a\nfact demonstrated, and generally avoiding argument by authority, it is true\nthat many of the very smartest database people in the world conceived of,\ndiscussed, implemented and documented this feature for us. It stands to\nreason that it is useful, or it wouldn't exist. So maybe this is more\nabout finding or needing better partitioning documentation.\n\n\n> With hash partitioning you are not expected, in general, to end up with a\n> small number of partitions being accessed more heavily than the rest. So\n> your indexes will also not fit into memory.\n>\n\nIndexes are not (usually) constant time structures, they take more time the\nbigger they get. So partitioned indexes will be smaller, quicker to insert\ninto, and quicker to vacuum, and also gain possible pruning advantages on\nquery when you split them up. 
If the planner can, knowing the key, exclude\nall but one partition, it won't even look at the other tables, so if you\nhash partition by primary key, you reduce the search space to 1/N\nimmediately.\n\nIndexes with high update activity also suffer from a problem called \"index\nbloat\" where spares \"holes\" get punched in the buckets of btree indexes\nfrom updates and delete deletes. These holes are minimized by vacuuming\nbut the bigger the index gets, the harder that process is to maintain.\nSmaller indexes suffer less from index bloat, and remedying the situation\nis easier because you can reindex partitions independently of each other.\nYour not just reducing the query load to an Nth, you're reducing the\nmaintenance load.\n\nI have the feeling that using a hash function to distribute rows simply\n> contradicts the basic assumption of when you would think of partitioning\n> your table at all: that is to make sure the most active part of the table\n> and indexes is small enough to be cached in memory.\n>\n\nI think you might be framing this with a specific data pattern in mind, not\nall data distributions have a \"most active\" or power law distribution of\ndata. For example i work with a lot of commercial airline position data\nthat services both real-time queries and ad-hoc analytical queries over\narbitrary airframe identifiers. There is no advantage trying to have a\n\"most active\" data strategy because all airframes in the air at any given\ntime are by definition most active. A medium sized drone may send out as\nmany pings as a jumbo jet in a given interval of time.\n\n\n-Michel\n\n\n>\n> Regards,\n> --\n> Alex\n>\n>\n\nOn Wed, Jun 3, 2020 at 4:55 AM Oleksandr Shulgin <[email protected]> wrote:Do you also assign the partitions to different tablespaces as you've hinted below or do you see performance improvement from partitioning alone?  How does that work?  Does it give better  results than using a RAID to spread the disk IO, for example?In general you could find write throughput improvements from all three, partitioning, tablespacing, and disk striping.  It depends on your problem.   Hash partitioning is a common feature in other databases as well. The hash strategy works for many distributed access patterns. Yes its' true you end up with more smaller tables than one big large one, but remember the indexes are (often) tree data structures.  Smaller trees are faster than bigger trees.  By making the indexes smaller they are faster.  Since the planner can knows to only examine the specific index it needs, this ends up being a lot faster.That sounds logical, but can it be demonstrated?  If the index(es) fit in memory fully, it doesn't make a measurable difference, I guess?Well lets take a step back here and look at the question, hash partitioning exists in Postgres, is it useful?  While I appreciate the need to see a fact demonstrated, and generally avoiding argument by authority, it is true that many of the very smartest database people in the world conceived of, discussed, implemented and documented this feature for us.   It stands to reason that it is useful, or it wouldn't exist.  So maybe this is more about finding or needing better partitioning documentation. With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.Indexes are not (usually) constant time structures, they take more time the bigger they get.  
So partitioned indexes will be smaller, quicker to insert into, and quicker to vacuum, and also gain possible pruning advantages on query when you split them up.  If the planner can, knowing the key, exclude all but one partition, it won't even look at the other tables, so if you hash partition by primary key, you reduce the search space to 1/N immediately.  Indexes with high update activity also suffer from a problem called \"index bloat\" where spares \"holes\" get punched in the buckets of btree indexes from updates and delete deletes.  These holes are minimized by vacuuming but the bigger the index gets, the harder that process is to maintain.  Smaller indexes suffer less from index bloat, and remedying the situation is easier because you can reindex partitions independently of each other.  Your not just reducing the query load to an Nth, you're reducing the maintenance load. I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.I think you might be framing this with a specific data pattern in mind, not all data distributions have a \"most active\" or power law distribution of data.  For example i work with a lot of commercial airline position data that services both real-time queries and ad-hoc analytical queries over arbitrary airframe identifiers.   There is no advantage trying to have a \"most active\" data strategy because all airframes in the air at any given time are by definition most active.   A medium sized drone may send out as many pings as a jumbo jet in a given interval of time.-Michel Regards,--Alex", "msg_date": "Sat, 6 Jun 2020 09:13:25 -0700", "msg_from": "Michel Pelletier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On 6/5/20 8:51 AM, Jeff Janes wrote:\n> On Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin \n> <[email protected] <mailto:[email protected]>> wrote:\n[snip]\n>\n> For a bulk load you'd likely want to go with an empty partition w/o\n> indexes and build them later, after loading the tuples.\n>\n>\n> That only works if the bulk load is starting from zero. If you are adding \n> a million rows to something that already has 100 million, you would \n> probably spend more time rebuilding the indexes than you saved by dropping \n> them.\n\nIt's too bad that Postgres doesn't have \"deferred index updates\" during bulk \n(but still transactional) loads, where the index nodes are updated /en \nmasse/ every \"commit count\" number of rows. That's *really useful* in this \nsituation, but I've only seen it in one legacy RDBMS.\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n On 6/5/20 8:51 AM, Jeff Janes wrote:\n\n\n\nOn Fri, Jun 5, 2020 at 6:12 AM Oleksandr Shulgin\n <[email protected]>\n wrote:\n\n\n\n [snip]\n\n\n\n\n\n\nFor a bulk load you'd likely want to go with an\n empty partition w/o indexes and build them later,\n after loading the tuples.  \n\n\n\n\n\nThat only works if the bulk load is starting from zero. 
\n If you are adding a million rows to something that\n already has 100 million, you would probably spend more time\n rebuilding the indexes than you saved by dropping them.\n\n\n\n\n\n It's too bad that Postgres doesn't have \"deferred index updates\"\n during bulk (but still transactional) loads, where the index nodes\n are updated en masse every \"commit count\" number of rows. \n That's really useful in this situation, but I've only seen\n it in one legacy RDBMS.\n\n-- \n Angular momentum makes the world go 'round.", "msg_date": "Sat, 6 Jun 2020 20:58:08 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "The article referenced below assumes a worst case scenario for \nbulk-loading with hash partitioned tables.  It assumes that the values \nbeing inserted are in strict ascending or descending order with no gaps \n(like a sequence number incrementing by 1), thereby ensuring every \npartition is hit in order before repeating the process.  If the values \nbeing inserted are not strictly sequential with no gaps, then the \nperformance is much better.  Obviously, what part of the tables and \nindexes are in memory has a lot to do with it as well.\n\nRegards,\nMichael Vitale\n\nImre Samu wrote on 6/5/2020 7:48 AM:\n> > \"Bulk loads ...\",\n>\n> As I see - There is an interesting bulkload benchmark:\n>\n> \"How Bulkload performance is affected by table partitioning in \n> PostgreSQL\" by Beena Emerson (Enterprisedb, December 4, 2019 )\n> /SUMMARY: This article covers how benchmark tests can be used to \n> demonstrate the effect of table partitioning on performance. Tests \n> using range- and hash-partitioned tables are compared and the reasons \n> for their different results are explained:\n>                  1. Range partitions\n>            2. Hash partitions\n>                  3. Combination graphs\n>                4. Explaining the behavior\n>                  5. Conclusion/\n> /\n> /\n> /\"For the hash-partitioned table, the first value is inserted in the \n> first partition, the second number in the second partition and so on \n> till all the partitions are reached before it loops back to the first \n> partition again until all the data is exhausted. Thus it exhibits the \n> worst-case scenario where the partition is repeatedly switched for \n> every value inserted. As a result, the number of times the partition \n> is switched in a range-partitioned table is equal to the number of \n> partitions, while in a hash-partitioned table, the number of times the \n> partition has switched is equal to the amount of data being inserted. \n> This causes the massive difference in timing for the two partition \n> types.\"/\n>\n> https://www.enterprisedb.com/postgres-tutorials/how-bulkload-performance-affected-table-partitioning-postgresql\n>\n> Regards,\n>  Imre\n>\n\n\n\nThe article referenced below assumes a worst\n case scenario for bulk-loading with hash partitioned tables.  It \nassumes that the values being inserted are in strict ascending or \ndescending order with no gaps (like a sequence number incrementing by \n1), thereby ensuring every partition is hit in order before repeating \nthe process.  If the values being inserted are not strictly sequential \nwith no gaps, then the performance is much better.  
Obviously, what part\n of the tables and indexes are in memory has a lot to do with it as \nwell.\n\nRegards,\nMichael Vitale\n\nImre Samu wrote on 6/5/2020 7:48 AM:\n\n\n> \"Bulk loads ...\",As I see - \nThere is an interesting bulkload benchmark:    \"How\n Bulkload performance is affected by table partitioning in PostgreSQL\" \nby Beena Emerson (Enterprisedb, December 4, 2019 )SUMMARY:\n This article covers how benchmark tests can be used to demonstrate the \neffect of table partitioning on performance. Tests using range- and \nhash-partitioned tables are compared and the reasons for their different\n results are explained:                  1. Range partitions     \n            2. Hash partitions                 3. Combination graphs \n                4. Explaining the behavior                 5. \nConclusion\"For the \nhash-partitioned table, the first value is inserted in the first \npartition, the second number in the second partition and so on till all \nthe partitions are reached before it loops back to the first partition \nagain until all the data is exhausted. Thus it exhibits the worst-case \nscenario where the partition is repeatedly switched for every value \ninserted. As a result, the number of times the partition is switched in a\n range-partitioned table is equal to the number of partitions, while in a\n hash-partitioned table, the number of times the partition has switched \nis equal to the amount of data being inserted. This causes the massive \ndifference in timing for the two partition types.\"https://www.enterprisedb.com/postgres-tutorials/how-bulkload-performance-affected-table-partitioning-postgresqlRegards, Imre", "msg_date": "Sun, 7 Jun 2020 07:41:28 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Sun, 7 Jun 2020 at 23:41, MichaelDBA <[email protected]> wrote:\n> The article referenced below assumes a worst case scenario for bulk-loading with hash partitioned tables. It assumes that the values being inserted are in strict ascending or descending order with no gaps (like a sequence number incrementing by 1), thereby ensuring every partition is hit in order before repeating the process. If the values being inserted are not strictly sequential with no gaps, then the performance is much better. Obviously, what part of the tables and indexes are in memory has a lot to do with it as well.\n\nIn PostgreSQL 12, COPY was modified to support bulk-inserts for\npartitioned tables. This did speed up many scenarios. Internally, how\nthis works is that we maintain a series of multi insert buffers, one\nper partition. We generally only flush those buffers to the table when\nthe buffer for the partition fills. However, there is a sort of\nsanity limit [1] on the number of multi insert buffers we maintain at\nonce and currently, that is 32. Technically we could increase that\nlimit, but there would still need to be a limit. Unfortunately, for\nthis particular case, since we're most likely touching between 199-799\nother partitions before hitting the first one again, that will mean\nthat we really don't get any multi-inserts, which is likely the reason\nwhy the performance is worse for hash partitioning.\n\nWith PG12 and for this particular case, you're likely to see COPY\nperformance drop quite drastically when going from 32 to 33\npartitions. 
The code was more designed for hitting partitions more\nrandomly rather than in this sort-of round-robin way that we're likely\nto get from hash partitioning on a serial column.\n\nDavid\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/commands/copy.c#L2569\n\n\n", "msg_date": "Mon, 8 Jun 2020 09:23:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Sat, Jun 6, 2020 at 6:14 PM Michel Pelletier <[email protected]>\nwrote:\n\n>\n> Well lets take a step back here and look at the question, hash\n> partitioning exists in Postgres, is it useful? While I appreciate the need\n> to see a fact demonstrated, and generally avoiding argument by authority,\n> it is true that many of the very smartest database people in the world\n> conceived of, discussed, implemented and documented this feature for us.\n> It stands to reason that it is useful, or it wouldn't exist. So maybe this\n> is more about finding or needing better partitioning documentation.\n>\n\nFair point.\n\nI've found the original commit adding this feature in version 11:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651e\nIt says:\n\n\"Hash partitioning is useful when you want to partition a growing data\nset evenly. This can be useful to keep table sizes reasonable, which\nmakes maintenance operations such as VACUUM faster, or to enable\npartition-wise join.\"\n\nIt also includes a link to discussion, though that starts in the middle of\na long thread.\nThe original thread is here:\nhttps://www.postgresql.org/message-id/flat/20170228233313.fc14d8b6.nagata%40sraoss.co.jp\n\nHowever, these threads only argue about implementation details and it's not\neasy to find a discussion of motivation for this particular partitioning\nscheme support.\nI guess it was quite obvious to the participants at that point already.\n\nWith hash partitioning you are not expected, in general, to end up with a\n>> small number of partitions being accessed more heavily than the rest. So\n>> your indexes will also not fit into memory.\n>>\n>\n> Indexes are not (usually) constant time structures, they take more time\n> the bigger they get. So partitioned indexes will be smaller, quicker to\n> insert into, and quicker to vacuum, and also gain possible pruning\n> advantages on query when you split them up. If the planner can, knowing\n> the key, exclude all but one partition, it won't even look at the other\n> tables, so if you hash partition by primary key, you reduce the search\n> space to 1/N immediately.\n>\n> Indexes with high update activity also suffer from a problem called \"index\n> bloat\" where spares \"holes\" get punched in the buckets of btree indexes\n> from updates and delete deletes. These holes are minimized by vacuuming\n> but the bigger the index gets, the harder that process is to maintain.\n> Smaller indexes suffer less from index bloat, and remedying the situation\n> is easier because you can reindex partitions independently of each other.\n> Your not just reducing the query load to an Nth, you're reducing the\n> maintenance load.\n>\n\nThanks for taking your time to explain it in detail. 
Though I do not tend\nto believe the insert/scan performance benefit is measurable without trying\nit, I do see the benefits for maintenance.\n\nI have the feeling that using a hash function to distribute rows simply\n>> contradicts the basic assumption of when you would think of partitioning\n>> your table at all: that is to make sure the most active part of the table\n>> and indexes is small enough to be cached in memory.\n>>\n>\n> I think you might be framing this with a specific data pattern in mind,\n> not all data distributions have a \"most active\" or power law distribution\n> of data.\n>\n\nI'm just referring to the first bullet-point in the docs:\n\n\"Query performance can be improved dramatically in certain situations,\nparticularly when most of the heavily accessed rows of the table are in a\nsingle partition or a small number of partitions. The partitioning\nsubstitutes for leading columns of indexes, reducing index size and making\nit more likely that the heavily-used parts of the indexes fit in memory.\"\n\nI think it does not apply to hash partitioning in the general case.\n\n--\nAlex\n\nOn Sat, Jun 6, 2020 at 6:14 PM Michel Pelletier <[email protected]> wrote:Well lets take a step back here and look at the question, hash partitioning exists in Postgres, is it useful?  While I appreciate the need to see a fact demonstrated, and generally avoiding argument by authority, it is true that many of the very smartest database people in the world conceived of, discussed, implemented and documented this feature for us.   It stands to reason that it is useful, or it wouldn't exist.  So maybe this is more about finding or needing better partitioning documentation. Fair point.I've found the original commit adding this feature in version 11: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651eIt says:\"Hash partitioning is useful when you want to partition a growing dataset evenly.  This can be useful to keep table sizes reasonable, whichmakes maintenance operations such as VACUUM faster, or to enablepartition-wise join.\"It also includes a link to discussion, though that starts in the middle of a long thread.The original thread is here: https://www.postgresql.org/message-id/flat/20170228233313.fc14d8b6.nagata%40sraoss.co.jpHowever, these threads only argue about implementation details and it's not easy to find a discussion of motivation for this particular partitioning scheme support.I guess it was quite obvious to the participants at that point already.With hash partitioning you are not expected, in general, to end up with a small number of partitions being accessed more heavily than the rest.  So your indexes will also not fit into memory.Indexes are not (usually) constant time structures, they take more time the bigger they get.  So partitioned indexes will be smaller, quicker to insert into, and quicker to vacuum, and also gain possible pruning advantages on query when you split them up.  If the planner can, knowing the key, exclude all but one partition, it won't even look at the other tables, so if you hash partition by primary key, you reduce the search space to 1/N immediately.  Indexes with high update activity also suffer from a problem called \"index bloat\" where spares \"holes\" get punched in the buckets of btree indexes from updates and delete deletes.  These holes are minimized by vacuuming but the bigger the index gets, the harder that process is to maintain.  
Smaller indexes suffer less from index bloat, and remedying the situation is easier because you can reindex partitions independently of each other.  Your not just reducing the query load to an Nth, you're reducing the maintenance load.Thanks for taking your time to explain it in detail.  Though I do not tend to believe the insert/scan performance benefit is measurable without trying it, I do see the benefits for maintenance.I have the feeling that using a hash function to distribute rows simply contradicts the basic assumption of when you would think of partitioning your table at all: that is to make sure the most active part of the table and indexes is small enough to be cached in memory.I think you might be framing this with a specific data pattern in mind, not all data distributions have a \"most active\" or power law distribution of data.I'm just referring to the first bullet-point in the docs:\"Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. The partitioning substitutes for leading columns of indexes, reducing index size and making it more likely that the heavily-used parts of the indexes fit in memory.\"I think it does not apply to hash partitioning in the general case.--Alex", "msg_date": "Mon, 8 Jun 2020 10:40:15 +0200", "msg_from": "Oleksandr Shulgin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "Wow! That is good to know!\n\nSent from my iPad\n\n> On Jun 7, 2020, at 5:23 PM, David Rowley <[email protected]> wrote:\n> \n>> On Sun, 7 Jun 2020 at 23:41, MichaelDBA <[email protected]> wrote:\n>> The article referenced below assumes a worst case scenario for bulk-loading with hash partitioned tables. It assumes that the values being inserted are in strict ascending or descending order with no gaps (like a sequence number incrementing by 1), thereby ensuring every partition is hit in order before repeating the process. If the values being inserted are not strictly sequential with no gaps, then the performance is much better. Obviously, what part of the tables and indexes are in memory has a lot to do with it as well.\n> \n> In PostgreSQL 12, COPY was modified to support bulk-inserts for\n> partitioned tables. This did speed up many scenarios. Internally, how\n> this works is that we maintain a series of multi insert buffers, one\n> per partition. We generally only flush those buffers to the table when\n> the buffer for the partition fills. However, there is a sort of\n> sanity limit [1] on the number of multi insert buffers we maintain at\n> once and currently, that is 32. Technically we could increase that\n> limit, but there would still need to be a limit. Unfortunately, for\n> this particular case, since we're most likely touching between 199-799\n> other partitions before hitting the first one again, that will mean\n> that we really don't get any multi-inserts, which is likely the reason\n> why the performance is worse for hash partitioning.\n> \n> With PG12 and for this particular case, you're likely to see COPY\n> performance drop quite drastically when going from 32 to 33\n> partitions. 
The code was more designed for hitting partitions more\n> randomly rather than in this sort-of round-robin way that we're likely\n> to get from hash partitioning on a serial column.\n> \n> David\n> \n> [1] https://github.com/postgres/postgres/blob/master/src/backend/commands/copy.c#L2569\n\n\n\n", "msg_date": "Mon, 8 Jun 2020 05:50:35 -0400", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On 6/8/20 3:40 AM, Oleksandr Shulgin wrote:\n[snip]\n> I've found the original commit adding this feature in version 11: \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651e\n> It says:\n>\n> \"Hash partitioning is useful when you want to partition a growing data\n> set evenly.  This can be useful to keep table sizes reasonable, which\n> makes maintenance operations such as VACUUM faster, or to enable\n> partition-wise join.\"\n\nHow does hashed (meaning \"randomly?) distribution of records make \npartition-wise joins more efficient?\n\nOr -- since I interpret that as having to do with \"locality of data\" -- am I \nmisunderstanding the meaning of \"partition-wise joins\"?\n\n-- \nAngular momentum makes the world go 'round.\n\n\n\n\n\n\n On 6/8/20 3:40 AM, Oleksandr Shulgin wrote:\n [snip]\n\n\n\nI've found the original commit adding this feature in\n version 11: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651e\nIt says:\n\n\n\"Hash partitioning is useful when you want to partition a\n growing data\n set evenly.  This can be useful to keep table sizes\n reasonable, which\n makes maintenance operations such as VACUUM faster, or to\n enable\n partition-wise join.\"\n\n\n\n\n\n How does hashed (meaning \"randomly?) distribution of records make\n partition-wise joins more efficient?\n\n Or -- since I interpret that as having to do with \"locality of data\"\n -- am I misunderstanding the meaning of \"partition-wise joins\"?\n\n-- \n Angular momentum makes the world go 'round.", "msg_date": "Mon, 8 Jun 2020 08:07:04 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" }, { "msg_contents": "On Tue, 9 Jun 2020 at 01:07, Ron <[email protected]> wrote:\n>\n> On 6/8/20 3:40 AM, Oleksandr Shulgin wrote:\n> [snip]\n>\n> I've found the original commit adding this feature in version 11: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651e\n> It says:\n>\n> \"Hash partitioning is useful when you want to partition a growing data\n> set evenly. This can be useful to keep table sizes reasonable, which\n> makes maintenance operations such as VACUUM faster, or to enable\n> partition-wise join.\"\n>\n>\n> How does hashed (meaning \"randomly?) distribution of records make partition-wise joins more efficient?\n\nHash partitioning certainly does not mean putting the tuple in some\nrandom partition. It means putting the tuple in the partition with the\ncorrect remainder value after dividing the hash value by the largest\npartition modulus.\n\n> Or -- since I interpret that as having to do with \"locality of data\" -- am I misunderstanding the meaning of \"partition-wise joins\"?\n\nIf it was not a partitioned table before then partition-wise joins\nwouldn't be possible. Having partition-wise joins could make joining\ntwo identically partitioned tables faster. 
We need only look in the\ncorresponding partition on the other side of the join for join\npartners for each tuple. For hash joins, hash tables can be smaller,\nwhich can mean not having to batch, and possibly having the hash table\nfit better into a CPU cache. For merge joins and sorts, having the data\npartially pre-sorted in chunks means fewer operations for qsort, which\ncan result in speedups.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Jun 2020 09:25:12 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When to use PARTITION BY HASH?" } ]
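For reference, a minimal sketch of the hash-partitioned setup discussed in the thread above. The table name, column, and modulus are hypothetical examples (not taken from any poster's schema), and note that the partition-wise planner features are off by default in PostgreSQL 11+ and have to be enabled explicitly:

CREATE TABLE events (
    id      bigint NOT NULL,
    payload text
) PARTITION BY HASH (id);

-- one partition per remainder of the hashed key modulo 4
CREATE TABLE events_p0 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE events_p2 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE events_p3 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 3);

-- partition-wise join and aggregate are not used unless switched on
SET enable_partitionwise_join = on;
SET enable_partitionwise_aggregate = on;

With two tables partitioned on the same key and modulus, the planner can then join matching partitions pairwise instead of joining the full tables.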
[ { "msg_contents": "Hi all,\nI’ve been experimenting with some performance tuning on a particular query, and I observed a result that I don’t understand. \n\nI’ve been setting max_parallel_workers_per_gather to values the range 1-6 and then running EXPLAIN ANALYZE to see how much benefit we get from more parallelization. My data is organized by year, so the year is a parameter in the query’s WHERE clause.\n\nFor my 2018 data, Postgres launches as many workers as max_parallel_workers_per_gather permits, and the execution time decreases nicely, from 280 seconds with 1 worker all the way down to 141s with 6 workers. So far, so good.\n\nWhen I run the same query for our 2022 data, I get the same behavior (improvement) for max_parallel_workers_per_gather values of 1-4. But with max_parallel_workers_per_gather set to 5 or 6, Postgres only uses 1 worker, and the execution time increases dramatically, even worse than when I deliberately limit the number of workers to 1 —\n\n- max_parallel_workers_per_gather=1, runtime = 1061s\n- max_parallel_workers_per_gather=2, runtime = 770s\n- max_parallel_workers_per_gather=3, runtime = 637s\n- max_parallel_workers_per_gather=4, runtime = 573s\n- max_parallel_workers_per_gather=5, runtime = 1468s\n- max_parallel_workers_per_gather=6, runtime = 1469s\n\nOur 2022 data set is several times larger than our 2018 data, so I suspect some resource is getting exhausted, but I’m not sure what. So far, this result has been 100% re-creatable. I’m on a dedicated test server with 16 virtual CPUs and 128Gb RAM; no one else is competing with me for Postgres processes. max_worker_processes and max_parallel_workers are both set to 12.\n\nCan anyone help me understand why this happens, or where I might look for clues? \n\nThanks,\nPhilip\n\n", "msg_date": "Wed, 3 Jun 2020 16:04:13 -0400", "msg_from": "Philip Semanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "increased max_parallel_workers_per_gather results in fewer workers?" }, { "msg_contents": "On Wed, Jun 03, 2020 at 04:04:13PM -0400, Philip Semanchuk wrote:\n> Can anyone help me understand why this happens, or where I might look for clues? \n\nWhat version postgres ?\n\nCan you reproduce if you do:\nALTER SYSTEM SET max_parallel_workers_per_gather=0; SELECT pg_reload_conf();\n.. and then within the session do: SET max_parallel_workers_per_gather=12;\n\nI guess you should show an explain analyze, specifically \"Workers\nPlanned/Launched\", maybe by linking to explain.depesz.com\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Jun 2020 16:15:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "\n\n> On Jun 3, 2020, at 5:15 PM, Justin Pryzby <[email protected]> wrote:\n> \n> On Wed, Jun 03, 2020 at 04:04:13PM -0400, Philip Semanchuk wrote:\n>> Can anyone help me understand why this happens, or where I might look for clues? 
\n> \n> What version postgres ?\n\nSorry, I should have posted that in my initial email.\n\nselect version();\n+-----------------------------------------------------------------------------+\n| version |\n|-----------------------------------------------------------------------------|\n| PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit |\n+-----------------------------------------------------------------------------+\n\nThis is AWS’ version of Postgres 11.6 (“Aurora”) which of course might make a difference.\n\n\n> Can you reproduce if you do:\n> ALTER SYSTEM SET max_parallel_workers_per_gather=0; SELECT pg_reload_conf();\n> .. and then within the session do: SET max_parallel_workers_per_gather=12;\n\nUnfortunately under Aurora I’m not superuser so I can’t run ALTER SYSTEM, but I can change the config via AWS’ config interface, so I set max_parallel_workers_per_gather=0 there.\n\nshow max_parallel_workers_per_gather\n+-----------------------------------+\n| max_parallel_workers_per_gather |\n|-----------------------------------|\n| 0 |\n+-----------------------------------+\nSHOW\nTime: 0.034s\npostgres@philip-2020-05-19-cluster:wylan>\nSET max_parallel_workers_per_gather=12\nSET\nTime: 0.028s\npostgres@philip-2020-05-19-cluster:wylan>\nshow max_parallel_workers_per_gather\n+-----------------------------------+\n| max_parallel_workers_per_gather |\n|-----------------------------------|\n| 12 |\n+-----------------------------------+\nSHOW\n\nI then ran the EXPLAIN ANALYZE and got the same slow runtime (1473s) and 1 worker in the EXPLAIN ANALYZE output. \n\n\n> I guess you should show an explain analyze, specifically \"Workers\n> Planned/Launched\", maybe by linking to explain.depesz.com\n\nOut of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN ANALYZE output:\n\n Workers Planned: 1\n Workers Launched: 1\n\nFWIW, the Planning Time reported in EXPLAIN ANALYZE output doesn’t vary significantly, only from 411-443ms, and the variation within that range correlates only very weakly with max_parallel_workers_per_gather.\n\n\nthank you \nPhilip\n\n\n\n\n", "msg_date": "Wed, 3 Jun 2020 18:23:57 -0400", "msg_from": "Philip Semanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "On Wed, Jun 03, 2020 at 06:23:57PM -0400, Philip Semanchuk wrote:\n> > On Jun 3, 2020, at 5:15 PM, Justin Pryzby <[email protected]> wrote:\n> > What version postgres ?\n> \n> This is AWS’ version of Postgres 11.6 (“Aurora”) which of course might make a difference.\n\n> > I guess you should show an explain analyze, specifically \"Workers\n> > Planned/Launched\", maybe by linking to explain.depesz.com\n> \n> Out of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN ANALYZE output:\n> \n> Workers Planned: 1\n> Workers Launched: 1\n\nAre you referring to a parallel scan/aggregate/hash/??\n\nAre you able to show a plan for a toy query like SELECT count(col) FROM tbl ,\npreferably including a CREATE TABLE tbl AS... 
; VACUUM ANALYZE tbl;\n\nAre you able to reproduce with an unpatched postgres ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 3 Jun 2020 17:36:41 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "Hi Philip,\r\n\r\n> On 4. Jun 2020, at 00:23, Philip Semanchuk <[email protected]> wrote:\r\n> \r\n>> I guess you should show an explain analyze, specifically \"Workers\r\n>> Planned/Launched\", maybe by linking to explain.depesz.com\r\n> \r\n> Out of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN ANALYZE output:\r\n> \r\n> Workers Planned: 1\r\n> Workers Launched: 1\r\n\r\nCan you please verify the amount of max_parallel_workers and max_worker_processes? It should be roughly max_worker_processes > max_parallel_workers > max_parallel_workers_per_gather, for instance:\r\n\r\nmax_worker_processes = 24\r\nmax_parallel_workers = 18\r\nmax_parallel_workers_per_gather = 6\r\n\r\nAlso, there are more configuration settings related to parallel queries you might want to look into. Most notably:\r\n\r\nparallel_setup_cost\r\nparallel_tuple_cost\r\nmin_parallel_table_scan_size\r\n\r\nEspecially the last one is a typical dealbreaker, you can try to set it to 0 for the beginning. Good starters for the others are 500 and 0.1 respectively.\r\n\r\n> FWIW, the Planning Time reported in EXPLAIN ANALYZE output doesn’t vary significantly, only from 411-443ms, and the variation within that range correlates only very weakly with max_parallel_workers_per_gather.\r\n\r\n\r\nIt can happen, that more parallelism does not help the query but slows it down beyond a specific amount of parallel workers. You can see this in EXPLAIN when there is for instance a BITMAP HEAP INDEX SCAN or similar involved.\r\n\r\nCheers,\r\nSebastian", "msg_date": "Thu, 4 Jun 2020 06:28:34 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "Hi,\n on top of the settings that Sebastian suggested, you can also try disabling the participation of the leader (i.e. the main backend process for your connection) in the distribution of the parallel workload:\n\nSET parallel_leader_participation TO false\n\n Depending on your workload the leader could be saturated if it has to do a share of the workload and aggregate the results of all the workers.\n\nCheers\nLuis\n\n\n\n\n\n\n\n\nHi,\n\n   on top of the settings that Sebastian suggested, you can also try disabling the participation of the leader (i.e. the main backend process for your connection) in the distribution of the parallel workload:\n\n\n\n\nSET parallel_leader_participation TO false\n\n\n\n\n  Depending on your workload the leader could be saturated if it has to do a share of the workload and aggregate the results of all the workers.\n\n\n\n\nCheers\nLuis", "msg_date": "Thu, 4 Jun 2020 06:41:27 +0000", "msg_from": "Luis Carril <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" 
}, { "msg_contents": "On Wed, Jun 03, 2020 at 06:23:57PM -0400, Philip Semanchuk wrote:\n>\n> ...\n>\n>I then ran the EXPLAIN ANALYZE and got the same slow runtime (1473s) and 1 worker in the EXPLAIN ANALYZE output.\n>\n>\n>> I guess you should show an explain analyze, specifically \"Workers\n>> Planned/Launched\", maybe by linking to explain.depesz.com\n>\n>Out of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN ANALYZE output:\n>\n> Workers Planned: 1\n> Workers Launched: 1\n>\n>FWIW, the Planning Time reported in EXPLAIN ANALYZE output doesn’t vary significantly, only from 411-443ms, and the variation within that range correlates only very weakly with max_parallel_workers_per_gather.\n>\n\nWell, that policy is stupid and it makes it unnecessarily harder to\nanswer your questions. We really need to see the plans, it's much harder\nto give you any advices without it. We can only speculate about what's\ngoing on.\n\nIt's understandable there may be sensitive information in the plan\n(parameter values, ...) but that can be sanitized before posting.\n\nWe need to see plans for the good and bad case, so that we can compare\nthem, look at the plan general shapes, costs, etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n", "msg_date": "Thu, 4 Jun 2020 10:56:04 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "On Thu, Jun 4, 2020 at 12:24 AM Philip Semanchuk <\[email protected]> wrote:\n\n>\n>\n> > On Jun 3, 2020, at 5:15 PM, Justin Pryzby <[email protected]> wrote:\n> >\n> > On Wed, Jun 03, 2020 at 04:04:13PM -0400, Philip Semanchuk wrote:\n> >> Can anyone help me understand why this happens, or where I might look\n> for clues?\n> >\n> > What version postgres ?\n>\n> Sorry, I should have posted that in my initial email.\n>\n> select version();\n>\n> +-----------------------------------------------------------------------------+\n> | version\n> |\n>\n> |-----------------------------------------------------------------------------|\n> | PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3,\n> 64-bit |\n>\n> +-----------------------------------------------------------------------------+\n>\n> This is AWS’ version of Postgres 11.6 (“Aurora”) which of course might\n> make a difference.\n>\n\nYes, it definitely makes a difference. For Aurora questions you are more\nlikely to get good answers in the AWS forums rather than the PostgreSQL\nones. It's different from PostgreSQL in too many ways, and those\ndifferences are not fully known outside of AWS.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jun 4, 2020 at 12:24 AM Philip Semanchuk <[email protected]> wrote:\n\n> On Jun 3, 2020, at 5:15 PM, Justin Pryzby <[email protected]> wrote:\n> \n> On Wed, Jun 03, 2020 at 04:04:13PM -0400, Philip Semanchuk wrote:\n>> Can anyone help me understand why this happens, or where I might look for clues? 
\n> \n> What version postgres ?\n\nSorry, I should have posted that in my initial email.\n\nselect version();\n+-----------------------------------------------------------------------------+\n| version                                                                     |\n|-----------------------------------------------------------------------------|\n| PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit |\n+-----------------------------------------------------------------------------+\n\nThis is AWS’ version of Postgres 11.6 (“Aurora”) which of course might make a difference.Yes, it definitely makes a difference. For Aurora questions you are more likely to get good answers in the AWS forums rather than the PostgreSQL ones.  It's different from PostgreSQL in too many ways, and those differences are not fully known outside of AWS.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 4 Jun 2020 11:30:54 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "\n\n> On Jun 4, 2020, at 2:28 AM, Sebastian Dressler <[email protected]> wrote:\n> \n> Hi Philip,\n> \n>> On 4. Jun 2020, at 00:23, Philip Semanchuk <[email protected]> wrote:\n>> \n>>> I guess you should show an explain analyze, specifically \"Workers\n>>> Planned/Launched\", maybe by linking to explain.depesz.com\n>> \n>> Out of an abundance of caution, our company has a policy of not pasting our plans to public servers. However, I can confirm that when I set max_parallel_workers_per_gather > 4 and the runtime increases, this is what’s in the EXPLAIN ANALYZE output:\n>> \n>> Workers Planned: 1\n>> Workers Launched: 1\n> \n> Can you please verify the amount of max_parallel_workers and max_worker_processes? It should be roughly max_worker_processes > max_parallel_workers > max_parallel_workers_per_gather, for instance:\n> \n> max_worker_processes = 24\n> max_parallel_workers = 18\n> max_parallel_workers_per_gather = 6\n\n\nI changed my settings to these exact values and can still recreate the situation where I unexpectedly get a single worker query.\n\n\n> Also, there are more configuration settings related to parallel queries you might want to look into. Most notably:\n> \n> parallel_setup_cost\n> parallel_tuple_cost\n> min_parallel_table_scan_size\n> \n> Especially the last one is a typical dealbreaker, you can try to set it to 0 for the beginning. Good starters for the others are 500 and 0.1 respectively.\n\nAha! By setting min_parallel_table_scan_size=0, Postgres uses the 6 workers I expect, and the execution time decreases nicely. \n\nI posted a clumsily-anonymized plan for the “bad” scenario here --\nhttps://gist.github.com/osvenskan/ea00aa71abaa9697ade0ab7c1f3b705b\n\nThere are 3 sort nodes in the plan. When I get the “bad” behavior, the sorts have one worker, when I get the good behavior, they have multiple workers (e.g. 6).\n\nThis brings up a couple of questions —\n1) I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\n\n max_workers = log3(table size / min_parallel_table_scan_size)\n\nDoes that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\n\n2) There are 9 tables in this query ranging in size from 72Kb to 17Gb. 
Does Postgres decide on a table-by-table basis to allocate multiple workers, or…?\n\nThank you so much for the suggestion, I feel un-stuck now that I have an idea to experiment with.\n\nCheers\nPhilip\n\n\n\n", "msg_date": "Thu, 4 Jun 2020 12:41:35 -0400", "msg_from": "Philip Semanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "Hi Philip,\r\n\r\nOn 4. Jun 2020, at 18:41, Philip Semanchuk <[email protected]<mailto:[email protected]>> wrote:\r\n[...]\r\n\r\nAlso, there are more configuration settings related to parallel queries you might want to look into. Most notably:\r\n\r\nparallel_setup_cost\r\nparallel_tuple_cost\r\nmin_parallel_table_scan_size\r\n\r\nEspecially the last one is a typical dealbreaker, you can try to set it to 0 for the beginning. Good starters for the others are 500 and 0.1 respectively.\r\n\r\nAha! By setting min_parallel_table_scan_size=0, Postgres uses the 6 workers I expect, and the execution time decreases nicely.\r\n\r\nI posted a clumsily-anonymized plan for the “bad” scenario here --\r\nhttps://gist.github.com/osvenskan/ea00aa71abaa9697ade0ab7c1f3b705b\r\n\r\nThere are 3 sort nodes in the plan. When I get the “bad” behavior, the sorts have one worker, when I get the good behavior, they have multiple workers (e.g. 6).\r\n\r\nI also think, what Luis pointed out earlier might be a good option for you, i.e. setting\r\n\r\n parallel_leader_participation = off;\r\n\r\nAnd by the way, this 1 worker turns actually into 2 workers in total with leader participation enabled.\r\n\r\nThis brings up a couple of questions —\r\n1) I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\r\n\r\n max_workers = log3(table size / min_parallel_table_scan_size)\r\n\r\nDoes that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\r\n\r\n\"table size\" is the number of PSQL pages, i.e. relation-size / 8 kB. This comes from statistics.\r\n\r\n2) There are 9 tables in this query ranging in size from 72Kb to 17Gb. Does Postgres decide on a table-by-table basis to allocate multiple workers, or…?\r\n\r\nAFAIK, it will do it per-table initially but then the final result depends on the chosen gather node.\r\n\r\nThank you so much for the suggestion, I feel un-stuck now that I have an idea to experiment with.\r\n\r\nYou are welcome, we are actually about to publish a blog post which has some more suggestions for parallelism in.\r\n\r\nCheers,\r\nSebastian\r\n\n\n\n\n\n\r\nHi Philip,\n\n\nOn 4. Jun 2020, at 18:41, Philip Semanchuk <[email protected]> wrote:\r\n[...]\n\n\r\nAlso, there are more configuration settings related to parallel queries you might want to look into. Most notably:\n\r\nparallel_setup_cost\r\nparallel_tuple_cost\r\nmin_parallel_table_scan_size\n\r\nEspecially the last one is a typical dealbreaker, you can try to set it to 0 for the beginning. Good starters for the others are 500 and 0.1 respectively.\n\n\nAha!\r\n By setting min_parallel_table_scan_size=0, Postgres uses the 6 workers I expect, and the execution time decreases nicely. \n\nI\r\n posted a clumsily-anonymized plan for the “bad” scenario here --\nhttps://gist.github.com/osvenskan/ea00aa71abaa9697ade0ab7c1f3b705b\n\nThere\r\n are 3 sort nodes in the plan. 
When I get the “bad” behavior, the sorts have one worker, when I get the good behavior, they have multiple workers (e.g. 6).\n\n\n\n\nI also think, what Luis pointed out earlier might be a good option for you, i.e. setting\n\n\n    parallel_leader_participation = off;\n\n\nAnd by the way, this 1 worker turns actually into 2 workers in total with leader participation enabled.\n\n\nThis\r\n brings up a couple of questions —\n1)\r\n I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\n\n  max_workers\r\n = log3(table size / min_parallel_table_scan_size)\n\nDoes\r\n that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\n\n\n\n\n\"table size\" is the number of PSQL pages, i.e. relation-size / 8 kB. This comes from statistics.\n\n\n2)\r\n There are 9 tables in this query ranging in size from 72Kb to 17Gb. Does Postgres decide on a table-by-table basis to allocate multiple workers, or…?\n\n\n\n\r\nAFAIK, it will do it per-table initially but then the final result depends on the chosen gather node.\n\n\nThank\r\n you so much for the suggestion, I feel un-stuck now that I have an idea to experiment with.\n\n\n\n\nYou are welcome, we are actually about to publish a blog post which has some more suggestions for parallelism in.\n\n\nCheers,\nSebastian", "msg_date": "Thu, 4 Jun 2020 17:45:53 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "\n\n> On Jun 4, 2020, at 1:45 PM, Sebastian Dressler <[email protected]> wrote:\n> \n> Hi Philip,\n> \n>> On 4. Jun 2020, at 18:41, Philip Semanchuk <[email protected]> wrote:\n>> [...]\n>> \n>>> Also, there are more configuration settings related to parallel queries you might want to look into. Most notably:\n>>> \n>>> parallel_setup_cost\n>>> parallel_tuple_cost\n>>> min_parallel_table_scan_size\n>>> \n>>> Especially the last one is a typical dealbreaker, you can try to set it to 0 for the beginning. Good starters for the others are 500 and 0.1 respectively.\n>> \n>> Aha! By setting min_parallel_table_scan_size=0, Postgres uses the 6 workers I expect, and the execution time decreases nicely. \n>> \n>> I posted a clumsily-anonymized plan for the “bad” scenario here --\n>> https://gist.github.com/osvenskan/ea00aa71abaa9697ade0ab7c1f3b705b\n>> \n>> There are 3 sort nodes in the plan. When I get the “bad” behavior, the sorts have one worker, when I get the good behavior, they have multiple workers (e.g. 6).\n> \n> I also think, what Luis pointed out earlier might be a good option for you, i.e. setting\n> \n> parallel_leader_participation = off;\n> \n> And by the way, this 1 worker turns actually into 2 workers in total with leader participation enabled.\n\nI’ll try that out, thanks.\n\n\n> \n>> This brings up a couple of questions —\n>> 1) I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\n>> \n>> max_workers = log3(table size / min_parallel_table_scan_size)\n>> \n>> Does that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\n> \n> \"table size\" is the number of PSQL pages, i.e. relation-size / 8 kB. This comes from statistics.\n\nOK, so it sounds like the planner does *not* use the values in pg_stats when planning workers, true? 
\n\nI’m still trying to understand one thing I’ve observed. I can run the query that produced the plan in the gist I linked to above with max_parallel_workers_per_gather=6 and the year param = 2018, and I get 6 workers. When I set the year param=2022 I get only one worker. Same tables, same query, different parameter. That suggests to me that the planner is using pg_stats when allocating workers, but I can imagine there might be other things going on that I don’t understand. (I haven’t ruled out that this might be an AWS-specific quirk, either.)\n\n\nCheers\nPhilip\n\n\n\n", "msg_date": "Thu, 4 Jun 2020 14:37:40 -0400", "msg_from": "Philip Semanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "Hi Philip,\r\n\r\nOn 4. Jun 2020, at 20:37, Philip Semanchuk <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n[...]\r\n\r\nThis brings up a couple of questions —\r\n1) I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\r\n\r\n max_workers = log3(table size / min_parallel_table_scan_size)\r\n\r\nDoes that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\r\n\r\n\"table size\" is the number of PSQL pages, i.e. relation-size / 8 kB. This comes from statistics.\r\n\r\nOK, so it sounds like the planner does *not* use the values in pg_stats when planning workers, true?\r\n\r\nFull disclosure: I am not too deep into these internals, likely others on the list know much more about it. But with respect to the relation size, I think this is tracked elsewhere, it might be affected by other parameters though like vacuuming and probably, the estimated amount of how much of the table is scanned also plays a role.\r\n\r\nI’m still trying to understand one thing I’ve observed. I can run the query that produced the plan in the gist I linked to above with max_parallel_workers_per_gather=6 and the year param = 2018, and I get 6 workers. When I set the year param=2022 I get only one worker. Same tables, same query, different parameter. That suggests to me that the planner is using pg_stats when allocating workers, but I can imagine there might be other things going on that I don’t understand. (I haven’t ruled out that this might be an AWS-specific quirk, either.)\r\n\r\nI think it would be helpful, if you could post again both plans. The ideal would be to use https://explain.dalibo.com/ and share the links. You will have to generate them with JSON format, but still can anonymize them.\r\n\r\nObviously, the plan changes when changing these two parameters, comparing both plans very likely unveils why that is the case. My guess would be, that something in the estimated amount of rows changes causing PG to prefer a different plan with lower cost.\r\n\r\nAlso, maybe on that occasion, check the default_statistics_target parameter which is default wise at 100, but for analytical case like - I assume - yours higher values tend to improve the planning. You can try with for instance 1000 or 2500. 
In contrast to changing this parameter globally, you can also adjust it per table (ALTER TABLE SET STATISTICS).\r\n\r\nCheers,\r\nSebastian\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n[cid:[email protected]]", "msg_date": "Thu, 4 Jun 2020 19:03:26 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" }, { "msg_contents": "\n\n> On Jun 4, 2020, at 3:03 PM, Sebastian Dressler <[email protected]> wrote:\n> \n> Hi Philip,\n> \n>> On 4. Jun 2020, at 20:37, Philip Semanchuk <[email protected]> wrote:\n>> \n>> [...]\n>>> \n>>>> This brings up a couple of questions —\n>>>> 1) I’ve read that this is Postgres’ formula for the max # of workers it will consider for a table —\n>>>> \n>>>> max_workers = log3(table size / min_parallel_table_scan_size)\n>>>> \n>>>> Does that use the raw table size, or does the planner use statistics to estimate the size of the subset of the table that will be read before allocating workers?\n>>> \n>>> \"table size\" is the number of PSQL pages, i.e. relation-size / 8 kB. This comes from statistics.\n>> \n>> OK, so it sounds like the planner does *not* use the values in pg_stats when planning workers, true?\n> \n> Full disclosure: I am not too deep into these internals, likely others on the list know much more about it. But with respect to the relation size, I think this is tracked elsewhere, it might be affected by other parameters though like vacuuming and probably, the estimated amount of how much of the table is scanned also plays a role.\n\nI’m not too familiar with the internals either, but if I interpret this line of code correctly, it’s seems that pg_stats is not involved, and the worker allocation is based strictly on pages in the relation --\nhttps://github.com/postgres/postgres/blob/master/src/backend/optimizer/path/allpaths.c#L800\n\nThat means I still don’t have a reason for why this query gets a different number of workers depending on the WHERE clause, but I can experiment with that more on my own. \n\n\n>> I’m still trying to understand one thing I’ve observed. I can run the query that produced the plan in the gist I linked to above with max_parallel_workers_per_gather=6 and the year param = 2018, and I get 6 workers. When I set the year param=2022 I get only one worker. Same tables, same query, different parameter. That suggests to me that the planner is using pg_stats when allocating workers, but I can imagine there might be other things going on that I don’t understand. (I haven’t ruled out that this might be an AWS-specific quirk, either.)\n> \n> I think it would be helpful, if you could post again both plans. The ideal would be to use https://explain.dalibo.com/ and share the links. You will have to generate them with JSON format, but still can anonymize them.\n\nI really appreciate all the help you and others have already given. I think I’m good for now. 
\n\nThank you so much,\nPhilip\n\n\n\n", "msg_date": "Thu, 4 Jun 2020 17:29:57 -0400", "msg_from": "Philip Semanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: increased max_parallel_workers_per_gather results in fewer\n workers?" } ]
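A short, hedged sketch of the session-level settings that came up in this thread for encouraging parallel plans. The numbers are illustrative starting points only (500 and 0.1 echo the suggestion made above), not recommendations for any particular workload:

-- cap on workers per Gather node; still bounded by max_parallel_workers and max_worker_processes
SET max_parallel_workers_per_gather = 6;
-- make the planner more willing to choose parallel plans
SET parallel_setup_cost = 500;
SET parallel_tuple_cost = 0.1;
-- consider a parallel scan even for small tables
SET min_parallel_table_scan_size = 0;
-- keep the leader free to gather results instead of scanning
SET parallel_leader_participation = off;
-- then re-run EXPLAIN (ANALYZE, BUFFERS) and compare "Workers Planned" vs "Workers Launched"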
[ { "msg_contents": "I have problem with one of my Postgres production server. Server works fine\nalmost always, but sometimes without any increase of transactions or\nstatements amount, machine gets stuck. Cores goes up to 100%, load up to\n160%. When it happens then there are problems with connect to database and\neven it will succeed, simple queries works several seconds instead of\nmilliseconds.Problem sometimes stops after a period a time (e.g. 35 min),\nsometimes we must restart Postgres, Linux, or even KVM (which exists as\nvirtualization host).\n\nMy hardware\n56 cores (Intel Core Processor (Skylake, IBRS))\n400 GB RAM\nRAID10 with about 40k IOPS\n\nOs\nCentOS Linux release 7.7.1908\nkernel 3.10.0-1062.18.1.el7.x86_64\n\nDatabasesize 100 GB (entirely fit in memory :) )\nserver_version 10.12\neffective_cache_size 192000 MB\nmaintenance_work_mem 2048 MB\nmax_connections 150\nshared_buffers 64000 MB\nwork_mem 96 MB\n\nOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load,\nwhole database fits in memory, no reads from disk, only writes on about 500\nIOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware\nthere is no problem with this values (no iowaits on cores). In normal state\nthis machine does \"nothing\". Connections to database are created by two app\nservers based on Java, through connection pools, so connections count is\nlimited by configuration of pools and max is 120, is lower value than in\nPostgres configuration (150). On normal state there is about 20\nconnections, when stuck goes into max (120).\n\nIn correlation with stucks i see informations in kernel log about\nNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]\nbut i don't know this is reason or effect of problem\nI made investigation with pgBadger and ... nothing strange happens, just\nnormal statements\n\nAny ideas?\n\nThanks,\nKris\n\nI have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version\t10.12effective_cache_size\t192000 MBmaintenance_work_mem\t2048 MBmax_connections\t150 shared_buffers\t64000 MBwork_mem\t96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! 
[postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks,Kris", "msg_date": "Fri, 5 Jun 2020 12:07:02 +0200", "msg_from": "Krzysztof Olszewski <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql server gets stuck at low load" }, { "msg_contents": "De: \"Krzysztof Olszewski\" <[email protected]> \nPara: [email protected] \nEnviadas: Sexta-feira, 5 de junho de 2020 7:07:02 \nAssunto: Postgresql server gets stuck at low load \n\n\n\n\n\nBQ_BEGIN\n\nI have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host). \nMy hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPS \nOs \nCentOS Linux release 7.7.1908 \nkernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version 10.12effective_cache_size 192000 MBmaintenance_work_mem 2048 MBmax_connections 150 shared_buffers 64000 MBwork_mem 96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks, \nKris \n\nBQ_END\n\nHi Krzysztof! \n\nI would enable pg_stat_statements extension and check if there are long running queries that should be quick. \n\nDe: \"Krzysztof Olszewski\" <[email protected]>Para: [email protected]: Sexta-feira, 5 de junho de 2020 7:07:02Assunto: Postgresql server gets stuck at low loadI have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 
35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version 10.12effective_cache_size 192000 MBmaintenance_work_mem 2048 MBmax_connections 150 shared_buffers 64000 MBwork_mem 96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks,KrisHi Krzysztof!I would enable pg_stat_statements extension and check if there are long running queries that should be quick.", "msg_date": "Fri, 5 Jun 2020 08:16:42 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "pá 5. 6. 2020 v 12:07 odesílatel Krzysztof Olszewski <[email protected]>\nnapsal:\n\n> I have problem with one of my Postgres production server. Server works\n> fine almost always, but sometimes without any increase of transactions or\n> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n> 160%. When it happens then there are problems with connect to database and\n> even it will succeed, simple queries works several seconds instead of\n> milliseconds.Problem sometimes stops after a period a time (e.g. 35 min),\n> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n> virtualization host).\n>\n> My hardware\n> 56 cores (Intel Core Processor (Skylake, IBRS))\n> 400 GB RAM\n> RAID10 with about 40k IOPS\n>\n> Os\n> CentOS Linux release 7.7.1908\n> kernel 3.10.0-1062.18.1.el7.x86_64\n>\n> Databasesize 100 GB (entirely fit in memory :) )\n> server_version 10.12\n> effective_cache_size 192000 MB\n> maintenance_work_mem 2048 MB\n> max_connections 150\n> shared_buffers 64000 MB\n> work_mem 96 MB\n>\n> On normal state, i have about 500 tps, 5% usage of cores, about 3% of\n> load, whole database fits in memory, no reads from disk, only writes on\n> about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this\n> hardware there is no problem with this values (no iowaits on cores). In\n> normal state this machine does \"nothing\". Connections to database are\n> created by two app servers based on Java, through connection pools, so\n> connections count is limited by configuration of pools and max is 120, is\n> lower value than in Postgres configuration (150). On normal state there is\n> about 20 connections, when stuck goes into max (120).\n>\n> In correlation with stucks i see informations in kernel log about\n> NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! 
[postmaster:33935]\n> but i don't know this is reason or effect of problem\n> I made investigation with pgBadger and ... nothing strange happens, just\n> normal statements\n>\n> Any ideas?\n>\n\nyou can try to install perf + debug symbols for postgres. When you will\nhave this problem again run \"perf top\". You can see what routines eat your\nCPU.\n\nMaybe it can be a spinlock problem\n\nhttps://www.postgresql.org/message-id/CAHyXU0yAsVxoab2PcyoCuPjqymtnaE93v7bN4ctv2aNi92fefA%40mail.gmail.com\n\nCan be interesting a reply on Merlin's question from mail/.\n\ncat /sys/kernel/mm/redhat_transparent_hugepage/enabled\ncat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n\nRegards\n\nPavel\n\n\n>\n> Thanks,\n> Kris\n>\n>\n>\n\npá 5. 6. 2020 v 12:07 odesílatel Krzysztof Olszewski <[email protected]> napsal:I have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version\t10.12effective_cache_size\t192000 MBmaintenance_work_mem\t2048 MBmax_connections\t150 shared_buffers\t64000 MBwork_mem\t96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? you can try to install perf + debug symbols for postgres. When you will have this problem again run \"perf top\". You can see what routines eat your CPU. 
Maybe it can be a spinlock problem https://www.postgresql.org/message-id/CAHyXU0yAsVxoab2PcyoCuPjqymtnaE93v7bN4ctv2aNi92fefA%40mail.gmail.comCan be interesting a reply on Merlin's question from mail/.cat /sys/kernel/mm/redhat_transparent_hugepage/enabledcat /sys/kernel/mm/redhat_transparent_hugepage/defragRegardsPavel\n Thanks,Kris", "msg_date": "Fri, 5 Jun 2020 13:37:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "I had log_min_duration_statement set to 0 for a short period, just before\nstuck and just after, so I have full list of SQL statements, next analyzed\nin pgBadger, there is no increase of amount of statements, and I can see,\nall statements are longer processed than before stuck. But following Your\nadvice I'll check the results from pg_stat_statements.\n\npt., 5 cze 2020 o 13:16 <[email protected]> napisał(a):\n\n>\n> *De: *\"Krzysztof Olszewski\" <[email protected]>\n> *Para: *[email protected]\n> *Enviadas: *Sexta-feira, 5 de junho de 2020 7:07:02\n> *Assunto: *Postgresql server gets stuck at low load\n>\n> I have problem with one of my Postgres production server. Server works\n> fine almost always, but sometimes without any increase of transactions or\n> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n> 160%. When it happens then there are problems with connect to database and\n> even it will succeed, simple queries works several seconds instead of\n> milliseconds.Problem sometimes stops after a period a time (e.g. 35 min),\n> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n> virtualization host).\n> My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10\n> with about 40k IOPS\n> Os\n> CentOS Linux release 7.7.1908\n> kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in\n> memory :) )server_version 10.12effective_cache_size 192000\n> MBmaintenance_work_mem 2048 MBmax_connections 150 shared_buffers 64000\n> MBwork_mem 96 MBOn normal state, i have about 500 tps, 5% usage of cores,\n> about 3% of load, whole database fits in memory, no reads from disk, only\n> writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but\n> on this hardware there is no problem with this values (no iowaits on\n> cores). In normal state this machine does \"nothing\". Connections to\n> database are created by two app servers based on Java, through connection\n> pools, so connections count is limited by configuration of pools and max is\n> 120, is lower value than in Postgres configuration (150). On normal state\n> there is about 20 connections, when stuck goes into max (120).In\n> correlation with stucks i see informations in kernel log aboutNMI watchdog:\n> BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know\n> this is reason or effect of problemI made investigation with pgBadger and\n> ... nothing strange happens, just normal statements Any ideas? Thanks,\n> Kris\n>\n> Hi Krzysztof!\n>\n> I would enable pg_stat_statements extension and check if there are long\n> running queries that should be quick.\n>\n\nI had \tlog_min_duration_statement set to 0 for a short period, just before stuck and just after, so I have full list of SQL statements, next analyzed in pgBadger, there is no increase of amount of statements, and I can see, all statements are longer processed than before stuck. 
But following Your advice I'll check the results from \npg_stat_statements.pt., 5 cze 2020 o 13:16 <[email protected]> napisał(a):De: \"Krzysztof Olszewski\" <[email protected]>Para: [email protected]: Sexta-feira, 5 de junho de 2020 7:07:02Assunto: Postgresql server gets stuck at low loadI have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version 10.12effective_cache_size 192000 MBmaintenance_work_mem 2048 MBmax_connections 150 shared_buffers 64000 MBwork_mem 96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks,KrisHi Krzysztof!I would enable pg_stat_statements extension and check if there are long running queries that should be quick.", "msg_date": "Tue, 9 Jun 2020 13:52:55 +0200", "msg_from": "Krzysztof Olszewski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "I had hugepage's off and on, problems still occurs,\nthanx for \"perf top\" suggestion,\n\nRetards\nKris\n\n\n\npt., 5 cze 2020 o 13:38 Pavel Stehule <[email protected]> napisał(a):\n\n>\n>\n> pá 5. 6. 2020 v 12:07 odesílatel Krzysztof Olszewski <[email protected]>\n> napsal:\n>\n>> I have problem with one of my Postgres production server. Server works\n>> fine almost always, but sometimes without any increase of transactions or\n>> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n>> 160%. When it happens then there are problems with connect to database and\n>> even it will succeed, simple queries works several seconds instead of\n>> milliseconds.Problem sometimes stops after a period a time (e.g. 
35 min),\n>> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n>> virtualization host).\n>>\n>> My hardware\n>> 56 cores (Intel Core Processor (Skylake, IBRS))\n>> 400 GB RAM\n>> RAID10 with about 40k IOPS\n>>\n>> Os\n>> CentOS Linux release 7.7.1908\n>> kernel 3.10.0-1062.18.1.el7.x86_64\n>>\n>> Databasesize 100 GB (entirely fit in memory :) )\n>> server_version 10.12\n>> effective_cache_size 192000 MB\n>> maintenance_work_mem 2048 MB\n>> max_connections 150\n>> shared_buffers 64000 MB\n>> work_mem 96 MB\n>>\n>> On normal state, i have about 500 tps, 5% usage of cores, about 3% of\n>> load, whole database fits in memory, no reads from disk, only writes on\n>> about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this\n>> hardware there is no problem with this values (no iowaits on cores). In\n>> normal state this machine does \"nothing\". Connections to database are\n>> created by two app servers based on Java, through connection pools, so\n>> connections count is limited by configuration of pools and max is 120, is\n>> lower value than in Postgres configuration (150). On normal state there is\n>> about 20 connections, when stuck goes into max (120).\n>>\n>> In correlation with stucks i see informations in kernel log about\n>> NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]\n>> but i don't know this is reason or effect of problem\n>> I made investigation with pgBadger and ... nothing strange happens, just\n>> normal statements\n>>\n>> Any ideas?\n>>\n>\n> you can try to install perf + debug symbols for postgres. When you will\n> have this problem again run \"perf top\". You can see what routines eat your\n> CPU.\n>\n> Maybe it can be a spinlock problem\n>\n>\n> https://www.postgresql.org/message-id/CAHyXU0yAsVxoab2PcyoCuPjqymtnaE93v7bN4ctv2aNi92fefA%40mail.gmail.com\n>\n> Can be interesting a reply on Merlin's question from mail/.\n>\n> cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n> cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Thanks,\n>> Kris\n>>\n>>\n>>\n\n\nI had hugepage's off and on, problems still occurs,thanx for \n \"perf top\" suggestion, RetardsKris\npt., 5 cze 2020 o 13:38 Pavel Stehule <[email protected]> napisał(a):pá 5. 6. 2020 v 12:07 odesílatel Krzysztof Olszewski <[email protected]> napsal:I have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version\t10.12effective_cache_size\t192000 MBmaintenance_work_mem\t2048 MBmax_connections\t150 shared_buffers\t64000 MBwork_mem\t96 MBOn normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). 
In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? you can try to install perf + debug symbols for postgres. When you will have this problem again run \"perf top\". You can see what routines eat your CPU. Maybe it can be a spinlock problem https://www.postgresql.org/message-id/CAHyXU0yAsVxoab2PcyoCuPjqymtnaE93v7bN4ctv2aNi92fefA%40mail.gmail.comCan be interesting a reply on Merlin's question from mail/.cat /sys/kernel/mm/redhat_transparent_hugepage/enabledcat /sys/kernel/mm/redhat_transparent_hugepage/defragRegardsPavel\n Thanks,Kris", "msg_date": "Tue, 9 Jun 2020 13:54:21 +0200", "msg_from": "Krzysztof Olszewski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "Hi,\n\nOn Fri, Jun 5, 2020 at 7:07 AM Krzysztof Olszewski <[email protected]>\nwrote:\n\n> I have problem with one of my Postgres production server. Server works\n> fine almost always, but sometimes without any increase of transactions or\n> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n> 160%. When it happens then there are problems with connect to database and\n> even it will succeed, simple queries works several seconds instead of\n> milliseconds.Problem sometimes stops after a period a time (e.g. 35 min),\n> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n> virtualization host).\n>\n> My hardware\n> 56 cores (Intel Core Processor (Skylake, IBRS))\n> 400 GB RAM\n> RAID10 with about 40k IOPS\n>\n> Os\n> CentOS Linux release 7.7.1908\n> kernel 3.10.0-1062.18.1.el7.x86_64\n>\n> Databasesize 100 GB (entirely fit in memory :) )\n> server_version 10.12\n> effective_cache_size 192000 MB\n> maintenance_work_mem 2048 MB\n> max_connections 150\n> shared_buffers 64000 MB\n> work_mem 96 MB\n>\nWhat is the value set to random_page_cost ?\nSet to 1 (same as default seq_page_cost) for a moment and try it.\n\n>\n> On normal state, i have about 500 tps, 5% usage of cores, about 3% of\n> load, whole database fits in memory, no reads from disk, only writes on\n> about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this\n> hardware there is no problem with this values (no iowaits on cores). In\n> normal state this machine does \"nothing\". Connections to database are\n> created by two app servers based on Java, through connection pools, so\n> connections count is limited by configuration of pools and max is 120, is\n> lower value than in Postgres configuration (150). On normal state there is\n> about 20 connections, when stuck goes into max (120).\n>\n> In correlation with stucks i see informations in kernel log about\n> NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]\n> but i don't know this is reason or effect of problem\n> I made investigation with pgBadger and ... 
nothing strange happens, just\n> normal statements\n>\n> Any ideas?\n>\n> Thanks,\n> Kris\n>\n>\n>\n\n-- \nRegards,\nAvinash Vallarapu\n\nHi,On Fri, Jun 5, 2020 at 7:07 AM Krzysztof Olszewski <[email protected]> wrote:I have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version\t10.12effective_cache_size\t192000 MBmaintenance_work_mem\t2048 MBmax_connections\t150 shared_buffers\t64000 MBwork_mem\t96 MBWhat is the value set to random_page_cost ? Set to 1 (same as default seq_page_cost) for a moment and try it. On normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks,Kris\n-- Regards,Avinash Vallarapu", "msg_date": "Tue, 9 Jun 2020 09:01:12 -0300", "msg_from": "Avinash Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "random_page_cost == 1.1\n\nwt., 9 cze 2020 o 14:01 Avinash Kumar <[email protected]>\nnapisał(a):\n\n> Hi,\n>\n> On Fri, Jun 5, 2020 at 7:07 AM Krzysztof Olszewski <[email protected]>\n> wrote:\n>\n>> I have problem with one of my Postgres production server. Server works\n>> fine almost always, but sometimes without any increase of transactions or\n>> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n>> 160%. When it happens then there are problems with connect to database and\n>> even it will succeed, simple queries works several seconds instead of\n>> milliseconds.Problem sometimes stops after a period a time (e.g. 
35 min),\n>> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n>> virtualization host).\n>>\n>> My hardware\n>> 56 cores (Intel Core Processor (Skylake, IBRS))\n>> 400 GB RAM\n>> RAID10 with about 40k IOPS\n>>\n>> Os\n>> CentOS Linux release 7.7.1908\n>> kernel 3.10.0-1062.18.1.el7.x86_64\n>>\n>> Databasesize 100 GB (entirely fit in memory :) )\n>> server_version 10.12\n>> effective_cache_size 192000 MB\n>> maintenance_work_mem 2048 MB\n>> max_connections 150\n>> shared_buffers 64000 MB\n>> work_mem 96 MB\n>>\n> What is the value set to random_page_cost ?\n> Set to 1 (same as default seq_page_cost) for a moment and try it.\n>\n>>\n>> On normal state, i have about 500 tps, 5% usage of cores, about 3% of\n>> load, whole database fits in memory, no reads from disk, only writes on\n>> about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this\n>> hardware there is no problem with this values (no iowaits on cores). In\n>> normal state this machine does \"nothing\". Connections to database are\n>> created by two app servers based on Java, through connection pools, so\n>> connections count is limited by configuration of pools and max is 120, is\n>> lower value than in Postgres configuration (150). On normal state there is\n>> about 20 connections, when stuck goes into max (120).\n>>\n>> In correlation with stucks i see informations in kernel log about\n>> NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]\n>> but i don't know this is reason or effect of problem\n>> I made investigation with pgBadger and ... nothing strange happens, just\n>> normal statements\n>>\n>> Any ideas?\n>>\n>> Thanks,\n>> Kris\n>>\n>>\n>>\n>\n> --\n> Regards,\n> Avinash Vallarapu\n>\n\n\trandom_page_cost \t== 1.1wt., 9 cze 2020 o 14:01 Avinash Kumar <[email protected]> napisał(a):Hi,On Fri, Jun 5, 2020 at 7:07 AM Krzysztof Olszewski <[email protected]> wrote:I have problem with one of my Postgres production server. Server works fine almost always, but sometimes without any increase of transactions or statements amount, machine gets stuck. Cores goes up to 100%, load up to 160%. When it happens then there are problems with connect to database and even it will succeed, simple queries works several seconds instead of milliseconds.Problem sometimes stops after a period a time (e.g. 35 min), sometimes we must restart Postgres, Linux, or even KVM (which exists as virtualization host).My hardware56 cores (Intel Core Processor (Skylake, IBRS))400 GB RAMRAID10 with about 40k IOPSOsCentOS Linux release 7.7.1908kernel 3.10.0-1062.18.1.el7.x86_64 Databasesize 100 GB (entirely fit in memory :) )server_version\t10.12effective_cache_size\t192000 MBmaintenance_work_mem\t2048 MBmax_connections\t150 shared_buffers\t64000 MBwork_mem\t96 MBWhat is the value set to random_page_cost ? Set to 1 (same as default seq_page_cost) for a moment and try it. On normal state, i have about 500 tps, 5% usage of cores, about 3% of load, whole database fits in memory, no reads from disk, only writes on about 500 IOPS level, sometimes in spikes on 1500 IOPS level, but on this hardware there is no problem with this values (no iowaits on cores). In normal state this machine does \"nothing\". Connections to database are created by two app servers based on Java, through connection pools, so connections count is limited by configuration of pools and max is 120, is lower value than in Postgres configuration (150). 
On normal state there is about 20 connections, when stuck goes into max (120).In correlation with stucks i see informations in kernel log aboutNMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]but i don't know this is reason or effect of problemI made investigation with pgBadger and ... nothing strange happens, just normal statements Any ideas? Thanks,Kris\n-- Regards,Avinash Vallarapu", "msg_date": "Tue, 9 Jun 2020 14:05:10 +0200", "msg_from": "Krzysztof Olszewski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql server gets stuck at low load" }, { "msg_contents": "On Tue, Jun 09, 2020 at 01:54:21PM +0200, Krzysztof Olszewski wrote:\n> I had hugepage's off and on, problems still occurs,\n> thanx for \"perf top\" suggestion,\n\n> pt., 5 cze 2020 o 13:38 Pavel Stehule <[email protected]> napisał(a):\n> > pá 5. 6. 2020 v 12:07 odesílatel Krzysztof Olszewski <[email protected]> napsal:\n> >\n> >> I have problem with one of my Postgres production server. Server works\n> >> fine almost always, but sometimes without any increase of transactions or\n> >> statements amount, machine gets stuck. Cores goes up to 100%, load up to\n> >> 160%. When it happens then there are problems with connect to database and\n> >> even it will succeed, simple queries works several seconds instead of\n> >> milliseconds.Problem sometimes stops after a period a time (e.g. 35 min),\n> >> sometimes we must restart Postgres, Linux, or even KVM (which exists as\n> >> virtualization host).\n> >>\n> >> My hardware\n> >> 56 cores (Intel Core Processor (Skylake, IBRS))\n> >> 400 GB RAM\n> >> RAID10 with about 40k IOPS\n> >>\n> >> shared_buffers 64000 MB\n> >>\n> >> In correlation with stucks i see informations in kernel log about\n> >> NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935]\n> >\n> > https://www.postgresql.org/message-id/CAHyXU0yAsVxoab2PcyoCuPjqymtnaE93v7bN4ctv2aNi92fefA%40mail.gmail.com\n> >\n> > Can be interesting a reply on Merlin's question from mail/.\n> >\n> > cat /sys/kernel/mm/redhat_transparent_hugepage/enabled\n> > cat /sys/kernel/mm/redhat_transparent_hugepage/defrag\n\ntry this:\necho 2 |sudo /sys/kernel/mm/ksm/run\n\nhttps://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 9 Jun 2020 07:09:59 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql server gets stuck at low load" } ]
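A small follow-up sketch on the diagnostics discussed in this thread (perf top, transparent hugepages, KSM): if new connections still succeed while the CPUs are pegged, the wait_event columns that pg_stat_activity exposes since 9.6 can show whether the backends are stuck in LWLock/spinlock contention or genuinely executing. Only stock catalog columns are used below; the \watch interval is an arbitrary choice, and during a full stall the probe itself may hang, so treat it as best-effort rather than a guaranteed diagnostic.

    -- run repeatedly (e.g. \watch 1 in psql) while the machine is stuck
    SELECT wait_event_type, wait_event, state, count(*)
    FROM pg_stat_activity
    WHERE pid <> pg_backend_pid()
    GROUP BY 1, 2, 3
    ORDER BY count(*) DESC;

A pile-up of backends on a single LWLock wait_event would point in the same direction as the spinlock thread Pavel linked; mostly-NULL wait events with state = 'active' would instead suggest the backends really are doing work.
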
[ { "msg_contents": "Hi all,\n\nI have a query that runs much slower in Postgres on Windows than on \nLinux, and I'm so far unable to explain why - the execution plans are \nidentical and the hardware is reasonably the same caliber.\n\nUsing explain analyze on the database running on Windows I get\n\n-> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1 \nwidth=295) (actual time=0.075..0.075 rows=0 loops=229227)\n\nThe server is Postgres 12, and for reasons outside of my control it runs \non Windows 2012 on a virtual server. It has 4 cores and 32 GB ram \nallocated on a Xeon E5 4660 v4, and running winsat shows satisfactory \ndisk and memory bandwidth and CPU performance.\n\nIf I copy the database to my laptop running Linux (Postgres 12 on Fedora \n32, i7-9750H, 16 GB ram) I get the exact same execution plan. Explain \nanalyze says\n\n-> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1 \nwidth=295) (actual time=0.008..0.008 rows=0 loops=229227)\n\nNote that the index scans are more than 9 times faster on my laptop, and \nthe entire query executes about 12 times faster. I realize that each \ncore in the laptop CPU is faster than a server core and that \nvirtualization doesn't help performance, but I wouldn't expect that to \nmake the Windows box 10 times slower.\n\nThe table is freshly vacuumed. It has about 10M rows and takes about \n2.6G disk space; the index is about 600M. Everything is cached; there's \nbasically no disk I/O happening while the query is executing.\n\nThe only Postgres configuration difference between the Windows and Linux \nenvironments is shared_buffers, which is 4G on my laptop and 512M on the \nWindows server, and effective_cache_size which are 8G on the laptop and \n16G on the server.\n\nI suspect that something is rotten in for example the provisioning of \nthe virtualization environment, but before I start pestering the \noperations people I would really appreciate any comments on whether the \nperformance difference is to be expected or if there's some obvious \ntuning to try.\n\nBest regards & thanks,\n Mikkel Lauritsen\n\n\n", "msg_date": "Wed, 10 Jun 2020 21:28:51 +0200", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Windows slowness?" }, { "msg_contents": "mikkel,\n\nsorry for being so stupid: did you exclude antivirus/firewall related issue?\n\nLe mer. 10 juin 2020 à 21:41, Mikkel Lauritsen <[email protected]> a écrit :\n\n> Hi all,\n>\n> I have a query that runs much slower in Postgres on Windows than on\n> Linux, and I'm so far unable to explain why - the execution plans are\n> identical and the hardware is reasonably the same caliber.\n>\n> Using explain analyze on the database running on Windows I get\n>\n> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n> width=295) (actual time=0.075..0.075 rows=0 loops=229227)\n>\n> The server is Postgres 12, and for reasons outside of my control it runs\n> on Windows 2012 on a virtual server. It has 4 cores and 32 GB ram\n> allocated on a Xeon E5 4660 v4, and running winsat shows satisfactory\n> disk and memory bandwidth and CPU performance.\n>\n> If I copy the database to my laptop running Linux (Postgres 12 on Fedora\n> 32, i7-9750H, 16 GB ram) I get the exact same execution plan. 
Explain\n> analyze says\n>\n> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n> width=295) (actual time=0.008..0.008 rows=0 loops=229227)\n>\n> Note that the index scans are more than 9 times faster on my laptop, and\n> the entire query executes about 12 times faster. I realize that each\n> core in the laptop CPU is faster than a server core and that\n> virtualization doesn't help performance, but I wouldn't expect that to\n> make the Windows box 10 times slower.\n>\n> The table is freshly vacuumed. It has about 10M rows and takes about\n> 2.6G disk space; the index is about 600M. Everything is cached; there's\n> basically no disk I/O happening while the query is executing.\n>\n> The only Postgres configuration difference between the Windows and Linux\n> environments is shared_buffers, which is 4G on my laptop and 512M on the\n> Windows server, and effective_cache_size which are 8G on the laptop and\n> 16G on the server.\n>\n> I suspect that something is rotten in for example the provisioning of\n> the virtualization environment, but before I start pestering the\n> operations people I would really appreciate any comments on whether the\n> performance difference is to be expected or if there's some obvious\n> tuning to try.\n>\n> Best regards & thanks,\n> Mikkel Lauritsen\n>\n>\n>\n\nmikkel,sorry for being so stupid: did you exclude antivirus/firewall related issue?Le mer. 10 juin 2020 à 21:41, Mikkel Lauritsen <[email protected]> a écrit :Hi all,\n\nI have a query that runs much slower in Postgres on Windows than on \nLinux, and I'm so far unable to explain why - the execution plans are \nidentical and the hardware is reasonably the same caliber.\n\nUsing explain analyze on the database running on Windows I get\n\n->  Index Scan using event_pkey on event t1  (cost=0.56..0.95 rows=1 \nwidth=295) (actual time=0.075..0.075 rows=0 loops=229227)\n\nThe server is Postgres 12, and for reasons outside of my control it runs \non Windows 2012 on a virtual server. It has 4 cores and 32 GB ram \nallocated on a Xeon E5 4660 v4, and running winsat shows satisfactory \ndisk and memory bandwidth and CPU performance.\n\nIf I copy the database to my laptop running Linux (Postgres 12 on Fedora \n32, i7-9750H, 16 GB ram) I get the exact same execution plan. Explain \nanalyze says\n\n->  Index Scan using event_pkey on event t1  (cost=0.56..0.95 rows=1 \nwidth=295) (actual time=0.008..0.008 rows=0 loops=229227)\n\nNote that the index scans are more than 9 times faster on my laptop, and \nthe entire query executes about 12 times faster. I realize that each \ncore in the laptop CPU is faster than a server core and that \nvirtualization doesn't help performance, but I wouldn't expect that to \nmake the Windows box 10 times slower.\n\nThe table is freshly vacuumed. It has about 10M rows and takes about \n2.6G disk space; the index is about 600M. 
Everything is cached; there's \nbasically no disk I/O happening while the query is executing.\n\nThe only Postgres configuration difference between the Windows and Linux \nenvironments is shared_buffers, which is 4G on my laptop and 512M on the \nWindows server, and effective_cache_size which are 8G on the laptop and \n16G on the server.\n\nI suspect that something is rotten in for example the provisioning of \nthe virtualization environment, but before I start pestering the \noperations people I would really appreciate any comments on whether the \nperformance difference is to be expected or if there's some obvious \ntuning to try.\n\nBest regards & thanks,\n   Mikkel Lauritsen", "msg_date": "Wed, 10 Jun 2020 22:35:45 +0200", "msg_from": "mountain the blue <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows slowness?" }, { "msg_contents": "On Thu, 11 Jun 2020 at 07:41, Mikkel Lauritsen <[email protected]> wrote:\n> I have a query that runs much slower in Postgres on Windows than on\n> Linux\n\n> Using explain analyze on the database running on Windows I get\n>\n> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n> width=295) (actual time=0.075..0.075 rows=0 loops=229227)\n\n> If I copy the database to my laptop running Linux (Postgres 12 on Fedora\n> 32, i7-9750H, 16 GB ram) I get the exact same execution plan. Explain\n> analyze says\n>\n> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n> width=295) (actual time=0.008..0.008 rows=0 loops=229227)\n>\n\n> The table is freshly vacuumed. It has about 10M rows and takes about\n> 2.6G disk space; the index is about 600M. Everything is cached; there's\n> basically no disk I/O happening while the query is executing.\n\nCan you confirm what: SELECT pg_relation_size('event_pkey'),\npg_relation_size('event'); says on each\n\n> The only Postgres configuration difference between the Windows and Linux\n> environments is shared_buffers, which is 4G on my laptop and 512M on the\n> Windows server, and effective_cache_size which are 8G on the laptop and\n> 16G on the server.\n\nThere is some slight advantage to having the buffers directly in\nshared buffers. Having them in the kernel's page cache does still\nrequire getting them into shared buffers. Going by these sizes it\nseems much more likely that the Linux instance could have all buffers\nin shared_buffers, but it seems likely the Windows instance won't. I\ncan't imagine that counts for 10x, but it surely must count for\nsomething.\n\nIt would be good to see:\n\nSET track_io_timing = on;\nEXPLAIN (ANALYZE, BUFFERS) <the query>\n\nDavid\n\n\n", "msg_date": "Thu, 11 Jun 2020 09:08:28 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows slowness?" }, { "msg_contents": "Hi David,\n\nMany thanks for your response - you wrote:\n\nOn 2020-06-10 23:08, David Rowley wrote:\n> On Thu, 11 Jun 2020 at 07:41, Mikkel Lauritsen <[email protected]> wrote:\n>> I have a query that runs much slower in Postgres on Windows than on\n>> Linux\n> \n>> Using explain analyze on the database running on Windows I get\n>> \n>> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n>> width=295) (actual time=0.075..0.075 rows=0 loops=229227)\n> \n>> If I copy the database to my laptop running Linux (Postgres 12 on \n>> Fedora\n>> 32, i7-9750H, 16 GB ram) I get the exact same execution plan. 
Explain\n>> analyze says\n>> \n>> -> Index Scan using event_pkey on event t1 (cost=0.56..0.95 rows=1\n>> width=295) (actual time=0.008..0.008 rows=0 loops=229227)\n\n--- snip ---\n\n> Can you confirm what: SELECT pg_relation_size('event_pkey'),\n> pg_relation_size('event'); says on each\n\n1011384320 and 2753077248, respectively.\n\n--- snip ---\n\n> It would be good to see:\n> \n> SET track_io_timing = on;\n> EXPLAIN (ANALYZE, BUFFERS) <the query>\n\nI wasn't aware of that tracing option - thanks! For this particular plan \nentry the output is\n\n Buffers: shared hit=896304 read=257234\n I/O Timings: read=11426.745\n\nSome rows have been added to the table since my initial mail, so the \nnumbers may be slightly off.\n\nAs another reply has suggested I need to verify that somebody hasn't \naccidentally misconfigured an antivirus client to scan the database \nfiles. If that turns out to be the case I guess it's embarrassment of \nthe year for me :-/\n\nBest regards,\n Mikkel Lauritsen\n\n\n", "msg_date": "Thu, 11 Jun 2020 07:05:50 +0200", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows slowness?" } ]
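The numbers in the last message make the picture fairly concrete: 257234 buffer reads and roughly 11.4 s of read time on the Windows box, whose 512MB shared_buffers cannot hold the ~2.6GB table plus ~1GB primary key, while the laptop's 4GB shared_buffers can. Either those "reads" are served from the OS page cache through a slower path (or an antivirus filter driver, as suspected above), or the data is not cached at all. A sketch of how one might verify and mitigate this, assuming the contrib extensions pg_buffercache and pg_prewarm can be installed on the server; the relation names event and event_pkey are taken from the plans above, and the default 8kB block size is assumed:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- how much of the table and its primary key currently sit in shared_buffers
    SELECT c.relname, count(*) AS buffers, pg_size_pretty(count(*) * 8192) AS cached
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
      AND c.relname IN ('event', 'event_pkey')
    GROUP BY c.relname;

    -- after raising shared_buffers enough to hold both, load them explicitly
    SELECT pg_prewarm('event'), pg_prewarm('event_pkey');

If the index is largely resident in shared_buffers and the slow reads still show up, the remaining suspects are the virtualization layer and the antivirus exclusion list mentioned at the end of the thread.
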
[ { "msg_contents": "I'm facing performance issues migrating from postgres 10 to 12 (also from 11\nto 12) even with a new DB.\nTh performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n\nI have a view that abstracts the data in the database:\n\nCREATE OR REPLACE VIEW public.my_constraints\nAS SELECT lower(tc.constraint_name) AS constraint_name,\n tc.constraint_type,\n tc.table_schema,\n lower(tc.table_name) AS table_name,\n lower(kcu.column_name) AS column_name,\n ccu.table_schema AS reference_table_schema,\n lower(ccu.table_name) AS reference_table_name,\n lower(ccu.column_name) AS reference_column_name,\n rc.update_rule,\n rc.delete_rule\n FROM information_schema.table_constraints tc\n LEFT JOIN information_schema.key_column_usage kcu ON\ntc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\nkcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n LEFT JOIN information_schema.referential_constraints rc ON\ntc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\nrc.constraint_schema AND tc.constraint_name = rc.constraint_name\n LEFT JOIN information_schema.constraint_column_usage ccu ON\nrc.unique_constraint_catalog = ccu.constraint_catalog AND\nrc.unique_constraint_schema = ccu.constraint_schema AND\nrc.unique_constraint_name = ccu.constraint_name\n WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n'public' AND tc.constraint_type <> 'CHECK';\n\nThe simple query: select * from my_constraints is normal but as soon as I\nadd where constraint_type = 'FOREIGN KEY' it takes a lot of time.\nI don't have data in my tables at the moment, I have around 600 tables in my\nschema.\n\nI've analyzed the query but can't figure out what's wrong, this is the query\nwith the filter without the view:\n\n select * from (SELECT lower(tc.constraint_name) AS constraint_name,\n tc.constraint_type,\n tc.table_schema,\n lower(tc.table_name) AS table_name,\n lower(kcu.column_name) AS column_name,\n ccu.table_schema AS reference_table_schema,\n lower(ccu.table_name) AS reference_table_name,\n lower(ccu.column_name) AS reference_column_name,\n rc.update_rule,\n rc.delete_rule\n FROM information_schema.table_constraints tc\n LEFT JOIN information_schema.key_column_usage kcu ON\ntc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\nkcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n LEFT JOIN information_schema.referential_constraints rc ON\ntc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\nrc.constraint_schema AND tc.constraint_name = rc.constraint_name\n LEFT JOIN information_schema.constraint_column_usage ccu ON\nrc.unique_constraint_catalog = ccu.constraint_catalog AND\nrc.unique_constraint_schema = ccu.constraint_schema AND\nrc.unique_constraint_name = ccu.constraint_name\n WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n'public' AND tc.constraint_type <> 'CHECK'\n ) as a\n where constraint_type = 'FOREIGN KEY'\n\n\npostgres 10 plan\nhttps://explain.depesz.com/s/mEmv\n\npostgres 12 plan\nhttps://explain.depesz.com/s/lovP\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Fri, 12 Jun 2020 08:21:18 -0700 (MST)", "msg_from": "regrog <[email protected]>", "msg_from_op": true, "msg_subject": "view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "> view reading information_schema is slow in PostgreSQL 12\n\nHi,\nWhat is the PG version?\n\nIF PG < 12.3 THEN maybe 
related to this ?\nhttps://www.postgresql.org/docs/release/12.3/ ( Repair performance\nregression in information_schema.triggers view )\n\nImre\n\nregrog <[email protected]> ezt írta (időpont: 2020. jún. 12., P,\n20:26):\n\n> I'm facing performance issues migrating from postgres 10 to 12 (also from\n> 11\n> to 12) even with a new DB.\n> Th performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n>\n> I have a view that abstracts the data in the database:\n>\n> CREATE OR REPLACE VIEW public.my_constraints\n> AS SELECT lower(tc.constraint_name) AS constraint_name,\n> tc.constraint_type,\n> tc.table_schema,\n> lower(tc.table_name) AS table_name,\n> lower(kcu.column_name) AS column_name,\n> ccu.table_schema AS reference_table_schema,\n> lower(ccu.table_name) AS reference_table_name,\n> lower(ccu.column_name) AS reference_column_name,\n> rc.update_rule,\n> rc.delete_rule\n> FROM information_schema.table_constraints tc\n> LEFT JOIN information_schema.key_column_usage kcu ON\n> tc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\n> kcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n> LEFT JOIN information_schema.referential_constraints rc ON\n> tc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\n> rc.constraint_schema AND tc.constraint_name = rc.constraint_name\n> LEFT JOIN information_schema.constraint_column_usage ccu ON\n> rc.unique_constraint_catalog = ccu.constraint_catalog AND\n> rc.unique_constraint_schema = ccu.constraint_schema AND\n> rc.unique_constraint_name = ccu.constraint_name\n> WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n> 'public' AND tc.constraint_type <> 'CHECK';\n>\n> The simple query: select * from my_constraints is normal but as soon as I\n> add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n> I don't have data in my tables at the moment, I have around 600 tables in\n> my\n> schema.\n>\n> I've analyzed the query but can't figure out what's wrong, this is the\n> query\n> with the filter without the view:\n>\n> select * from (SELECT lower(tc.constraint_name) AS constraint_name,\n> tc.constraint_type,\n> tc.table_schema,\n> lower(tc.table_name) AS table_name,\n> lower(kcu.column_name) AS column_name,\n> ccu.table_schema AS reference_table_schema,\n> lower(ccu.table_name) AS reference_table_name,\n> lower(ccu.column_name) AS reference_column_name,\n> rc.update_rule,\n> rc.delete_rule\n> FROM information_schema.table_constraints tc\n> LEFT JOIN information_schema.key_column_usage kcu ON\n> tc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\n> kcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n> LEFT JOIN information_schema.referential_constraints rc ON\n> tc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\n> rc.constraint_schema AND tc.constraint_name = rc.constraint_name\n> LEFT JOIN information_schema.constraint_column_usage ccu ON\n> rc.unique_constraint_catalog = ccu.constraint_catalog AND\n> rc.unique_constraint_schema = ccu.constraint_schema AND\n> rc.unique_constraint_name = ccu.constraint_name\n> WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n> 'public' AND tc.constraint_type <> 'CHECK'\n> ) as a\n> where constraint_type = 'FOREIGN KEY'\n>\n>\n> postgres 10 plan\n> https://explain.depesz.com/s/mEmv\n>\n> postgres 12 plan\n> https://explain.depesz.com/s/lovP\n>\n>\n>\n> --\n> Sent from:\n> https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n>\n>\n>\n\n> 
view reading information_schema is slow in PostgreSQL 12Hi, What is the PG version?   IF  PG < 12.3  THEN maybe related to this ?https://www.postgresql.org/docs/release/12.3/  ( Repair performance regression in information_schema.triggers view )Imreregrog <[email protected]> ezt írta (időpont: 2020. jún. 12., P, 20:26):I'm facing performance issues migrating from postgres 10 to 12 (also from 11\nto 12) even with a new DB.\nTh performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n\nI have a view that abstracts the data in the database:\n\nCREATE OR REPLACE VIEW public.my_constraints\nAS SELECT lower(tc.constraint_name) AS constraint_name,\n    tc.constraint_type,\n    tc.table_schema,\n    lower(tc.table_name) AS table_name,\n    lower(kcu.column_name) AS column_name,\n    ccu.table_schema AS reference_table_schema,\n    lower(ccu.table_name) AS reference_table_name,\n    lower(ccu.column_name) AS reference_column_name,\n    rc.update_rule,\n    rc.delete_rule\n   FROM information_schema.table_constraints tc\n     LEFT JOIN information_schema.key_column_usage kcu ON\ntc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\nkcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n     LEFT JOIN information_schema.referential_constraints rc ON\ntc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\nrc.constraint_schema AND tc.constraint_name = rc.constraint_name\n     LEFT JOIN information_schema.constraint_column_usage ccu ON\nrc.unique_constraint_catalog = ccu.constraint_catalog AND\nrc.unique_constraint_schema = ccu.constraint_schema AND\nrc.unique_constraint_name = ccu.constraint_name\n  WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n'public' AND tc.constraint_type <> 'CHECK';\n\nThe simple query: select * from my_constraints is normal but as soon as I\nadd where constraint_type = 'FOREIGN KEY' it takes a lot of time.\nI don't have data in my tables at the moment, I have around 600 tables in my\nschema.\n\nI've analyzed the query but can't figure out what's wrong, this is the query\nwith the filter without the view:\n\n  select * from (SELECT lower(tc.constraint_name) AS constraint_name,\n    tc.constraint_type,\n    tc.table_schema,\n    lower(tc.table_name) AS table_name,\n    lower(kcu.column_name) AS column_name,\n    ccu.table_schema AS reference_table_schema,\n    lower(ccu.table_name) AS reference_table_name,\n    lower(ccu.column_name) AS reference_column_name,\n    rc.update_rule,\n    rc.delete_rule\n   FROM information_schema.table_constraints tc\n     LEFT JOIN information_schema.key_column_usage kcu ON\ntc.constraint_catalog = kcu.constraint_catalog AND tc.constraint_schema =\nkcu.constraint_schema AND tc.constraint_name = kcu.constraint_name\n     LEFT JOIN information_schema.referential_constraints rc ON\ntc.constraint_catalog = rc.constraint_catalog AND tc.constraint_schema =\nrc.constraint_schema AND tc.constraint_name = rc.constraint_name\n     LEFT JOIN information_schema.constraint_column_usage ccu ON\nrc.unique_constraint_catalog = ccu.constraint_catalog AND\nrc.unique_constraint_schema = ccu.constraint_schema AND\nrc.unique_constraint_name = ccu.constraint_name\n  WHERE tc.constraint_catalog = 'my_catalog' AND tc.constraint_schema =\n'public' AND tc.constraint_type <> 'CHECK'\n  ) as a\n  where constraint_type = 'FOREIGN KEY'\n\n\npostgres 10 plan\nhttps://explain.depesz.com/s/mEmv\n\npostgres 12 plan\nhttps://explain.depesz.com/s/lovP\n\n\n\n--\nSent from: 
https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Fri, 12 Jun 2020 21:00:43 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "regrog <[email protected]> writes:\n> I'm facing performance issues migrating from postgres 10 to 12 (also from 11\n> to 12) even with a new DB.\n> The simple query: select * from my_constraints is normal but as soon as I\n> add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n\nI looked at this a bit. I see what's going on, but I don't see an easy\nworkaround :-(. The information_schema.table_constraints view contains\na UNION ALL, which in your v10 query produces this part of the plan:\n\n -> Append (cost=0.29..1127.54 rows=316 width=192) (actual time=0.068..11.116 rows=1839 loops=1)\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.29..226.26 rows=1 width=192) (actual time=0.068..10.952 rows=1839 loops=1)\n -> Result (cost=0.29..226.25 rows=1 width=288) (actual time=0.067..10.707 rows=1839 loops=1)\n One-Time Filter: (((current_database())::information_schema.sql_identifier)::text = 'testzeal'::text)\n -> Nested Loop (cost=0.29..226.25 rows=1 width=288) (actual time=0.055..10.454 rows=1839 loops=1)\n ...\n -> Subquery Scan on \"*SELECT* 2\" (cost=1.44..901.27 rows=315 width=192) (actual time=0.001..0.001 rows=0 loops=1)\n -> Result (cost=1.44..898.12 rows=315 width=288) (actual time=0.001..0.001 rows=0 loops=1)\n One-Time Filter: (((('CHECK'::character varying)::information_schema.character_data)::text <> 'CHECK'::text) AND (((current_database())::information_schema.sql_identifier)::text = 'testzeal'::text) AND ((('CHECK' (...)\n -> Nested Loop (cost=1.44..898.12 rows=315 width=288) (never executed)\n ...\n\nThe first clause in that \"One-Time Filter\" arises from your view's\n\"tc.constraint_type <> 'CHECK'\" condition. It's obviously constant-false,\nbut the v10 planner can't quite prove that because of the domain cast\nthat's in the way. So the second arm of the UNION doesn't contribute any\nactual result rows, but nonetheless it adds 315 rows to the estimated\noutput of the Append. In v12, this same UNION produces just this:\n\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.29..199.30 rows=1 width=352) (actual time=0.382..45.343 rows=1848 loops=1)\n -> Result (cost=0.29..199.29 rows=1 width=512) (actual time=0.381..44.384 rows=1848 loops=1)\n One-Time Filter: (((current_database())::information_schema.sql_identifier)::text = 'testzeal'::text)\n -> Nested Loop (cost=0.29..199.28 rows=1 width=257) (actual time=0.376..40.953 rows=1848 loops=1)\n ...\n\nThe v12 planner is able to see through the domain cast, prove that\n'CHECK' <> 'CHECK' is constant false, and thereby toss the entire second\nhalf of the UNION as being a no-op. Great work! Except that now, the\nestimated output rowcount is just one row not 316, which causes the\nentire shape of the surrounding plan to change, to a form that is pretty\nawful when the output rowcount is actually 1800-some. The rowcount\nestimates for the two UNION arms were just as lousy in v10, but it quite\naccidentally fell into an overall estimate that was at least within an\norder of magnitude of reality, allowing it to produce an overall plan\nthat didn't suck.\n\nTo get a decent plan out of v12, the problem is to get it to produce\na better rowcount estimate for the first arm of table_constraints'\nUNION. 
We don't necessarily need it to match the 1800 reality, but\nwe need it to be more than 1. Unfortunately there's no simple way\nto affect that. The core misestimate is here:\n\n -> Seq Scan on pg_constraint c_1 (cost=0.00..192.60 rows=14 width=73) (actual time=0.340..3.962 rows=1848 loops=1)\n Filter: ((contype <> ALL ('{t,x}'::\"char\"[])) AND ((CASE contype WHEN 'c'::\"char\" THEN 'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN KEY'::text WHEN 'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\" THEN 'UNIQUE'::text ELSE NULL::text END)::text <> 'CHECK'::text) AND ((CASE contype WHEN 'c'::\"char\" THEN 'CHECK'::text WHEN 'f'::\"char\" THEN 'FOREIGN KEY'::text WHEN 'p'::\"char\" THEN 'PRIMARY KEY'::text WHEN 'u'::\"char\" THEN 'UNIQUE'::text ELSE NULL::text END)::text = 'FOREIGN KEY'::text))\n Rows Removed by Filter: 1052\n\nI expect you're getting a fairly decent estimate for the \"contype <>\nALL\" condition, but the planner has no idea what to make of the CASE\nconstruct, so it just falls back to a hard-wired default estimate.\n\nI don't have any good suggestions at the moment. If you had a lot more\ntables (hence more rows in pg_constraint) the plan would likely shift\nto something tolerable even with the crummy selectivity estimate for the\nCASE. But where you are, it's hard. A conceivable workaround is to\ndrop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\nwould resurrect that UNION arm and probably get you back to something\nsimilar to the v10 plan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jun 2020 23:11:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Sat, 13 Jun 2020 at 06:26, regrog <[email protected]> wrote:\n>\n> I'm facing performance issues migrating from postgres 10 to 12 (also from 11\n> to 12) even with a new DB.\n> Th performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n\nThis appears to be down to bad statistics that cause pg12 to choose a\nnested loop plan. The pg12 plan has:\n\n-> Hash Join (cost=1281.91..2934.18 rows=68 width=192) (actual\ntime=0.024..21.915 rows=3538 loops=1848)\"\n\non the inner side of a nested loop. 21.915 * 1848 loops is 40498.92\nms, so most of the time.\n\nThis comes down to the difference caused by 04fe805a17, where after\nthat commit we don't bother looking at the NOT NULL constraints in\ntable_constraints.\n\nexplain select * from (select * from\ninformation_schema.table_constraints) c where constraint_type <>\n'CHECK';\n\nIf you execute the above on both instances, you'll see PG12 does not\ndo an Append. PG10 does. 
Which results in more rows being estimated\nand the planner choosing something better than a nested loop join.\n\nYou could try: SET enable_nestloop TO off;\n\nI'm not really sure there's much you could do to improve the\nstatistics on the catalogue tables.\n\nAlternatively, you could write a view based directly on the base\ntables, bypassing information_schema completely.\n\nDavid\n\n\n", "msg_date": "Sat, 13 Jun 2020 15:15:15 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Sat, 13 Jun 2020 at 15:11, Tom Lane <[email protected]> wrote:\n> I expect you're getting a fairly decent estimate for the \"contype <>\n> ALL\" condition, but the planner has no idea what to make of the CASE\n> construct, so it just falls back to a hard-wired default estimate.\n\nThis feels quite similar to [1].\n\nI wondered if it would be more simple to add some smarts to look a bit\ndeeper into case statements for selectivity estimation purposes. An\nOpExpr like:\n\nCASE c.contype WHEN 'c' THEN 'CHECK' WHEN 'f' THEN 'FOREIGN KEY' WHEN\n'p' THEN 'PRIMARY KEY' WHEN 'u' THEN 'UNIQUE' END = 'CHECK';\n\ncould be simplified to c.contype = 'c', which we should have\nstatistics for. There'd certainly be case statement forms that\ncouldn't be simplified, but I think this one could.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvr%2B6%3D7SZBAtesEavgOQ0ZC03syaRQk19E%2B%2BpiWLopTRbg%40mail.gmail.com#3ec465f343f1204446941df29fc9e715\n\n\n", "msg_date": "Sat, 13 Jun 2020 15:55:46 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 13 Jun 2020 at 15:11, Tom Lane <[email protected]> wrote:\n>> I expect you're getting a fairly decent estimate for the \"contype <>\n>> ALL\" condition, but the planner has no idea what to make of the CASE\n>> construct, so it just falls back to a hard-wired default estimate.\n\n> This feels quite similar to [1].\n\nYeah, it's the same thing. As I commented in that thread, I'd seen\napplications of the idea in information_schema views -- it's the\nsame principle of a view exposing a CASE construct that translates\na catalog column to what the SQL spec says should be returned, and\nthen the calling query trying to constrain that output.\n\n> I wondered if it would be more simple to add some smarts to look a bit\n> deeper into case statements for selectivity estimation purposes. An\n> OpExpr like:\n> CASE c.contype WHEN 'c' THEN 'CHECK' WHEN 'f' THEN 'FOREIGN KEY' WHEN\n> 'p' THEN 'PRIMARY KEY' WHEN 'u' THEN 'UNIQUE' END = 'CHECK';\n\nHm. 
Maybe we could reasonably assume that the equality operators used\nfor such constructs are error-and-side-effect-free, thus dodging the\nsemantic problem I mentioned in the other thread?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Jun 2020 00:07:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\n> regrog <[email protected]> writes:\n> > I'm facing performance issues migrating from postgres 10 to 12 (also from 11\n> > to 12) even with a new DB.\n> > The simple query: select * from my_constraints is normal but as soon as I\n> > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n> \n> I looked at this a bit. I see what's going on, but I don't see an easy\n> workaround :-(. The information_schema.table_constraints view contains\n> a UNION ALL, which in your v10 query produces this part of the plan:\n\n> To get a decent plan out of v12, the problem is to get it to produce\n> a better rowcount estimate for the first arm of table_constraints'\n> UNION. We don't necessarily need it to match the 1800 reality, but\n> we need it to be more than 1. Unfortunately there's no simple way\n> to affect that. The core misestimate is here:\n\n> I expect you're getting a fairly decent estimate for the \"contype <>\n> ALL\" condition, but the planner has no idea what to make of the CASE\n> construct, so it just falls back to a hard-wired default estimate.\n> \n> I don't have any good suggestions at the moment. If you had a lot more\n> tables (hence more rows in pg_constraint) the plan would likely shift\n> to something tolerable even with the crummy selectivity estimate for the\n> CASE. But where you are, it's hard. A conceivable workaround is to\n> drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\n> would resurrect that UNION arm and probably get you back to something\n> similar to the v10 plan.\n\nFor the purposes of making this work for v12, you might try to look at either a\ntemporary table:\n\nCREATE TEMP TABLE constraints AS SELECT * FROM information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\nANALYZE constraints;\nSELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\n\nor a CTE (which, if it works, is mostly dumb luck):\nWITH constraints AS MATERIALIZED (SELECT * FROM information_schema.table_constraints) SELECT * FROM constraints WHERE constraint_type='FOREIGN KEY';\n\nOr make a copy of the system view with hacks for the worst misestimates (like\ncontype<>'c' instead of constraint_type<>'CHECK').\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Jun 2020 23:34:43 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "so 13. 6. 2020 v 6:34 odesílatel Justin Pryzby <[email protected]>\nnapsal:\n\n> On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\n> > regrog <[email protected]> writes:\n> > > I'm facing performance issues migrating from postgres 10 to 12 (also\n> from 11\n> > > to 12) even with a new DB.\n> > > The simple query: select * from my_constraints is normal but as soon\n> as I\n> > > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n> >\n> > I looked at this a bit. I see what's going on, but I don't see an easy\n> > workaround :-(. 
The information_schema.table_constraints view contains\n> > a UNION ALL, which in your v10 query produces this part of the plan:\n>\n> > To get a decent plan out of v12, the problem is to get it to produce\n> > a better rowcount estimate for the first arm of table_constraints'\n> > UNION. We don't necessarily need it to match the 1800 reality, but\n> > we need it to be more than 1. Unfortunately there's no simple way\n> > to affect that. The core misestimate is here:\n>\n> > I expect you're getting a fairly decent estimate for the \"contype <>\n> > ALL\" condition, but the planner has no idea what to make of the CASE\n> > construct, so it just falls back to a hard-wired default estimate.\n> >\n> > I don't have any good suggestions at the moment. If you had a lot more\n> > tables (hence more rows in pg_constraint) the plan would likely shift\n> > to something tolerable even with the crummy selectivity estimate for the\n> > CASE. But where you are, it's hard. A conceivable workaround is to\n> > drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\n> > would resurrect that UNION arm and probably get you back to something\n> > similar to the v10 plan.\n>\n> For the purposes of making this work for v12, you might try to look at\n> either a\n> temporary table:\n>\n> CREATE TEMP TABLE constraints AS SELECT * FROM\n> information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\n> ANALYZE constraints;\n> SELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\n>\n> or a CTE (which, if it works, is mostly dumb luck):\n> WITH constraints AS MATERIALIZED (SELECT * FROM\n> information_schema.table_constraints) SELECT * FROM constraints WHERE\n> constraint_type='FOREIGN KEY';\n>\n> Or make a copy of the system view with hacks for the worst misestimates\n> (like\n> contype<>'c' instead of constraint_type<>'CHECK').\n>\n\nTomas Vondra is working on functional statistics. Can it be the solution of\nCASE issue?\n\nRegards\n\nPavel\n\n\n>\n> --\n> Justin\n>\n>\n>\n\nso 13. 6. 2020 v 6:34 odesílatel Justin Pryzby <[email protected]> napsal:On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\n> regrog <[email protected]> writes:\n> > I'm facing performance issues migrating from postgres 10 to 12 (also from 11\n> > to 12) even with a new DB.\n> > The simple query: select * from my_constraints is normal but as soon as I\n> > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n> \n> I looked at this a bit.  I see what's going on, but I don't see an easy\n> workaround :-(.  The information_schema.table_constraints view contains\n> a UNION ALL, which in your v10 query produces this part of the plan:\n\n> To get a decent plan out of v12, the problem is to get it to produce\n> a better rowcount estimate for the first arm of table_constraints'\n> UNION.  We don't necessarily need it to match the 1800 reality, but\n> we need it to be more than 1.  Unfortunately there's no simple way\n> to affect that.  The core misestimate is here:\n\n> I expect you're getting a fairly decent estimate for the \"contype <>\n> ALL\" condition, but the planner has no idea what to make of the CASE\n> construct, so it just falls back to a hard-wired default estimate.\n> \n> I don't have any good suggestions at the moment.  If you had a lot more\n> tables (hence more rows in pg_constraint) the plan would likely shift\n> to something tolerable even with the crummy selectivity estimate for the\n> CASE.  But where you are, it's hard.  
A conceivable workaround is to\n> drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\n> would resurrect that UNION arm and probably get you back to something\n> similar to the v10 plan.\n\nFor the purposes of making this work for v12, you might try to look at either a\ntemporary table:\n\nCREATE TEMP TABLE constraints AS SELECT * FROM information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\nANALYZE constraints;\nSELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\n\nor a CTE (which, if it works, is mostly dumb luck):\nWITH constraints AS MATERIALIZED (SELECT * FROM information_schema.table_constraints) SELECT * FROM constraints WHERE constraint_type='FOREIGN KEY';\n\nOr make a copy of the system view with hacks for the worst misestimates (like\ncontype<>'c' instead of constraint_type<>'CHECK').Tomas Vondra is working on functional statistics. Can it be the solution of CASE issue?RegardsPavel \n\n-- \nJustin", "msg_date": "Sat, 13 Jun 2020 07:13:46 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "so 13. 6. 2020 v 7:13 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n>\n>\n> so 13. 6. 2020 v 6:34 odesílatel Justin Pryzby <[email protected]>\n> napsal:\n>\n>> On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\n>> > regrog <[email protected]> writes:\n>> > > I'm facing performance issues migrating from postgres 10 to 12 (also\n>> from 11\n>> > > to 12) even with a new DB.\n>> > > The simple query: select * from my_constraints is normal but as soon\n>> as I\n>> > > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n>> >\n>> > I looked at this a bit. I see what's going on, but I don't see an easy\n>> > workaround :-(. The information_schema.table_constraints view contains\n>> > a UNION ALL, which in your v10 query produces this part of the plan:\n>>\n>> > To get a decent plan out of v12, the problem is to get it to produce\n>> > a better rowcount estimate for the first arm of table_constraints'\n>> > UNION. We don't necessarily need it to match the 1800 reality, but\n>> > we need it to be more than 1. Unfortunately there's no simple way\n>> > to affect that. The core misestimate is here:\n>>\n>> > I expect you're getting a fairly decent estimate for the \"contype <>\n>> > ALL\" condition, but the planner has no idea what to make of the CASE\n>> > construct, so it just falls back to a hard-wired default estimate.\n>> >\n>> > I don't have any good suggestions at the moment. If you had a lot more\n>> > tables (hence more rows in pg_constraint) the plan would likely shift\n>> > to something tolerable even with the crummy selectivity estimate for the\n>> > CASE. But where you are, it's hard. A conceivable workaround is to\n>> > drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\n>> > would resurrect that UNION arm and probably get you back to something\n>> > similar to the v10 plan.\n>>\n>> For the purposes of making this work for v12, you might try to look at\n>> either a\n>> temporary table:\n>>\n>> CREATE TEMP TABLE constraints AS SELECT * FROM\n>> information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\n>> ANALYZE constraints;\n>> SELECT * FROM ... 
LEFT JOIN constraints LEFT JOIN ...\n>>\n>> or a CTE (which, if it works, is mostly dumb luck):\n>> WITH constraints AS MATERIALIZED (SELECT * FROM\n>> information_schema.table_constraints) SELECT * FROM constraints WHERE\n>> constraint_type='FOREIGN KEY';\n>>\n>> Or make a copy of the system view with hacks for the worst misestimates\n>> (like\n>> contype<>'c' instead of constraint_type<>'CHECK').\n>>\n>\n> Tomas Vondra is working on functional statistics. Can it be the solution\n> of CASE issue?\n>\n\nand maybe workaround. Can we use functional index there. It has a\nstatistics.\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> --\n>> Justin\n>>\n>>\n>>\n\nso 13. 6. 2020 v 7:13 odesílatel Pavel Stehule <[email protected]> napsal:so 13. 6. 2020 v 6:34 odesílatel Justin Pryzby <[email protected]> napsal:On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\n> regrog <[email protected]> writes:\n> > I'm facing performance issues migrating from postgres 10 to 12 (also from 11\n> > to 12) even with a new DB.\n> > The simple query: select * from my_constraints is normal but as soon as I\n> > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\n> \n> I looked at this a bit.  I see what's going on, but I don't see an easy\n> workaround :-(.  The information_schema.table_constraints view contains\n> a UNION ALL, which in your v10 query produces this part of the plan:\n\n> To get a decent plan out of v12, the problem is to get it to produce\n> a better rowcount estimate for the first arm of table_constraints'\n> UNION.  We don't necessarily need it to match the 1800 reality, but\n> we need it to be more than 1.  Unfortunately there's no simple way\n> to affect that.  The core misestimate is here:\n\n> I expect you're getting a fairly decent estimate for the \"contype <>\n> ALL\" condition, but the planner has no idea what to make of the CASE\n> construct, so it just falls back to a hard-wired default estimate.\n> \n> I don't have any good suggestions at the moment.  If you had a lot more\n> tables (hence more rows in pg_constraint) the plan would likely shift\n> to something tolerable even with the crummy selectivity estimate for the\n> CASE.  But where you are, it's hard.  A conceivable workaround is to\n> drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\n> would resurrect that UNION arm and probably get you back to something\n> similar to the v10 plan.\n\nFor the purposes of making this work for v12, you might try to look at either a\ntemporary table:\n\nCREATE TEMP TABLE constraints AS SELECT * FROM information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\nANALYZE constraints;\nSELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\n\nor a CTE (which, if it works, is mostly dumb luck):\nWITH constraints AS MATERIALIZED (SELECT * FROM information_schema.table_constraints) SELECT * FROM constraints WHERE constraint_type='FOREIGN KEY';\n\nOr make a copy of the system view with hacks for the worst misestimates (like\ncontype<>'c' instead of constraint_type<>'CHECK').Tomas Vondra is working on functional statistics. Can it be the solution of CASE issue?and maybe workaround.  Can we use functional index there. It has a statistics.PavelRegardsPavel \n\n-- \nJustin", "msg_date": "Sat, 13 Jun 2020 07:15:11 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "so 13. 6. 
2020 v 7:15 odesílatel Pavel Stehule <[email protected]>\r\nnapsal:\r\n\r\n>\r\n>\r\n> so 13. 6. 2020 v 7:13 odesílatel Pavel Stehule <[email protected]>\r\n> napsal:\r\n>\r\n>>\r\n>>\r\n>> so 13. 6. 2020 v 6:34 odesílatel Justin Pryzby <[email protected]>\r\n>> napsal:\r\n>>\r\n>>> On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\r\n>>> > regrog <[email protected]> writes:\r\n>>> > > I'm facing performance issues migrating from postgres 10 to 12 (also\r\n>>> from 11\r\n>>> > > to 12) even with a new DB.\r\n>>> > > The simple query: select * from my_constraints is normal but as soon\r\n>>> as I\r\n>>> > > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\r\n>>> >\r\n>>> > I looked at this a bit. I see what's going on, but I don't see an easy\r\n>>> > workaround :-(. The information_schema.table_constraints view contains\r\n>>> > a UNION ALL, which in your v10 query produces this part of the plan:\r\n>>>\r\n>>> > To get a decent plan out of v12, the problem is to get it to produce\r\n>>> > a better rowcount estimate for the first arm of table_constraints'\r\n>>> > UNION. We don't necessarily need it to match the 1800 reality, but\r\n>>> > we need it to be more than 1. Unfortunately there's no simple way\r\n>>> > to affect that. The core misestimate is here:\r\n>>>\r\n>>> > I expect you're getting a fairly decent estimate for the \"contype <>\r\n>>> > ALL\" condition, but the planner has no idea what to make of the CASE\r\n>>> > construct, so it just falls back to a hard-wired default estimate.\r\n>>> >\r\n>>> > I don't have any good suggestions at the moment. If you had a lot more\r\n>>> > tables (hence more rows in pg_constraint) the plan would likely shift\r\n>>> > to something tolerable even with the crummy selectivity estimate for\r\n>>> the\r\n>>> > CASE. But where you are, it's hard. A conceivable workaround is to\r\n>>> > drop the \"tc.constraint_type <> 'CHECK'\" condition from your view,\r\n>>> which\r\n>>> > would resurrect that UNION arm and probably get you back to something\r\n>>> > similar to the v10 plan.\r\n>>>\r\n>>> For the purposes of making this work for v12, you might try to look at\r\n>>> either a\r\n>>> temporary table:\r\n>>>\r\n>>> CREATE TEMP TABLE constraints AS SELECT * FROM\r\n>>> information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\r\n>>> ANALYZE constraints;\r\n>>> SELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\r\n>>>\r\n>>> or a CTE (which, if it works, is mostly dumb luck):\r\n>>> WITH constraints AS MATERIALIZED (SELECT * FROM\r\n>>> information_schema.table_constraints) SELECT * FROM constraints WHERE\r\n>>> constraint_type='FOREIGN KEY';\r\n>>>\r\n>>> Or make a copy of the system view with hacks for the worst misestimates\r\n>>> (like\r\n>>> contype<>'c' instead of constraint_type<>'CHECK').\r\n>>>\r\n>>\r\n>> Tomas Vondra is working on functional statistics. Can it be the solution\r\n>> of CASE issue?\r\n>>\r\n>\r\n> and maybe workaround. Can we use functional index there. 
It has a\r\n> statistics.\r\n>\r\n\r\ncreate table foo(a int);\r\ninsert into foo select random()* 3 from generate_series(1,1000000);\r\ncreate view x as select case when a = 0 then 'Ahoj' when a = 1 then\r\n'nazdar' when a = 2 then 'Hi' end from foo;\r\nanalyze foo;\r\n\r\npostgres=# explain analyze select * from x where \"case\" = 'Ahoj';\r\n┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│\r\nQUERY PLAN\r\n │\r\n╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Gather (cost=1000.00..14273.96 rows=5000 width=32) (actual\r\ntime=1.265..129.771 rows=166744 loops=1)\r\n │\r\n│ Workers Planned: 2\r\n\r\n │\r\n│ Workers Launched: 2\r\n\r\n │\r\n│ -> Parallel Seq Scan on foo (cost=0.00..12773.96 rows=2083 width=32)\r\n(actual time=0.031..63.663 rows=55581 loops=3)\r\n │\r\n│ Filter: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN\r\n'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END =\r\n'Ahoj'::text) │\r\n│ Rows Removed by Filter: 277752\r\n\r\n │\r\n│ Planning Time: 0.286 ms\r\n\r\n │\r\n│ Execution Time: 137.538 ms\r\n\r\n │\r\n└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(8 rows)\r\n\r\ncreate index on foo((CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN\r\n'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END));\r\nanalyze foo;\r\n\r\npostgres=# explain analyze select * from x where \"case\" = 'Ahoj';\r\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n│\r\nQUERY PLAN\r\n\r\n╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\r\n│ Bitmap Heap Scan on foo (cost=1862.67..10880.17 rows=167000 width=32)\r\n(actual time=16.992..65.300 rows=166744 loops=1)\r\n\r\n│ Recheck Cond: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN\r\n'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END =\r\n'Ahoj'::text)\r\n│ Heap Blocks: exact=4425\r\n\r\n\r\n│ -> Bitmap Index Scan on foo_case_idx (cost=0.00..1820.92 rows=167000\r\nwidth=0) (actual time=16.293..16.293 rows=166744 loops=1)\r\n\r\n│ Index Cond: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1)\r\nTHEN 'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END =\r\n'Ahoj'::tex\r\n│ Planning Time: 0.768 ms\r\n\r\n\r\n│ Execution Time: 72.098 ms\r\n\r\n\r\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\r\n(7 rows)\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n\r\n>\r\n> Pavel\r\n>\r\n>\r\n>> Regards\r\n>>\r\n>> Pavel\r\n>>\r\n>>\r\n>>>\r\n>>> --\r\n>>> Justin\r\n>>>\r\n>>>\r\n>>>\r\n\nso 13. 6. 2020 v 7:15 odesílatel Pavel Stehule <[email protected]> napsal:so 13. 6. 2020 v 7:13 odesílatel Pavel Stehule <[email protected]> napsal:so 13. 6. 
2020 v 6:34 odesílatel Justin Pryzby <[email protected]> napsal:On Fri, Jun 12, 2020 at 11:11:09PM -0400, Tom Lane wrote:\r\n> regrog <[email protected]> writes:\r\n> > I'm facing performance issues migrating from postgres 10 to 12 (also from 11\r\n> > to 12) even with a new DB.\r\n> > The simple query: select * from my_constraints is normal but as soon as I\r\n> > add where constraint_type = 'FOREIGN KEY' it takes a lot of time.\r\n> \r\n> I looked at this a bit.  I see what's going on, but I don't see an easy\r\n> workaround :-(.  The information_schema.table_constraints view contains\r\n> a UNION ALL, which in your v10 query produces this part of the plan:\n\r\n> To get a decent plan out of v12, the problem is to get it to produce\r\n> a better rowcount estimate for the first arm of table_constraints'\r\n> UNION.  We don't necessarily need it to match the 1800 reality, but\r\n> we need it to be more than 1.  Unfortunately there's no simple way\r\n> to affect that.  The core misestimate is here:\n\r\n> I expect you're getting a fairly decent estimate for the \"contype <>\r\n> ALL\" condition, but the planner has no idea what to make of the CASE\r\n> construct, so it just falls back to a hard-wired default estimate.\r\n> \r\n> I don't have any good suggestions at the moment.  If you had a lot more\r\n> tables (hence more rows in pg_constraint) the plan would likely shift\r\n> to something tolerable even with the crummy selectivity estimate for the\r\n> CASE.  But where you are, it's hard.  A conceivable workaround is to\r\n> drop the \"tc.constraint_type <> 'CHECK'\" condition from your view, which\r\n> would resurrect that UNION arm and probably get you back to something\r\n> similar to the v10 plan.\n\r\nFor the purposes of making this work for v12, you might try to look at either a\r\ntemporary table:\n\r\nCREATE TEMP TABLE constraints AS SELECT * FROM information_schema.table_constraints WHERE constraint_type='FOREIGN KEY';\r\nANALYZE constraints;\r\nSELECT * FROM ... LEFT JOIN constraints LEFT JOIN ...\n\r\nor a CTE (which, if it works, is mostly dumb luck):\r\nWITH constraints AS MATERIALIZED (SELECT * FROM information_schema.table_constraints) SELECT * FROM constraints WHERE constraint_type='FOREIGN KEY';\n\r\nOr make a copy of the system view with hacks for the worst misestimates (like\r\ncontype<>'c' instead of constraint_type<>'CHECK').Tomas Vondra is working on functional statistics. Can it be the solution of CASE issue?and maybe workaround.  Can we use functional index there. 
It has a statistics.create table foo(a int);insert into foo select random()* 3 from generate_series(1,1000000);create view x as select case when a = 0 then 'Ahoj' when a = 1 then 'nazdar' when a = 2 then 'Hi' end from foo;analyze foo;postgres=# explain analyze select * from x where \"case\" = 'Ahoj';┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐│                                                                       QUERY PLAN                                                                       │╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡│ Gather  (cost=1000.00..14273.96 rows=5000 width=32) (actual time=1.265..129.771 rows=166744 loops=1)                                                   ││   Workers Planned: 2                                                                                                                                   ││   Workers Launched: 2                                                                                                                                  ││   ->  Parallel Seq Scan on foo  (cost=0.00..12773.96 rows=2083 width=32) (actual time=0.031..63.663 rows=55581 loops=3)                                ││         Filter: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN 'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END = 'Ahoj'::text) ││         Rows Removed by Filter: 277752                                                                                                                 ││ Planning Time: 0.286 ms                                                                                                                                ││ Execution Time: 137.538 ms                                                                                                                             │└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘(8 rows)create index on foo((CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN 'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END));analyze foo;postgres=# explain analyze select * from x where \"case\" = 'Ahoj';┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────│                                                                         QUERY PLAN                                                                      ╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════│ Bitmap Heap Scan on foo  (cost=1862.67..10880.17 rows=167000 width=32) (actual time=16.992..65.300 rows=166744 loops=1)                                 │   Recheck Cond: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN 'nazdar'::text WHEN (a = 2) THEN 'Hi'::text ELSE NULL::text END = 'Ahoj'::text)  │   Heap Blocks: exact=4425                                                                                                                               │   ->  Bitmap Index Scan on foo_case_idx  (cost=0.00..1820.92 rows=167000 width=0) (actual time=16.293..16.293 rows=166744 loops=1)                      │         Index Cond: (CASE WHEN (a = 0) THEN 'Ahoj'::text WHEN (a = 1) THEN 'nazdar'::text WHEN (a 
= 2) THEN 'Hi'::text ELSE NULL::text END = 'Ahoj'::tex│ Planning Time: 0.768 ms                                                                                                                                 │ Execution Time: 72.098 ms                                                                                                                               └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────(7 rows)RegardsPavel PavelRegardsPavel \n\r\n-- \r\nJustin", "msg_date": "Sat, 13 Jun 2020 07:23:04 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Sat, 13 Jun 2020 at 16:07, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I wondered if it would be more simple to add some smarts to look a bit\n> > deeper into case statements for selectivity estimation purposes. An\n> > OpExpr like:\n> > CASE c.contype WHEN 'c' THEN 'CHECK' WHEN 'f' THEN 'FOREIGN KEY' WHEN\n> > 'p' THEN 'PRIMARY KEY' WHEN 'u' THEN 'UNIQUE' END = 'CHECK';\n>\n> Hm. Maybe we could reasonably assume that the equality operators used\n> for such constructs are error-and-side-effect-free, thus dodging the\n> semantic problem I mentioned in the other thread?\n\nI'm only really talking about selectivity estimation only for now.\nI'm not really sure why we'd need to ensure that the equality operator\nis error and side effect free. We'd surely only be executing the case\nstatement's operator's oprrest function? We'd need to ensure we don't\ninvoke any casts that could error out.\n\nDavid\n\n\n", "msg_date": "Sat, 13 Jun 2020 19:52:44 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Sat, 13 Jun 2020 at 19:52, David Rowley <[email protected]> wrote:\n>\n> On Sat, 13 Jun 2020 at 16:07, Tom Lane <[email protected]> wrote:\n> >\n> > David Rowley <[email protected]> writes:\n> > > I wondered if it would be more simple to add some smarts to look a bit\n> > > deeper into case statements for selectivity estimation purposes. An\n> > > OpExpr like:\n> > > CASE c.contype WHEN 'c' THEN 'CHECK' WHEN 'f' THEN 'FOREIGN KEY' WHEN\n> > > 'p' THEN 'PRIMARY KEY' WHEN 'u' THEN 'UNIQUE' END = 'CHECK';\n> >\n> > Hm. Maybe we could reasonably assume that the equality operators used\n> > for such constructs are error-and-side-effect-free, thus dodging the\n> > semantic problem I mentioned in the other thread?\n>\n> I'm only really talking about selectivity estimation only for now.\n> I'm not really sure why we'd need to ensure that the equality operator\n> is error and side effect free. We'd surely only be executing the case\n> statement's operator's oprrest function? We'd need to ensure we don't\n> invoke any casts that could error out.\n\nHmm, after a bit of thought I now see what you mean. We'd need to\nloop through each WHEN clause to ensure there's a Const and check if\nthat Const is equal to the Const on the other side of the OpExpr, then\nselect the first match. 
That, of course, must perform a comparison,\nbut, that's not really doing anything additional to what constant\nfolding code already does, is it?\n\nDavid\n\n\n", "msg_date": "Sat, 13 Jun 2020 20:22:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 13 Jun 2020 at 19:52, David Rowley <[email protected]> wrote:\n>> On Sat, 13 Jun 2020 at 16:07, Tom Lane <[email protected]> wrote:\n>>> Hm. Maybe we could reasonably assume that the equality operators used\n>>> for such constructs are error-and-side-effect-free, thus dodging the\n>>> semantic problem I mentioned in the other thread?\n\n>> I'm only really talking about selectivity estimation only for now.\n>> I'm not really sure why we'd need to ensure that the equality operator\n>> is error and side effect free. We'd surely only be executing the case\n>> statement's operator's oprrest function? We'd need to ensure we don't\n>> invoke any casts that could error out.\n\n> Hmm, after a bit of thought I now see what you mean.\n\nNo, you were right the first time: we're considering different things.\nI was wondering about how to constant-fold a \"CASE = constant\" construct\nas was being requested in the other thread. Obviously, if that succeeds\nthen it'll simplify selectivity estimation too --- but it's reasonable\nto also think about what to do for \"CASE = constant\" in selectivity\nestimation, because with or without such a constant-folding rule,\nthere would be lots of cases that the rule fails to simplify. Further\nwe should be thinking about how to get some estimate for cases that\nthe folding rule would fail at, so I'm not sure that we ought to restrict\nour thoughts to constant comparisons.\n\nIn the cases I've seen so far, even a rule as dumb as \"if the CASE has\nN arms then estimate selectivity as 1/N\" would be a lot better than\nwhat we get now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Jun 2020 13:06:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "On Fri, Jun 12, 2020 at 12:26 PM regrog <[email protected]> wrote:\n\n> I'm facing performance issues migrating from postgres 10 to 12 (also from\n> 11\n> to 12) even with a new DB.\n> Th performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n>\n> I have a view that abstracts the data in the database:\n>\n> CREATE OR REPLACE VIEW public.my_constraints\n>\n\n\nAssuming your DDL changes fairly seldomly, and you already have a well\nstructured deployment process in place for that, perhaps just change this\nto a materialized view and refresh (concurrently) after any DDL gets\nexecuted. 
That way, you have stats on what your view has in it and are not\nsubject to issues with planning the execution of the query in this view.\n\nOn Fri, Jun 12, 2020 at 12:26 PM regrog <[email protected]> wrote:I'm facing performance issues migrating from postgres 10 to 12 (also from 11\nto 12) even with a new DB.\nTh performance difference is huge 300ms in pg10 vs 3 minutes in pg12.\n\nI have a view that abstracts the data in the database:\n\nCREATE OR REPLACE VIEW public.my_constraintsAssuming your DDL changes fairly seldomly, and you already have a well structured deployment process in place for that, perhaps just change this to a materialized view and refresh (concurrently) after any DDL gets executed. That way, you have stats on what your view has in it and are not subject to issues with planning the execution of the query in this view.", "msg_date": "Mon, 15 Jun 2020 12:21:40 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" }, { "msg_contents": "I tested both postgres 12.3 and 13 beta 1 and the results are the same.\n\nI could read the pg_ tables instead of the views in the information_schema\nbut that's the SQL standard schema so I'd prefer to stick to that.\n\nI reported this issue because the performance gap is huge and that could be\nuseful to bring in some improvements.\n\nThe DDL is still evolving so a materialized table/view is not an option at\nthe moment.\n\nI'll try to remove the <> 'CHECK' clause, I'm quite sure we needed that for\nsome reason but I didn't follow that change.\n\nThanks\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n\n", "msg_date": "Tue, 16 Jun 2020 02:23:14 -0700 (MST)", "msg_from": "regrog <[email protected]>", "msg_from_op": true, "msg_subject": "Re: view reading information_schema is slow in PostgreSQL 12" } ]
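A minimal sketch of the temp-table workaround Justin Pryzby suggests in the thread above: materialize just the foreign-key rows of information_schema.table_constraints and ANALYZE them, so the planner works with real row counts instead of the hard-wired default estimate for the CASE expression inside the view. The temp-table name and the final join against key_column_usage are illustrative only, not the poster's actual view definition.

CREATE TEMP TABLE fk_constraints AS
SELECT *
FROM information_schema.table_constraints
WHERE constraint_type = 'FOREIGN KEY';

ANALYZE fk_constraints;

-- join the analyzed temp table instead of the information_schema view
SELECT kcu.table_name, kcu.column_name, fk.constraint_name
FROM fk_constraints fk
JOIN information_schema.key_column_usage kcu
  ON kcu.constraint_schema = fk.constraint_schema
 AND kcu.constraint_name  = fk.constraint_name;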
[ { "msg_contents": "Hello \n\n\nMy PostgreSQL server 10.11 running on windows which are running very slow. DB has two tables with ~200Mil records in each. user queries are very slow even explain analyze also taking a longer.\n\n\nCould you please help me to tune this query and any suggestions to improve system performance?\n\nTable structures: \n\nTable1:\n\n-- Records 213621151\n\nCREATE TABLE test1\n(\n individual_entity_proxy_id bigint NOT NULL,\n household_entity_proxy_id bigint,\n individual_personal_link_sid bigint NOT NULL,\n city_name character varying(100) COLLATE pg_catalog.\"default\",\n state_prov_cd character varying(40) COLLATE pg_catalog.\"default\",\n pstl_code character varying(40) COLLATE pg_catalog.\"default\",\n npa integer,\n nxx integer,\n email_domain character varying(400) COLLATE pg_catalog.\"default\",\n email_preference character varying(40) COLLATE pg_catalog.\"default\",\n direct_mail_preference character varying(40) COLLATE pg_catalog.\"default\",\n profane_wrd_ind character(1) COLLATE pg_catalog.\"default\",\n tmo_ofnsv_name_ind character(1) COLLATE pg_catalog.\"default\",\n census_block_id character varying(40) COLLATE pg_catalog.\"default\",\n has_first_name character(1) COLLATE pg_catalog.\"default\",\n has_middle_name character(1) COLLATE pg_catalog.\"default\",\n has_last_name character(1) COLLATE pg_catalog.\"default\",\n has_email_address character(1) COLLATE pg_catalog.\"default\",\n has_individual_address character(1) COLLATE pg_catalog.\"default\",\n email_address_sid bigint,\n person_name_sid bigint,\n physical_address_sid bigint,\n telephone_number_sid bigint,\n shared_email_with_customer_ind character(1) COLLATE pg_catalog.\"default\",\n shared_paddr_with_customer_ind character(1) COLLATE pg_catalog.\"default\",\n last_contacted_email_datetime timestamp without time zone,\n last_contacted_dm_datetime timestamp without time zone,\n last_contacted_digital_datetime timestamp without time zone,\n last_contacted_anychannel_dttm timestamp without time zone,\n hard_bounce_ind integer,\n src_sys_id integer NOT NULL,\n insrt_prcs_id bigint,\n updt_prcs_id bigint,\n stg_prcs_id bigint,\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\",\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT pk_i_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id)\n\n);\nCREATE INDEX indx_prospect_indv_entty_id\n ON test1 USING btree\n (individual_entity_proxy_id )\n\n\nTable 2:\n-- Records 260652202 \n\nCREATE TABLE test2\n(\n individual_entity_proxy_id bigint NOT NULL,\n cstmr_prspct_ind character varying(40) COLLATE pg_catalog.\"default\",\n last_appnd_dttm timestamp without time zone,\n last_sprsn_dttm timestamp without time zone,\n infrrd_gender_code character varying(40) COLLATE pg_catalog.\"default\",\n govt_prison_ind character(1) COLLATE pg_catalog.\"default\",\n tax_bnkrpt_dcsd_ind character(1) COLLATE pg_catalog.\"default\",\n underbank_rank_nbr integer,\n hvy_txn_rank_nbr integer,\n prominence_nbr integer,\n ocptn_code character varying(40) COLLATE pg_catalog.\"default\",\n educ_lvl_nbr integer,\n gender_code character varying(40) COLLATE pg_catalog.\"default\",\n infrrd_hh_rank_nbr integer,\n econmc_stable_nbr integer,\n directv_sbscrbr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dish_sbscrbr_propnsty_code character 
varying(40) COLLATE pg_catalog.\"default\",\n iphone_user_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n smrt_hm_devc_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n sml_busi_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n tv_internet_bndl_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dog_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n cat_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dine_out_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n taco_bell_diner_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n auto_insrnc_byr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n src_sys_id integer NOT NULL,\n insrt_prcs_id bigint,\n updt_prcs_id bigint,\n stg_prcs_id bigint,\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT pk_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id)\n);\n\n\nUser query: \n\nexplain analyze select COUNT(*) as \"DII_1\"\n from ( select distinct table0.\"individual_entity_proxy_id\" as \"INDIVIDUAL_ENTITY_PROXY_ID\"\n from test1 table0\n inner join test2 table1\n on table0.\"individual_entity_proxy_id\" = table1.\"individual_entity_proxy_id\"\n where ((table0.\"shared_paddr_with_customer_ind\" = 'N')\n and (table0.\"profane_wrd_ind\" = 'N')\n and (table0.\"tmo_ofnsv_name_ind\" = 'N')\n and ((table0.\"last_contacted_dm_datetime\" is null)\n or (table0.\"last_contacted_dm_datetime\" < TIMESTAMP '2020-03-15 0:00:00.000000'))\n and (table0.\"has_individual_address\" = 'Y')\n and (table0.\"has_last_name\" = 'Y')\n and (table0.\"has_first_name\" = 'Y')\n and (table0.\"direct_mail_preference\" is null))\n and ((table1.\"tax_bnkrpt_dcsd_ind\" = 'N')\n and (table1.\"cstmr_prspct_ind\" = 'Prospect')\n and (table1.\"govt_prison_ind\" = 'N')) ) TXT_1;\n\nExplain Analyze :\n\n\"Aggregate (cost=5345632.91..5345632.92 rows=1 width=8) (actual time=442688.462..442688.462 rows=1 loops=1)\"\n\" -> Unique (cost=150.13..4943749.39 rows=32150682 width=8) (actual time=0.022..439964.214 rows=32368180 loops=1)\"\n\" -> Merge Join (cost=150.13..4863372.68 rows=32150682 width=8) (actual time=0.021..435818.276 rows=32368180 loops=1)\"\n\" Merge Cond: (table0.individual_entity_proxy_id = table1.individual_entity_proxy_id)\"\n\" -> Index Scan using indx_prospect_indv_entty_id on test1 table0 (cost=0.56..2493461.92 rows=32233405 width=8) (actual time=0.011..63009.551 rows=32368180 loops=1)\"\n\" Filter: ((direct_mail_preference IS NULL) AND ((last_contacted_dm_datetime IS NULL) OR (last_contacted_dm_datetime < '2020-03-15 00:00:00'::timestamp without time zone)) AND (shared_paddr_with_customer_ind = 'N'::bpchar) AND (profane_wrd_ind = 'N'::bpchar) AND (tmo_ofnsv_name_ind = 'N'::bpchar) AND (has_individual_address = 'Y'::bpchar) AND (has_last_name = 'Y'::bpchar) AND (has_first_name = 'Y'::bpchar))\"\n\" Rows Removed by Filter: 7709177\"\n\" -> Index Scan using pk_entity_proxy_id on test2 table1 (cost=0.56..1867677.94 rows=40071417 width=8) (actual time=0.008..363534.437 rows=40077727 loops=1)\"\n\" Filter: ((tax_bnkrpt_dcsd_ind = 'N'::bpchar) AND (govt_prison_ind = 'N'::bpchar) AND ((cstmr_prspct_ind)::text = 'Prospect'::text))\"\n\" Rows Removed by Filter: 94756\"\n\"Planning time: 0.400 
ms\"\n\"Execution time: 442688.523 ms\"\n\nServer config:\n\nPostgreSQL v10.11\nRAM: 380GB\nvCore: 32\nShared_buffers: 65GB\nwork_mem:104857kB\nmaintenance_work_mem:256MB\neffective_cache_size: 160GB\n\n\n\n\nhttps://dba.stackexchange.com/questions/269138/postgresql-server-running-very-slow-at-minimal-work-load\n\n\nThanks,\nRaj\n\n\n", "msg_date": "Sun, 14 Jun 2020 22:45:52 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue" }, { "msg_contents": "On Mon, 15 Jun 2020 at 10:46, Nagaraj Raj <[email protected]> wrote:\n> CREATE TABLE test1\n> (\n...\n\n> CONSTRAINT pk_i_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id)\n>\n> );\n\n> CREATE TABLE test2\n> (\n...\n\n> CONSTRAINT pk_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id)\n> );\n>\n>\n> User query:\n>\n> explain analyze select COUNT(*) as \"DII_1\"\n> from ( select distinct table0.\"individual_entity_proxy_id\" as \"INDIVIDUAL_ENTITY_PROXY_ID\"\n> from test1 table0\n> inner join test2 table1\n> on table0.\"individual_entity_proxy_id\" = table1.\"individual_entity_proxy_id\"\n\nWhy do you use \"select distinct\". It seems to me that you're putting a\ndistinct clause on the primary key of test1 and joining to another\ntable in a way that cannot cause duplicates.\n\nI imagine dropping that distinct will speed up the query quite a bit.\n\nDavid\n\n\n", "msg_date": "Mon, 15 Jun 2020 11:55:30 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue" }, { "msg_contents": "On Sun, Jun 14, 2020 at 10:45:52PM +0000, Nagaraj Raj wrote:\n> My PostgreSQL server 10.11 running on windows which are running very slow. DB has two tables with ~200Mil records in each. user queries are very slow even explain analyze also taking a longer.\n> \n> Could you please help me to tune this query and any suggestions to improve system performance?\n\n> CREATE TABLE test1\n> (\n> individual_entity_proxy_id bigint NOT NULL,\n...\n> CONSTRAINT pk_i_entity_proxy_id PRIMARY KEY (individual_entity_proxy_id)\n> \n> );\n> CREATE INDEX indx_prospect_indv_entty_id ON test1 USING btree (individual_entity_proxy_id )\n\nThis index is redundant with the primary key, which implicitly creates a unique\nindex.\n\nThe table structure seems strange: you have two tables with the same PK column,\nwhich is how they're being joined. It seems like that's better expressed as a\nsingle table with all the columns rather than separate tables (but see below).\n\n> explain analyze select COUNT(*) as \"DII_1\"\n> from ( select distinct table0.\"individual_entity_proxy_id\" as \"INDIVIDUAL_ENTITY_PROXY_ID\"\n> from test1 table0\n> inner join test2 table1\n\nI think this may be better written as something like:\n\n| SELECT COUNT(id) FROM t0 WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.id=t0.id AND ...) 
AND ...\n\nIt's guaranteed to be distinct since it's a PK column, so it doesn't need a\n\"Unique\" node.\n\nI think it might prefer an seq scan on t0, which might be good since it seems\nto be returning over 10% of records.\n\n> Explain Analyze :\n> \n> \"Aggregate (cost=5345632.91..5345632.92 rows=1 width=8) (actual time=442688.462..442688.462 rows=1 loops=1)\"\n> \" -> Unique (cost=150.13..4943749.39 rows=32150682 width=8) (actual time=0.022..439964.214 rows=32368180 loops=1)\"\n> \" -> Merge Join (cost=150.13..4863372.68 rows=32150682 width=8) (actual time=0.021..435818.276 rows=32368180 loops=1)\"\n> \" Merge Cond: (table0.individual_entity_proxy_id = table1.individual_entity_proxy_id)\"\n> \" -> Index Scan using indx_prospect_indv_entty_id on test1 table0 (cost=0.56..2493461.92 rows=32233405 width=8) (actual time=0.011..63009.551 rows=32368180 loops=1)\"\n> \" Filter: ((direct_mail_preference IS NULL) AND ((last_contacted_dm_datetime IS NULL) OR (last_contacted_dm_datetime < '2020-03-15 00:00:00'::timestamp without time zone)) AND (shared_paddr_with_customer_ind = 'N'::bpchar) AND (profane_wrd_ind = 'N'::bpchar) AND (tmo_ofnsv_name_ind = 'N'::bpchar) AND (has_individual_address = 'Y'::bpchar) AND (has_last_name = 'Y'::bpchar) AND (has_first_name = 'Y'::bpchar))\"\n> \" Rows Removed by Filter: 7709177\"\n> \" -> Index Scan using pk_entity_proxy_id on test2 table1 (cost=0.56..1867677.94 rows=40071417 width=8) (actual time=0.008..363534.437 rows=40077727 loops=1)\"\n> \" Filter: ((tax_bnkrpt_dcsd_ind = 'N'::bpchar) AND (govt_prison_ind = 'N'::bpchar) AND ((cstmr_prspct_ind)::text = 'Prospect'::text))\"\n> \" Rows Removed by Filter: 94756\"\n\nIt might help to show explain(ANALYZE,BUFFERS).\n\nIt looks like test2/table1 index scan is a lot slower than table0.\nMaybe table1 gets lots of updates, so isn't clustered on its primary key, so\nthe index scan is highly random. You could check the \"correlation\" of its PK\nID column:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nIf true, that would be a good reason to have separate tables.\n\n> vCore: 32\n\nPossibly it would be advantageous to use parallel query.\nA better query+plan might allow that.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 14 Jun 2020 19:05:16 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue" } ]
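Building on Justin's suggestion above, a sketch of how the query from the first message could be rewritten with EXISTS: individual_entity_proxy_id is the primary key of test1, so the DISTINCT adds nothing and the count can run directly over a semi-join. The filters are the same ones as in the original query; whether this actually yields a cheaper plan (for example a parallel seq scan on test1) would still need to be confirmed with EXPLAIN (ANALYZE, BUFFERS).

SELECT count(*) AS "DII_1"
FROM test1 t0
WHERE t0.direct_mail_preference IS NULL
  AND (t0.last_contacted_dm_datetime IS NULL
       OR t0.last_contacted_dm_datetime < TIMESTAMP '2020-03-15 00:00:00')
  AND t0.shared_paddr_with_customer_ind = 'N'
  AND t0.profane_wrd_ind = 'N'
  AND t0.tmo_ofnsv_name_ind = 'N'
  AND t0.has_individual_address = 'Y'
  AND t0.has_last_name = 'Y'
  AND t0.has_first_name = 'Y'
  AND EXISTS (SELECT 1
              FROM test2 t1
              WHERE t1.individual_entity_proxy_id = t0.individual_entity_proxy_id
                AND t1.tax_bnkrpt_dcsd_ind = 'N'
                AND t1.cstmr_prspct_ind = 'Prospect'
                AND t1.govt_prison_ind = 'N');

-- the "correlation" check Justin mentions: values near 1 mean the table is
-- stored roughly in primary-key order, so the PK index scan is mostly sequential
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename IN ('test1', 'test2')
  AND attname = 'individual_entity_proxy_id';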
[ { "msg_contents": "I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n\nselect T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\nfrom \"cms_prospects\".PROSPECT T0\n--inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\nleft join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" \n\n\n\"Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\n\" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n\" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\n\" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\n\n\n\nAny suggestions or help would be highly appreciated. \n\n\n\n\nBest regards,\nRj\n\n\n\n\n\n", "msg_date": "Tue, 16 Jun 2020 20:35:31 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "simple query running for ever" }, { "msg_contents": "On Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:\n\n> I wrote a simple query, and it is taking too long, not sure what is wrong\n> in it, even its not giving EXPLAIN ANALYZE.\n>\n\nMore context is needed. Please review-\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nOn Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.More context is needed. Please review-https://wiki.postgresql.org/wiki/Slow_Query_Questions", "msg_date": "Tue, 16 Jun 2020 14:43:41 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "On Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:\n> I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n\nIs this related to last week's question ?\nhttps://www.postgresql.org/message-id/1211705382.726951.1592174752720%40mail.yahoo.com\n\nWas that issue resolved ?\n\nI didn't see answers to a few questions I asked there.\n\n> select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\n> from \"cms_prospects\".PROSPECT T0\n> --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\n> left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" \n> \n> \"Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\n> \" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n> \" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\n> \" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\n> \n> Any suggestions or help would be highly appreciated. 
\n> \n> Best regards,\n> Rj\n\n\n", "msg_date": "Tue, 16 Jun 2020 15:46:59 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "Hi Justin,\n\nMy apologies, I missed that.\n\nYes, I change work mem to 2GB but didn't see any difference. So, as your suggestion removed the distinct on pk and added a multi-column index so query planner did index-only can that is fixed the issue and query completed in 1Min.\n\nBest regards,\nRj\n On Tuesday, June 16, 2020, 01:47:21 PM PDT, Justin Pryzby <[email protected]> wrote: \n \n On Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:\n> I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n\nIs this related to last week's question ?\nhttps://www.postgresql.org/message-id/1211705382.726951.1592174752720%40mail.yahoo.com\n\nWas that issue resolved ?\n\nI didn't see answers to a few questions I asked there.\n\n> select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\n> from \"cms_prospects\".PROSPECT T0\n> --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\n> left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" \n> \n> \"Merge Left Join  (cost=55.96..18147747.08 rows=213620928 width=20)\"\n> \"  Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n> \"  ->  Index Scan using pk_prospect on prospect t0  (cost=0.57..10831606.89 rows=213620928 width=16)\"\n> \"  ->  Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2  (cost=0.57..5013756.93 rows=260652064 width=12)\"\n> \n> Any suggestions or help would be highly appreciated. \n> \n> Best regards,\n> Rj\n\n\n \n Hi Justin,My apologies, I missed that.Yes, I change work mem to 2GB but didn't see any difference. So, as your suggestion removed the distinct on pk and added a multi-column index so query planner did index-only can that is fixed the issue and query completed in 1Min.Best regards,Rj On Tuesday, June 16, 2020, 01:47:21 PM PDT, Justin Pryzby <[email protected]> wrote: On Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:> I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.Is this related to last week's question ?https://www.postgresql.org/message-id/1211705382.726951.1592174752720%40mail.yahoo.comWas that issue resolved ?I didn't see answers to a few questions I asked there.> select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"> from \"cms_prospects\".PROSPECT T0> --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"> left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" > > \"Merge Left Join  (cost=55.96..18147747.08 rows=213620928 width=20)\"> \"  Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"> \"  ->  Index Scan using pk_prospect on prospect t0  (cost=0.57..10831606.89 rows=213620928 width=16)\"> \"  ->  Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2  (cost=0.57..5013756.93 rows=260652064 width=12)\"> > Any suggestions or help would be highly appreciated. 
> > Best regards,> Rj", "msg_date": "Tue, 16 Jun 2020 20:57:43 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "Hi Michael,\n\nSorry, I missed table structure,\n\n\nexplain select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\nfrom \"cms_prospects\".PROSPECT T0\ninner join public.t1680035748gcccqqdpmrblxp33_bkp T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\nleft join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\";\n\n\n\n\"Hash Join (cost=1417.48..21353422.52 rows=213620928 width=20)\"\n\" Hash Cond: ((t0.individual_entity_proxy_id)::numeric = t1.individual_entity_proxy_id)\"\n\" -> Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\n\" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n\" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\n\" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\n\" -> Hash (cost=741.79..741.79 rows=49579 width=8)\"\n\" -> Seq Scan on t1680035748gcccqqdpmrblxp33_bkp t1 (cost=0.00..741.79 rows=49579 width=8)\"\n\n--T0\n\nCREATE TABLE cms_prospects.prospect\n(\n individual_entity_proxy_id bigint NOT NULL,\n household_entity_proxy_id bigint,\n individual_personal_link_sid bigint NOT NULL,\n city_name character varying(100) COLLATE pg_catalog.\"default\",\n state_prov_cd character varying(40) COLLATE pg_catalog.\"default\",\n pstl_code character varying(40) COLLATE pg_catalog.\"default\",\n npa integer,\n nxx integer,\n email_domain character varying(400) COLLATE pg_catalog.\"default\",\n email_preference character varying(40) COLLATE pg_catalog.\"default\",\n direct_mail_preference character varying(40) COLLATE pg_catalog.\"default\",\n profane_wrd_ind character(1) COLLATE pg_catalog.\"default\",\n tmo_ofnsv_name_ind character(1) COLLATE pg_catalog.\"default\",\n census_block_id character varying(40) COLLATE pg_catalog.\"default\",\n has_first_name character(1) COLLATE pg_catalog.\"default\",\n has_middle_name character(1) COLLATE pg_catalog.\"default\",\n has_last_name character(1) COLLATE pg_catalog.\"default\",\n has_email_address character(1) COLLATE pg_catalog.\"default\",\n has_individual_address character(1) COLLATE pg_catalog.\"default\",\n email_address_sid bigint,\n person_name_sid bigint,\n physical_address_sid bigint,\n telephone_number_sid bigint,\n last_contacted_email_datetime timestamp without time zone,\n last_contacted_dm_datetime timestamp without time zone,\n last_contacted_digital_datetime timestamp without time zone,\n last_contacted_anychannel_dttm timestamp without time zone,\n hard_bounce_ind integer,\n closest_store_site_id1 character varying(40) COLLATE pg_catalog.\"default\",\n distance_1 numeric(5,2),\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\",\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT pk_prospect PRIMARY KEY (individual_entity_proxy_id)\n);\n\n--T1\nCREATE TABLE public.t1680035748gcccqqdpmrblxp33_bkp(\n individual_entity_proxy_id numeric(20,0));\n\n-- T2 \n\nCREATE TABLE 
cms_prospects.individual_demographic\n(\n individual_entity_proxy_id bigint NOT NULL,\n cstmr_prspct_ind character varying(40) COLLATE pg_catalog.\"default\",\n last_appnd_dttm timestamp without time zone,\n last_sprsn_dttm timestamp without time zone,\n infrrd_gender_code character varying(40) COLLATE pg_catalog.\"default\",\n govt_prison_ind character(1) COLLATE pg_catalog.\"default\",\n tax_bnkrpt_dcsd_ind character(1) COLLATE pg_catalog.\"default\",\n underbank_rank_nbr integer,\n hvy_txn_rank_nbr integer,\n prominence_nbr integer,\n ocptn_code character varying(40) COLLATE pg_catalog.\"default\",\n educ_lvl_nbr integer,\n gender_code character varying(40) COLLATE pg_catalog.\"default\",\n infrrd_hh_rank_nbr integer,\n econmc_stable_nbr integer,\n directv_sbscrbr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n amazon_prm_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n iphone_user_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n smrt_hm_devc_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dog_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n cat_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n msc_cncrt_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dine_out_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n taco_bell_diner_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n auto_insrnc_byr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\",\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT pk_individual_demographic PRIMARY KEY (individual_entity_proxy_id)\n);\n\n\nServer config:\nPostgreSQL v10.11RAM: 380GB\nvCore: 32\nShared_buffers: 65G\nBwork_mem:104857kB\nmaintenance_work_mem:256MB\neffective_cache_size: 160GB\n On Tuesday, June 16, 2020, 01:44:09 PM PDT, Michael Lewis <[email protected]> wrote: \n \n On Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:\n\nI wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n\n\nMore context is needed. 
Please review-\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions  \n Hi Michael,Sorry, I missed table structure,explain select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"from \"cms_prospects\".PROSPECT T0inner join public.t1680035748gcccqqdpmrblxp33_bkp T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\";\"Hash Join (cost=1417.48..21353422.52 rows=213620928 width=20)\"\" Hash Cond: ((t0.individual_entity_proxy_id)::numeric = t1.individual_entity_proxy_id)\"\" -> Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\" -> Hash (cost=741.79..741.79 rows=49579 width=8)\"\" -> Seq Scan on t1680035748gcccqqdpmrblxp33_bkp t1 (cost=0.00..741.79 rows=49579 width=8)\"--T0CREATE TABLE cms_prospects.prospect( individual_entity_proxy_id bigint NOT NULL, household_entity_proxy_id bigint, individual_personal_link_sid bigint NOT NULL, city_name character varying(100) COLLATE pg_catalog.\"default\", state_prov_cd character varying(40) COLLATE pg_catalog.\"default\", pstl_code character varying(40) COLLATE pg_catalog.\"default\", npa integer, nxx integer, email_domain character varying(400) COLLATE pg_catalog.\"default\", email_preference character varying(40) COLLATE pg_catalog.\"default\", direct_mail_preference character varying(40) COLLATE pg_catalog.\"default\", profane_wrd_ind character(1) COLLATE pg_catalog.\"default\", tmo_ofnsv_name_ind character(1) COLLATE pg_catalog.\"default\", census_block_id character varying(40) COLLATE pg_catalog.\"default\", has_first_name character(1) COLLATE pg_catalog.\"default\", has_middle_name character(1) COLLATE pg_catalog.\"default\", has_last_name character(1) COLLATE pg_catalog.\"default\", has_email_address character(1) COLLATE pg_catalog.\"default\", has_individual_address character(1) COLLATE pg_catalog.\"default\", email_address_sid bigint, person_name_sid bigint, physical_address_sid bigint, telephone_number_sid bigint, last_contacted_email_datetime timestamp without time zone, last_contacted_dm_datetime timestamp without time zone, last_contacted_digital_datetime timestamp without time zone, last_contacted_anychannel_dttm timestamp without time zone, hard_bounce_ind integer, closest_store_site_id1 character varying(40) COLLATE pg_catalog.\"default\", distance_1 numeric(5,2), load_dttm timestamp without time zone NOT NULL, updt_dttm timestamp without time zone, md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\", deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, CONSTRAINT pk_prospect PRIMARY KEY (individual_entity_proxy_id));--T1CREATE TABLE public.t1680035748gcccqqdpmrblxp33_bkp( individual_entity_proxy_id numeric(20,0));-- T2 CREATE TABLE cms_prospects.individual_demographic( individual_entity_proxy_id bigint NOT NULL, cstmr_prspct_ind character varying(40) COLLATE pg_catalog.\"default\", last_appnd_dttm timestamp without time zone, last_sprsn_dttm timestamp without time zone, infrrd_gender_code character varying(40) COLLATE 
pg_catalog.\"default\", govt_prison_ind character(1) COLLATE pg_catalog.\"default\", tax_bnkrpt_dcsd_ind character(1) COLLATE pg_catalog.\"default\", underbank_rank_nbr integer, hvy_txn_rank_nbr integer, prominence_nbr integer, ocptn_code character varying(40) COLLATE pg_catalog.\"default\", educ_lvl_nbr integer, gender_code character varying(40) COLLATE pg_catalog.\"default\", infrrd_hh_rank_nbr integer, econmc_stable_nbr integer, directv_sbscrbr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", amazon_prm_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", iphone_user_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", smrt_hm_devc_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", dog_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", cat_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", msc_cncrt_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", dine_out_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", taco_bell_diner_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", auto_insrnc_byr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", load_dttm timestamp without time zone NOT NULL, updt_dttm timestamp without time zone, md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\", deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, CONSTRAINT pk_individual_demographic PRIMARY KEY (individual_entity_proxy_id));Server config:PostgreSQL v10.11RAM: 380GBvCore: 32Shared_buffers: 65GBwork_mem:104857kBmaintenance_work_mem:256MBeffective_cache_size: 160GB On Tuesday, June 16, 2020, 01:44:09 PM PDT, Michael Lewis <[email protected]> wrote: On Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.More context is needed. 
Please review-https://wiki.postgresql.org/wiki/Slow_Query_Questions", "msg_date": "Tue, 16 Jun 2020 21:13:13 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "And here is the explain analyze:\n\nhttps://explain.depesz.com/s/uQGA\n\nThanks!\n On Tuesday, June 16, 2020, 02:13:37 PM PDT, Nagaraj Raj <[email protected]> wrote: \n \n Hi Michael,\n\nSorry, I missed table structure,\n\n\nexplain select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\nfrom \"cms_prospects\".PROSPECT T0\ninner join public.t1680035748gcccqqdpmrblxp33_bkp T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\nleft join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\";\n\n\n\n\"Hash Join (cost=1417.48..21353422.52 rows=213620928 width=20)\"\n\" Hash Cond: ((t0.individual_entity_proxy_id)::numeric = t1.individual_entity_proxy_id)\"\n\" -> Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\n\" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n\" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\n\" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\n\" -> Hash (cost=741.79..741.79 rows=49579 width=8)\"\n\" -> Seq Scan on t1680035748gcccqqdpmrblxp33_bkp t1 (cost=0.00..741.79 rows=49579 width=8)\"\n\n--T0\n\nCREATE TABLE cms_prospects.prospect\n(\n individual_entity_proxy_id bigint NOT NULL,\n household_entity_proxy_id bigint,\n individual_personal_link_sid bigint NOT NULL,\n city_name character varying(100) COLLATE pg_catalog.\"default\",\n state_prov_cd character varying(40) COLLATE pg_catalog.\"default\",\n pstl_code character varying(40) COLLATE pg_catalog.\"default\",\n npa integer,\n nxx integer,\n email_domain character varying(400) COLLATE pg_catalog.\"default\",\n email_preference character varying(40) COLLATE pg_catalog.\"default\",\n direct_mail_preference character varying(40) COLLATE pg_catalog.\"default\",\n profane_wrd_ind character(1) COLLATE pg_catalog.\"default\",\n tmo_ofnsv_name_ind character(1) COLLATE pg_catalog.\"default\",\n census_block_id character varying(40) COLLATE pg_catalog.\"default\",\n has_first_name character(1) COLLATE pg_catalog.\"default\",\n has_middle_name character(1) COLLATE pg_catalog.\"default\",\n has_last_name character(1) COLLATE pg_catalog.\"default\",\n has_email_address character(1) COLLATE pg_catalog.\"default\",\n has_individual_address character(1) COLLATE pg_catalog.\"default\",\n email_address_sid bigint,\n person_name_sid bigint,\n physical_address_sid bigint,\n telephone_number_sid bigint,\n last_contacted_email_datetime timestamp without time zone,\n last_contacted_dm_datetime timestamp without time zone,\n last_contacted_digital_datetime timestamp without time zone,\n last_contacted_anychannel_dttm timestamp without time zone,\n hard_bounce_ind integer,\n closest_store_site_id1 character varying(40) COLLATE pg_catalog.\"default\",\n distance_1 numeric(5,2),\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\",\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT 
NULL,\n CONSTRAINT pk_prospect PRIMARY KEY (individual_entity_proxy_id)\n);\n\n--T1\nCREATE TABLE public.t1680035748gcccqqdpmrblxp33_bkp(\n individual_entity_proxy_id numeric(20,0));\n\n-- T2 \n\nCREATE TABLE cms_prospects.individual_demographic\n(\n individual_entity_proxy_id bigint NOT NULL,\n cstmr_prspct_ind character varying(40) COLLATE pg_catalog.\"default\",\n last_appnd_dttm timestamp without time zone,\n last_sprsn_dttm timestamp without time zone,\n infrrd_gender_code character varying(40) COLLATE pg_catalog.\"default\",\n govt_prison_ind character(1) COLLATE pg_catalog.\"default\",\n tax_bnkrpt_dcsd_ind character(1) COLLATE pg_catalog.\"default\",\n underbank_rank_nbr integer,\n hvy_txn_rank_nbr integer,\n prominence_nbr integer,\n ocptn_code character varying(40) COLLATE pg_catalog.\"default\",\n educ_lvl_nbr integer,\n gender_code character varying(40) COLLATE pg_catalog.\"default\",\n infrrd_hh_rank_nbr integer,\n econmc_stable_nbr integer,\n directv_sbscrbr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n amazon_prm_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n iphone_user_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n smrt_hm_devc_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dog_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n cat_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n msc_cncrt_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n dine_out_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n taco_bell_diner_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n auto_insrnc_byr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\",\n load_dttm timestamp without time zone NOT NULL,\n updt_dttm timestamp without time zone,\n md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\",\n deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL,\n CONSTRAINT pk_individual_demographic PRIMARY KEY (individual_entity_proxy_id)\n);\n\n\nServer config:\nPostgreSQL v10.11RAM: 380GB\nvCore: 32\nShared_buffers: 65G\nBwork_mem:104857kB\nmaintenance_work_mem:256MB\neffective_cache_size: 160GB\n On Tuesday, June 16, 2020, 01:44:09 PM PDT, Michael Lewis <[email protected]> wrote: \n \n On Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:\n\nI wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n\n\nMore context is needed. Please review-\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions  \n And here is the explain analyze:https://explain.depesz.com/s/uQGAThanks! 
On Tuesday, June 16, 2020, 02:13:37 PM PDT, Nagaraj Raj <[email protected]> wrote: Hi Michael,Sorry, I missed table structure,explain select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"from \"cms_prospects\".PROSPECT T0inner join public.t1680035748gcccqqdpmrblxp33_bkp T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\";\"Hash Join (cost=1417.48..21353422.52 rows=213620928 width=20)\"\" Hash Cond: ((t0.individual_entity_proxy_id)::numeric = t1.individual_entity_proxy_id)\"\" -> Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\" -> Hash (cost=741.79..741.79 rows=49579 width=8)\"\" -> Seq Scan on t1680035748gcccqqdpmrblxp33_bkp t1 (cost=0.00..741.79 rows=49579 width=8)\"--T0CREATE TABLE cms_prospects.prospect( individual_entity_proxy_id bigint NOT NULL, household_entity_proxy_id bigint, individual_personal_link_sid bigint NOT NULL, city_name character varying(100) COLLATE pg_catalog.\"default\", state_prov_cd character varying(40) COLLATE pg_catalog.\"default\", pstl_code character varying(40) COLLATE pg_catalog.\"default\", npa integer, nxx integer, email_domain character varying(400) COLLATE pg_catalog.\"default\", email_preference character varying(40) COLLATE pg_catalog.\"default\", direct_mail_preference character varying(40) COLLATE pg_catalog.\"default\", profane_wrd_ind character(1) COLLATE pg_catalog.\"default\", tmo_ofnsv_name_ind character(1) COLLATE pg_catalog.\"default\", census_block_id character varying(40) COLLATE pg_catalog.\"default\", has_first_name character(1) COLLATE pg_catalog.\"default\", has_middle_name character(1) COLLATE pg_catalog.\"default\", has_last_name character(1) COLLATE pg_catalog.\"default\", has_email_address character(1) COLLATE pg_catalog.\"default\", has_individual_address character(1) COLLATE pg_catalog.\"default\", email_address_sid bigint, person_name_sid bigint, physical_address_sid bigint, telephone_number_sid bigint, last_contacted_email_datetime timestamp without time zone, last_contacted_dm_datetime timestamp without time zone, last_contacted_digital_datetime timestamp without time zone, last_contacted_anychannel_dttm timestamp without time zone, hard_bounce_ind integer, closest_store_site_id1 character varying(40) COLLATE pg_catalog.\"default\", distance_1 numeric(5,2), load_dttm timestamp without time zone NOT NULL, updt_dttm timestamp without time zone, md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\", deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, CONSTRAINT pk_prospect PRIMARY KEY (individual_entity_proxy_id));--T1CREATE TABLE public.t1680035748gcccqqdpmrblxp33_bkp( individual_entity_proxy_id numeric(20,0));-- T2 CREATE TABLE cms_prospects.individual_demographic( individual_entity_proxy_id bigint NOT NULL, cstmr_prspct_ind character varying(40) COLLATE pg_catalog.\"default\", last_appnd_dttm timestamp without time zone, last_sprsn_dttm timestamp without time zone, infrrd_gender_code character 
varying(40) COLLATE pg_catalog.\"default\", govt_prison_ind character(1) COLLATE pg_catalog.\"default\", tax_bnkrpt_dcsd_ind character(1) COLLATE pg_catalog.\"default\", underbank_rank_nbr integer, hvy_txn_rank_nbr integer, prominence_nbr integer, ocptn_code character varying(40) COLLATE pg_catalog.\"default\", educ_lvl_nbr integer, gender_code character varying(40) COLLATE pg_catalog.\"default\", infrrd_hh_rank_nbr integer, econmc_stable_nbr integer, directv_sbscrbr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", amazon_prm_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", iphone_user_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", smrt_hm_devc_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", dog_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", cat_ownr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", msc_cncrt_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", dine_out_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", taco_bell_diner_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", auto_insrnc_byr_propnsty_code character varying(40) COLLATE pg_catalog.\"default\", load_dttm timestamp without time zone NOT NULL, updt_dttm timestamp without time zone, md5_chk_sum character varying(200) COLLATE pg_catalog.\"default\", deld_from_src_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, orphan_ind character(1) COLLATE pg_catalog.\"default\" NOT NULL, CONSTRAINT pk_individual_demographic PRIMARY KEY (individual_entity_proxy_id));Server config:PostgreSQL v10.11RAM: 380GBvCore: 32Shared_buffers: 65GBwork_mem:104857kBmaintenance_work_mem:256MBeffective_cache_size: 160GB On Tuesday, June 16, 2020, 01:44:09 PM PDT, Michael Lewis <[email protected]> wrote: On Tue, Jun 16, 2020 at 2:35 PM Nagaraj Raj <[email protected]> wrote:I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.More context is needed. Please review-https://wiki.postgresql.org/wiki/Slow_Query_Questions", "msg_date": "Tue, 16 Jun 2020 21:15:58 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "On Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:\n> I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n> \n> select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\n> from \"cms_prospects\".PROSPECT T0\n> --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\n> left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" \n\nPardon me for saying so, but this query seems silly.\n\nIt's self-joining a table on its PK, which I don't think could ever be useful.\n\nYou do maybe more than 2x as much work, to get 2x as many columns, which are\nall redundant.\n\nCan't you just change \nT2.\"infrrd_hh_rank_nbr\" to T0, and avoid the join ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 16 Jun 2020 17:05:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "På onsdag 17. juni 2020 kl. 
00:05:26, skrev Justin Pryzby <[email protected]\n <mailto:[email protected]>>: \nOn Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:\n > I wrote a simple query, and it is taking too long, not sure what is wrong \nin it, even its not giving EXPLAIN ANALYZE.\n >\n > select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", \nT2.\"infrrd_hh_rank_nbr\"\n > from \"cms_prospects\".PROSPECT T0\n > --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on \nT0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\n > left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on \nT0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\"\n\n Pardon me for saying so, but this query seems silly.\n\n It's self-joining a table on its PK, which I don't think could ever be useful.\n\nWhere is the self-join? \n\n\n--\n Andreas Joseph Krogh", "msg_date": "Wed, 17 Jun 2020 00:10:37 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "On Wed, Jun 17, 2020 at 12:10:37AM +0200, Andreas Joseph Krogh wrote:\n> P� onsdag 17. juni 2020 kl. 00:05:26, skrev Justin Pryzby <[email protected]>: \n> On Tue, Jun 16, 2020 at 08:35:31PM +0000, Nagaraj Raj wrote:\n> > I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n> >\n> > from \"cms_prospects\".PROSPECT T0\n> > left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\"\n> \n> Pardon me for saying so, but this query seems silly.\n> \n> It's self-joining a table on its PK, which I don't think could ever be useful.\n> \n> Where is the self-join? \n\nSorry, I misread.\n\nI see now that \"cms_prospects\" refers to the database.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 16 Jun 2020 17:33:30 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" }, { "msg_contents": "On Tue, 2020-06-16 at 20:35 +0000, Nagaraj Raj wrote:\n> I wrote a simple query, and it is taking too long, not sure what is wrong in it, even its not giving EXPLAIN ANALYZE.\n> \n> select T0.\"physical_address_sid\", T0.\"individual_entity_proxy_id\", T2.\"infrrd_hh_rank_nbr\"\n> from \"cms_prospects\".PROSPECT T0\n> --inner join \"sas_prs_tmp\".DEDUPE3583E3F18 T1 on T0.\"individual_entity_proxy_id\" = T1.\"individual_entity_proxy_id\"\n> left join \"cms_prospects\".INDIVIDUAL_DEMOGRAPHIC T2 on T0.\"individual_entity_proxy_id\" = T2.\"individual_entity_proxy_id\" \n> \n> \n> \"Merge Left Join (cost=55.96..18147747.08 rows=213620928 width=20)\"\n> \" Merge Cond: (t0.individual_entity_proxy_id = t2.individual_entity_proxy_id)\"\n> \" -> Index Scan using pk_prospect on prospect t0 (cost=0.57..10831606.89 rows=213620928 width=16)\"\n> \" -> Index Only Scan using indxp_individual_demo_infrrd_hh_rank_nbr on individual_demographic t2 (cost=0.57..5013756.93 rows=260652064 width=12)\"\n> \n> \n> \n> Any suggestions or help would be highly appreciated. 
\n\nThe only potential improvement I can see is to strive for an\n\"index only scan\" on \"prospect\".\n\nFor that, you'd have to add an INCLUDE clause to \"pk_prospect\"\nso that \"physical_address_sid\" and \"individual_entity_proxy_id\"\nare included and VACUUM the table.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 17 Jun 2020 11:23:52 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simple query running for ever" } ]
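
A minimal sketch of what Laurenz suggests. The index names below are made up and the statements are untested against the poster's schema; rather than rebuilding the primary key itself, a separate covering index is the least invasive way to try it:

    -- PostgreSQL 11 or later supports covering indexes via INCLUDE:
    CREATE INDEX CONCURRENTLY prospect_proxy_id_covering
        ON cms_prospects.prospect (individual_entity_proxy_id)
        INCLUDE (physical_address_sid);

    -- The poster reports PostgreSQL v10.11, which has no INCLUDE clause;
    -- a plain two-column b-tree gives a similar index-only-scan effect there:
    CREATE INDEX CONCURRENTLY prospect_proxy_id_addr
        ON cms_prospects.prospect (individual_entity_proxy_id, physical_address_sid);

    -- Keep the visibility map current so index-only scans stay cheap:
    VACUUM ANALYZE cms_prospects.prospect;

Either index lets the scan of prospect return physical_address_sid and individual_entity_proxy_id without visiting the heap, which is the point of the suggestion.
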
[ { "msg_contents": "Hello,\n\nplease consider the following SQL query:\n\nSELECT * FROM \"transactions\" WHERE \n\t\"account\" IN (SELECT \"ID\" FROM \"accounts\" WHERE \"name\" ~~* '%test%') OR\n\t\"contract\" IN (SELECT \"ID\" FROM \"contracts\" WHERE \"name\" ~~* '%test%')\n\nThis yields the following plan on Postgres 11:\n\nSeq Scan on transactions (cost=67.21..171458.03 rows=1301316 width=1206)\n Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n SubPlan 1\n -> Bitmap Heap Scan on accounts (cost=33.36..61.16 rows=46 width=4)\n Recheck Cond: ((name)::text ~~* '%test%'::text)\n -> Bitmap Index Scan on s_accounts (cost=0.00..33.35 rows=46 width=0)\n Index Cond: ((name)::text ~~* '%test%'::text)\n SubPlan 2\n -> Seq Scan on contracts (cost=0.00..5.93 rows=5 width=4)\n Filter: ((name)::text ~~* '%test%'::text)\n\nSo the where clause of this query has just two subplans OR-ed together, one is estimated to yield 46 rows and one is estimated to yield 5 rows.\nI'd expect the total rows for the seqscan to be estimated at 46 then, following the logic that rows_seqscan = max(rows_subplan1, rows_subplan2). As you can see, the optimizer estimates a whopping 1301316 rows instead.\n\nI am absolutely aware that those are hashed sub plans below a seqscan and that Postgres therefore has to scan all tuples of the table. But the problem is that upper nodes (which are excluded from this example for simplicity) think they will receive 1301316 rows from the seqscan, when in fact they will probably only see a hand full, which the planner could have (easily?) deduced by taking the greater of the two subplan row estimates.\n\nWhat am I missing, or is this perhaps a shortfall of the planner?\n\nThanks,\n\nBen\n\n-- \n\nBejamin Coutu\[email protected]\n\nZeyOS GmbH & Co. KG\nhttp://www.zeyos.com\n\n\n", "msg_date": "Fri, 19 Jun 2020 17:12:31 +0200", "msg_from": "\"Benjamin Coutu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Unclamped row estimates whith OR-ed subplans" }, { "msg_contents": "On Fri, 2020-06-19 at 17:12 +0200, Benjamin Coutu wrote:\n> please consider the following SQL query:\n> \n> SELECT * FROM \"transactions\" WHERE \n> \"account\" IN (SELECT \"ID\" FROM \"accounts\" WHERE \"name\" ~~* '%test%') OR\n> \"contract\" IN (SELECT \"ID\" FROM \"contracts\" WHERE \"name\" ~~* '%test%')\n> \n> This yields the following plan on Postgres 11:\n> \n> Seq Scan on transactions (cost=67.21..171458.03 rows=1301316 width=1206)\n> Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n> SubPlan 1\n> -> Bitmap Heap Scan on accounts (cost=33.36..61.16 rows=46 width=4)\n> Recheck Cond: ((name)::text ~~* '%test%'::text)\n> -> Bitmap Index Scan on s_accounts (cost=0.00..33.35 rows=46 width=0)\n> Index Cond: ((name)::text ~~* '%test%'::text)\n> SubPlan 2\n> -> Seq Scan on contracts (cost=0.00..5.93 rows=5 width=4)\n> Filter: ((name)::text ~~* '%test%'::text)\n> \n> So the where clause of this query has just two subplans OR-ed together, one is estimated to yield 46 rows and one is estimated to yield 5 rows.\n> I'd expect the total rows for the seqscan to be estimated at 46 then, following the logic that rows_seqscan = max(rows_subplan1, rows_subplan2). As you can see, the optimizer estimates a whopping\n> 1301316 rows instead.\n> \n> I am absolutely aware that those are hashed sub plans below a seqscan and that Postgres therefore has to scan all tuples of the table. 
But the problem is that upper nodes (which are excluded from\n> this example for simplicity) think they will receive 1301316 rows from the seqscan, when in fact they will probably only see a hand full, which the planner could have (easily?) deduced by taking the\n> greater of the two subplan row estimates.\n> \n> What am I missing, or is this perhaps a shortfall of the planner?\n\nThe subplans are executed *for each row* found in \"transactions\",\nand the estimate on the subplans is *per execution*.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 19 Jun 2020 17:55:27 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" }, { "msg_contents": "\"Benjamin Coutu\" <[email protected]> writes:\n> please consider the following SQL query:\n\n> SELECT * FROM \"transactions\" WHERE \n> \t\"account\" IN (SELECT \"ID\" FROM \"accounts\" WHERE \"name\" ~~* '%test%') OR\n> \t\"contract\" IN (SELECT \"ID\" FROM \"contracts\" WHERE \"name\" ~~* '%test%')\n\n> This yields the following plan on Postgres 11:\n\n> Seq Scan on transactions (cost=67.21..171458.03 rows=1301316 width=1206)\n> Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n> SubPlan 1\n> -> Bitmap Heap Scan on accounts (cost=33.36..61.16 rows=46 width=4)\n> Recheck Cond: ((name)::text ~~* '%test%'::text)\n> -> Bitmap Index Scan on s_accounts (cost=0.00..33.35 rows=46 width=0)\n> Index Cond: ((name)::text ~~* '%test%'::text)\n> SubPlan 2\n> -> Seq Scan on contracts (cost=0.00..5.93 rows=5 width=4)\n> Filter: ((name)::text ~~* '%test%'::text)\n\n> So the where clause of this query has just two subplans OR-ed together, one is estimated to yield 46 rows and one is estimated to yield 5 rows.\n> I'd expect the total rows for the seqscan to be estimated at 46 then, following the logic that rows_seqscan = max(rows_subplan1, rows_subplan2). As you can see, the optimizer estimates a whopping 1301316 rows instead.\n\nNo. The subplan estimates are for the number of rows produced by one\nexecution of the subplan, ie the numbers of \"accounts\" or \"contracts\"\nrows that match those inner WHERE conditions. This has very little\na-priori relationship to the number of \"transactions\" rows that will\nsatisfy the outer WHERE condition. If we knew that transactions.account\nand transactions.contract were unique columns, then we might be able\nto say that there shouldn't be more than one outer match per subplan\nresult row ... but you didn't say that, and it seems unlikely.\n\n(Having said that, I think that the estimates for these cases very\npossibly are quite stupid. But that doesn't mean that 46+5 would\nbe the right answer.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 12:04:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" }, { "msg_contents": "On Friday, June 19, 2020, Laurenz Albe <[email protected]> wrote:\n\n>\n> > I am absolutely aware that those are hashed sub plans below a seqscan\n> and that Postgres therefore has to scan all tuples of the table. But the\n> problem is that upper nodes (which are excluded from\n> > this example for simplicity) think they will receive 1301316 rows from\n> the seqscan, when in fact they will probably only see a hand full, which\n> the planner could have (easily?) 
deduced by taking the\n> > greater of the two subplan row estimates.\n> >\n> > What am I missing, or is this perhaps a shortfall of the planner?\n>\n> The subplans are executed *fpr each row* found in \"transactions\",\n> and the estimate on the subplans is *per execution\".\n>\n\nI understood Tom’s nearby answer but not this one. This seems to be\nreferring to correlated subplans which the examples are not.\n\nDavid J.\n\nOn Friday, June 19, 2020, Laurenz Albe <[email protected]> wrote:\n> I am absolutely aware that those are hashed sub plans below a seqscan and that Postgres therefore has to scan all tuples of the table. But the problem is that upper nodes (which are excluded from\n> this example for simplicity) think they will receive 1301316 rows from the seqscan, when in fact they will probably only see a hand full, which the planner could have (easily?) deduced by taking the\n> greater of the two subplan row estimates.\n> \n> What am I missing, or is this perhaps a shortfall of the planner?\n\nThe subplans are executed *fpr each row* found in \"transactions\",\nand the estimate on the subplans is *per execution\".\nI understood Tom’s nearby answer but not this one.  This seems to be referring to correlated subplans which the examples are not.David J.", "msg_date": "Fri, 19 Jun 2020 09:14:57 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" } ]
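
For what it is worth, the 1301316-row estimate in the plan above is consistent with the planner simply applying a default selectivity of 0.5 to each hashed-subplan test and combining them with the usual OR formula (Tom spells out the 0.5 default further down the thread):

    selectivity(A OR B) = 0.5 + 0.5 - 0.5 * 0.5 = 0.75
    0.75 * 1735088 rows in "transactions" = 1301316

(1735088 is the pg_class.reltuples figure Benjamin reports below.) In other words, the estimate is not derived from the 46-row and 5-row subplan estimates at all.
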
[ { "msg_contents": "> No. The subplan estimates are for the number of rows produced by one\n> execution of the subplan, ie the numbers of \"accounts\" or \"contracts\"\n> rows that match those inner WHERE conditions. This has very little\n> a-priori relationship to the number of \"transactions\" rows that will\n> satisfy the outer WHERE condition. If we knew that transactions.account\n> and transactions.contract were unique columns, then we might be able\n> to say that there shouldn't be more than one outer match per subplan\n> result row ... but you didn't say that, and it seems unlikely.\n\nThanks Tom, I understand your point.\n\nI don't want to waste your time but maybe there is room for improvement as both \"account\" and \"contract\" are highly distinct and the individual subplan estimates are quite accurate:\n\nSeq Scan on transactions (cost=67.81..171458.63 rows=1301316 width=1206) (actual time=69.418..917.594 rows=112 loops=1)\n Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n Rows Removed by Filter: 1792937\n SubPlan 1\n -> Bitmap Heap Scan on accounts (cost=33.96..61.76 rows=46 width=4) (actual time=3.053..3.292 rows=111 loops=1)\n Recheck Cond: ((name)::text ~~* '%test%'::text)\n Rows Removed by Index Recheck: 4\n Heap Blocks: exact=104\n -> Bitmap Index Scan on s_accounts (cost=0.00..33.95 rows=46 width=0) (actual time=0.505..0.505 rows=118 loops=1)\n Index Cond: ((name)::text ~~* '%test%'::text)\n SubPlan 2\n -> Seq Scan on contracts (cost=0.00..5.93 rows=5 width=4) (actual time=2.531..2.836 rows=4 loops=1)\n Filter: ((name)::text ~~* '%test%'::text)\n Rows Removed by Filter: 272\n\nFor comparison here are the plans for the queries with the individual where clauses:\n\nSELECT * FROM \"transactions\" WHERE \"account\" IN (SELECT \"ID\" FROM \"accounts\" WHERE \"name\" ~~* '%test%')\n\nNested Loop (cost=34.38..488.93 rows=155 width=1206) (actual time=0.599..1.393 rows=112 loops=1)\n -> Bitmap Heap Scan on accounts (cost=33.96..61.76 rows=46 width=4) (actual time=0.541..0.796 rows=111 loops=1)\n Recheck Cond: ((name)::text ~~* '%test%'::text)\n Rows Removed by Index Recheck: 4\n Heap Blocks: exact=104\n -> Bitmap Index Scan on s_accounts (cost=0.00..33.95 rows=46 width=0) (actual time=0.521..0.521 rows=118 loops=1)\n Index Cond: ((name)::text ~~* '%test%'::text)\n -> Index Scan using fk_transactions_account on transactions (cost=0.43..9.08 rows=21 width=1206) (actual time=0.004..0.005 rows=1 loops=111)\n Index Cond: (account = accounts.\"ID\")\n\nSELECT * FROM \"transactions\" WHERE \"contract\" IN (SELECT \"ID\" FROM \"contracts\" WHERE \"name\" ~~* '%test%')\n\nNested Loop (cost=3.76..10.10 rows=31662 width=1206) (actual time=0.082..0.082 rows=0 loops=1)\n -> Bitmap Heap Scan on contracts (cost=3.64..5.74 rows=5 width=4) (actual time=0.069..0.075 rows=4 loops=1)\n Recheck Cond: ((name)::text ~~* '%test%'::text)\n Heap Blocks: exact=2\n -> Bitmap Index Scan on s_contracts (cost=0.00..3.64 rows=5 width=0) (actual time=0.060..0.060 rows=4 loops=1)\n Index Cond: ((name)::text ~~* '%test%'::text)\n -> Index Scan using fk_transactions_contract on transactions (cost=0.12..0.86 rows=1 width=1206) (actual time=0.001..0.001 rows=0 loops=4)\n Index Cond: (contract = contracts.\"ID\")\n\nThe statistics for the columns are:\n\nSELECT attname, null_frac, n_distinct from pg_stats WHERE tablename = 'transactions' AND attname IN ('account', 'contract')\n\ntransactions.account: null_frac=0.025 n_distinct=80277\ntransactions.contract: null_frac=1 n_distinct=0 (there are basically no non-null 
values for field \"contract\" in transactions)\n\nAccording to pg_class.reltuples the table \"transactions\" has 1735088 rows.\n\nI'd naively expect the selectivity for an OR of those two hashed subplans given uniform distribution to be:\n\nrows_total = \n\t((rows_transactions * (1 - null_frac_account)) / n_distinct_account) * expected_rows_from_subplan1 +\n\t((rows_transactions * (1 - null_frac_contract)) / n_distinct_contract) * expected_rows_from_subplan2\n\n=> rows_total = \n\t((1735088 * (1 - 0.025)) / 80277) * 46 +\n\t((1735088 * (1 - 1)) / 0) * 5\n\n=> rows_total = 969 + 0 /* no non-null values for contract field */\n\nPlease forgive the sloppy math but something along this line could be promising.\n\nBtw, I don't quite understand why the nested loop on contract only is expected to yield 31662 rows, when the null_frac of field transactions.contract is 1. Shouldn't that indicate zero rows or some kind of default minimum estimate for that query?\n\nThanks again!\n\nBenjamin Coutu\n\n\n", "msg_date": "Fri, 19 Jun 2020 19:33:31 +0200", "msg_from": "\"Benjamin Coutu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" }, { "msg_contents": "\"Benjamin Coutu\" <[email protected]> writes:\n> I don't want to waste your time but maybe there is room for improvement as both \"account\" and \"contract\" are highly distinct and the individual subplan estimates are quite accurate:\n\nYeah, as I said, the estimates you're getting for the OR'd subplans are\npretty stupid. Once you throw the OR in there, it's not possible to\nconvert the IN clauses to semi-joins, so they just stay as generic\nsubplans. It looks like we have exactly zero intelligence about the\ngeneric case --- unless I'm missing something in clause_selectivity,\nyou just end up with a default 0.5 selectivity estimate. So yeah,\nthere's a lot of room for improvement, whenever anyone finds some\nround tuits to work on that.\n\nWhile you're waiting, you might think about recasting the query to\navoid the OR. Perhaps you could do a UNION of two scans of the\ntransactions table?\n\n> Btw, I don't quite understand why the nested loop on contract only is expected to yield 31662 rows, when the null_frac of field transactions.contract is 1. Shouldn't that indicate zero rows or some kind of default minimum estimate for that query?\n\nThat I don't understand. 
I get a minimal rowcount estimate for an\nall-nulls outer table, as long as I'm using just one IN rather than\nan OR:\n\nregression=# create table contracts (id int);\nCREATE TABLE\nregression=# insert into contracts values(1),(2),(3),(4);\nINSERT 0 4\nregression=# analyze contracts ;\nANALYZE\nregression=# create table transactions (contract int);\nCREATE TABLE\nregression=# insert into transactions select null from generate_series(1,100000);\nINSERT 0 100000\nregression=# analyze transactions;\nANALYZE\nregression=# explain select * from transactions where contract in (select id from contracts);\n QUERY PLAN \n--------------------------------------------------------------------------\n Hash Semi Join (cost=1.09..1607.59 rows=1 width=4)\n Hash Cond: (transactions.contract = contracts.id)\n -> Seq Scan on transactions (cost=0.00..1344.00 rows=100000 width=4)\n -> Hash (cost=1.04..1.04 rows=4 width=4)\n -> Seq Scan on contracts (cost=0.00..1.04 rows=4 width=4)\n(5 rows)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 14:16:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" }, { "msg_contents": ">\n> While you're waiting, you might think about recasting the query to\n> avoid the OR. Perhaps you could do a UNION of two scans of the\n> transactions table?\n>\n\nMinor note- use UNION ALL to avoid the dedupe work if you already know\nthose will be distinct sets, or having duplicates is fine.\n\nWhile you're waiting, you might think about recasting the query to\navoid the OR.  Perhaps you could do a UNION of two scans of the\ntransactions table?Minor note- use UNION ALL to avoid the dedupe work if you already know those will be distinct sets, or having duplicates is fine.", "msg_date": "Fri, 19 Jun 2020 12:37:12 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" } ]
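
A sketch of the rewrite Tom and Michael are suggesting (untested; the SELECT list and formatting are illustrative only):

    SELECT * FROM "transactions"
     WHERE "account" IN (SELECT "ID" FROM "accounts" WHERE "name" ~~* '%test%')
    UNION
    SELECT * FROM "transactions"
     WHERE "contract" IN (SELECT "ID" FROM "contracts" WHERE "name" ~~* '%test%');

Each IN clause can then be planned as an ordinary semi-join, much like the two single-condition plans Benjamin posted above. UNION ALL (per Michael's note) skips the deduplication step, but it can return the same transaction twice if a row matches both conditions; plain UNION keeps the original "each row at most once" behaviour at the cost of a sort or hash on the combined output.
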
[ { "msg_contents": "Hi PostgreSQL users,\n\nI was looking at a slow query in our CMDB that using postgresql-12.3 as its\nbackend. I since I am using the pg_trgm extension elsewhere I decided to give\nit a try. To my surprise, the query plan did not change. But when I disabled\nthe index scan I got the much, much faster scan using a bitmap index scan.\nAny ideas about why that is being chosen? Here are the details:\n\nshared_buffers = 12GB\nwork_mem = 128MB\nmaintenance_work_mem = 2GB\neffective_io_concurrency = 200\nmax_worker_processes = 24\nmax_parallel_maintenance_workers = 4\nmax_parallel_workers_per_gather = 4\nmax_parallel_workers = 24 \nrandom_page_cost = 1.1\nseq_page_cost = 1.0\neffective_cache_size = 36GB\ndefault_statistics_target = 500\nfrom_collapse_limit = 30\njoin_collapse_limit = 30\n\nSlow version with index scan:\n\n# explain analyze SELECT DISTINCT main.* FROM Articles main JOIN ObjectCustomFieldValues ObjectCustomFieldValues_1 ON ( ObjectCustomFieldValues_1.Disabled = '0' ) AND ( ObjectCustomFieldValues_1.ObjectId = main.id ) WHERE (ObjectCustomFieldValues_1.LargeContent ILIKE '%958575%' OR ObjectCustomFieldValues_1.Content ILIKE '%958575%') AND (main.Disabled = '0') ORDER BY main.SortOrder ASC, main.Name ASC;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=892.65..892.68 rows=1 width=137) (actual time=21165.464..21165.464 rows=0 loops=1)\n -> Sort (cost=892.65..892.66 rows=1 width=137) (actual time=21165.462..21165.462 rows=0 loops=1)\n Sort Key: main.sortorder, main.name, main.id, main.summary, main.class, main.parent, main.uri, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 25kB\n -> Merge Join (cost=0.71..892.64 rows=1 width=137) (actual time=21165.453..21165.453 rows=0 loops=1)\n Merge Cond: (main.id = objectcustomfieldvalues_1.objectid)\n -> Index Scan using articles_pkey on articles main (cost=0.14..9.08 rows=142 width=137) (actual time=0.007..0.007 rows=1 loops=1)\n Filter: (disabled = '0'::smallint)\n -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n Rows Removed by Filter: 19030904\n Planning Time: 1.198 ms\n Execution Time: 21165.552 ms\n(13 rows)\n\nTime: 21167.239 ms (00:21.167)\n\n\nFast version with enable_indexscan = 0:\n\n# explain analyze SELECT DISTINCT main.* FROM Articles main JOIN ObjectCustomFieldValues ObjectCustomFieldValues_1 ON ( ObjectCustomFieldValues_1.Disabled = '0' ) AND ( ObjectCustomFieldValues_1.ObjectId = main.id ) WHERE (ObjectCustomFieldValues_1.LargeContent ILIKE '%958575%' OR ObjectCustomFieldValues_1.Content ILIKE '%958575%') AND (main.Disabled = '0') ORDER BY main.SortOrder ASC, main.Name ASC;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1315.42..1315.45 rows=1 width=137) (actual time=0.306..0.306 rows=0 loops=1)\n -> Sort (cost=1315.42..1315.43 rows=1 width=137) (actual time=0.305..0.305 rows=0 loops=1)\n Sort Key: main.sortorder, main.name, main.id, main.summary, main.class, main.parent, 
main.uri, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=52.89..1315.41 rows=1 width=137) (actual time=0.296..0.297 rows=0 loops=1)\n Hash Cond: (objectcustomfieldvalues_1.objectid = main.id)\n -> Bitmap Heap Scan on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=45.27..1305.40 rows=915 width=4) (actual time=0.213..0.213 rows=0 loops=1)\n Recheck Cond: ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text))\n Filter: (disabled = 0)\n -> BitmapOr (cost=45.27..45.27 rows=1136 width=0) (actual time=0.210..0.210 rows=0 loops=1)\n -> Bitmap Index Scan on objectcustomfieldvalues_largecontent_trgm (cost=0.00..15.40 rows=1 width=0) (actual time=0.041..0.041 rows=0 loops=1)\n Index Cond: (largecontent ~~* '%958575%'::text)\n -> Bitmap Index Scan on objectcustomfieldvalues_content_trgm (cost=0.00..29.41 rows=1135 width=0) (actual time=0.168..0.168 rows=0 loops=1)\n Index Cond: ((content)::text ~~* '%958575%'::text)\n -> Hash (cost=5.84..5.84 rows=142 width=137) (actual time=0.079..0.079 rows=146 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n -> Seq Scan on articles main (cost=0.00..5.84 rows=142 width=137) (actual time=0.010..0.053 rows=146 loops=1)\n Filter: (disabled = '0'::smallint)\n Rows Removed by Filter: 5\n Planning Time: 1.308 ms\n Execution Time: 0.356 ms\n(21 rows)\n\nTime: 2.113 ms\n\nAnd the schema information:\n\n# \\d articles\n Table \"public.articles\"\n Column | Type | Collation | Nullable | Default \n---------------+-----------------------------+-----------+----------+--------------------------------------\n id | integer | | not null | nextval('articles_id_seq'::regclass)\n name | character varying(255) | | not null | ''::character varying\n summary | character varying(255) | | not null | ''::character varying\n sortorder | integer | | not null | 0\n class | integer | | not null | 0\n parent | integer | | not null | 0\n uri | character varying(255) | | | \n creator | integer | | not null | 0\n created | timestamp without time zone | | | \n lastupdatedby | integer | | not null | 0\n lastupdated | timestamp without time zone | | | \n disabled | smallint | | not null | 0\nIndexes:\n \"articles_pkey\" PRIMARY KEY, btree (id)\n\n# \\d objectcustomfieldvalues\n Table \"public.objectcustomfieldvalues\"\n Column | Type | Collation | Nullable | Default \n-----------------+-----------------------------+-----------+----------+---------------------------------------------------------\n id | integer | | not null | nextval('ticketcustomfieldvalues_id_s'::text::regclass)\n objectid | integer | | not null | \n customfield | integer | | not null | \n content | character varying(255) | | | \n creator | integer | | not null | 0\n created | timestamp without time zone | | | \n lastupdatedby | integer | | not null | 0\n lastupdated | timestamp without time zone | | | \n objecttype | character varying(255) | | not null | \n largecontent | text | | | \n contenttype | character varying(80) | | | \n contentencoding | character varying(80) | | | \n sortorder | integer | | not null | 0\n disabled | integer | | not null | 0\nIndexes:\n \"ticketcustomfieldvalues_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"objectcustomfieldvalues1\" btree (customfield, objecttype, objectid, content)\n \"objectcustomfieldvalues2\" btree (customfield, objecttype, objectid)\n \"objectcustomfieldvalues3\" btree (objectid, objecttype)\n \"objectcustomfieldvalues4\" btree (id) WHERE id IS NULL OR id = 0\n 
\"objectcustomfieldvalues_content_trgm\" gin (content gin_trgm_ops)\n \"objectcustomfieldvalues_largecontent_trgm\" gin (largecontent gin_trgm_ops)\n \"ticketcustomfieldvalues1\" btree (customfield, objectid, content)\n\nAny suggestions would be appreciated.\n\nRegards,\nKen\n\n\n", "msg_date": "Fri, 19 Jun 2020 14:49:14 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> I was looking at a slow query in our CMDB that using postgresql-12.3 as its\n> backend. I since I am using the pg_trgm extension elsewhere I decided to give\n> it a try. To my surprise, the query plan did not change. But when I disabled\n> the index scan I got the much, much faster scan using a bitmap index scan.\n> Any ideas about why that is being chosen? Here are the details:\n\nIt looks like the planner is being too optimistic about how quickly the\nmergejoin will end:\n\n> -> Merge Join (cost=0.71..892.64 rows=1 width=137) (actual time=21165.453..21165.453 rows=0 loops=1)\n> Merge Cond: (main.id = objectcustomfieldvalues_1.objectid)\n> -> Index Scan using articles_pkey on articles main (cost=0.14..9.08 rows=142 width=137) (actual time=0.007..0.007 rows=1 loops=1)\n> Filter: (disabled = '0'::smallint)\n> -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> Rows Removed by Filter: 19030904\n\nThis merge cost estimate is way lower than the sum of the input scan\nestimates, where normally it would be that sum plus a nontrivial charge\nfor comparisons. So the planner must think that the input scans won't\nrun to completion. Which is something that can happen; merge join\nwill stop as soon as either input is exhausted. But in this case it\nlooks like the objectcustomfieldvalues scan is the one that ran to\ncompletion, while the articles scan had only one row demanded from it.\n(We can see from the other plan that articles has 146 rows satisfying\nthe filter, so that scan must have been shut down before completion.)\nThe planner must have been expecting the other way around, with not\nvery much of the expensive objectcustomfieldvalues scan actually getting\ndone.\n\nThe reason for such an estimation error usually is that the maximum\njoin key values recorded in pg_stats are off: the join side that is\ngoing to be exhausted is the one with the smaller max join key.\n\"articles\" seems to be small enough that the stats for it will be\nexact, so your problem is a poor estimate of the max value of\nobjectcustomfieldvalues.objectid. You might try raising the statistics\ntarget for that table. 
Or maybe it's just that ANALYZE hasn't been\ndone lately on one table or the other?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 16:11:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Fri, Jun 19, 2020 at 04:11:10PM -0400, Tom Lane wrote:\n> \n> It looks like the planner is being too optimistic about how quickly the\n> mergejoin will end:\n> \n> > -> Merge Join (cost=0.71..892.64 rows=1 width=137) (actual time=21165.453..21165.453 rows=0 loops=1)\n> > Merge Cond: (main.id = objectcustomfieldvalues_1.objectid)\n> > -> Index Scan using articles_pkey on articles main (cost=0.14..9.08 rows=142 width=137) (actual time=0.007..0.007 rows=1 loops=1)\n> > Filter: (disabled = '0'::smallint)\n> > -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> > Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> > Rows Removed by Filter: 19030904\n> \n> This merge cost estimate is way lower than the sum of the input scan\n> estimates, where normally it would be that sum plus a nontrivial charge\n> for comparisons. So the planner must think that the input scans won't\n> run to completion. Which is something that can happen; merge join\n> will stop as soon as either input is exhausted. But in this case it\n> looks like the objectcustomfieldvalues scan is the one that ran to\n> completion, while the articles scan had only one row demanded from it.\n> (We can see from the other plan that articles has 146 rows satisfying\n> the filter, so that scan must have been shut down before completion.)\n> The planner must have been expecting the other way around, with not\n> very much of the expensive objectcustomfieldvalues scan actually getting\n> done.\n> \n> The reason for such an estimation error usually is that the maximum\n> join key values recorded in pg_stats are off: the join side that is\n> going to be exhausted is the one with the smaller max join key.\n> \"articles\" seems to be small enough that the stats for it will be\n> exact, so your problem is a poor estimate of the max value of\n> objectcustomfieldvalues.objectid. You might try raising the statistics\n> target for that table. Or maybe it's just that ANALYZE hasn't been\n> done lately on one table or the other?\n> \n> \t\t\tregards, tom lane\n\nHi Tod,\n\nThank you for the information and suggestion. I tried bumping the statistics for the\nobjectcustomfieldvalues.objectid column to 2k, 5k and 10k followed by an analyze and\nthe query plan stayed the same. I also analyzed the article table\nrepeatedly and their was no change in the plan. The table articles only has 151 rows\nwhile the objectcustomfieldvalues table has 19031909 rows. 
Any idea\nabout why it is so far off?\n\nRegards,\nKen\n\n\n", "msg_date": "Fri, 19 Jun 2020 15:49:50 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> On Fri, Jun 19, 2020 at 04:11:10PM -0400, Tom Lane wrote:\n>> The reason for such an estimation error usually is that the maximum\n>> join key values recorded in pg_stats are off: the join side that is\n>> going to be exhausted is the one with the smaller max join key.\n>> \"articles\" seems to be small enough that the stats for it will be\n>> exact, so your problem is a poor estimate of the max value of\n>> objectcustomfieldvalues.objectid. You might try raising the statistics\n>> target for that table. Or maybe it's just that ANALYZE hasn't been\n>> done lately on one table or the other?\n\n> Thank you for the information and suggestion. I tried bumping the statistics for the\n> objectcustomfieldvalues.objectid column to 2k, 5k and 10k followed by an analyze and\n> the query plan stayed the same. I also analyzed the article table\n> repeatedly and their was no change in the plan. The table articles only has 151 rows\n> while the objectcustomfieldvalues table has 19031909 rows. Any idea\n> about why it is so far off?\n\nWhat's the actual maximum value of objectcustomfieldvalues.objectid,\nand how does that compare to the endpoint value in the pg_stats\nhistogram for that column? If you've got one outlier in the table,\nit might get missed by ANALYZE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 16:59:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Fri, Jun 19, 2020 at 04:59:15PM -0400, Tom Lane wrote:\n> \n> > Thank you for the information and suggestion. I tried bumping the statistics for the\n> > objectcustomfieldvalues.objectid column to 2k, 5k and 10k followed by an analyze and\n> > the query plan stayed the same. I also analyzed the article table\n> > repeatedly and their was no change in the plan. The table articles only has 151 rows\n> > while the objectcustomfieldvalues table has 19031909 rows. Any idea\n> > about why it is so far off?\n> \n> What's the actual maximum value of objectcustomfieldvalues.objectid,\n> and how does that compare to the endpoint value in the pg_stats\n> histogram for that column? 
If you've got one outlier in the table,\n> it might get missed by ANALYZE.\n> \n> \t\t\tregards, tom lane\n\nHi Tom,\n\nmax(objectcustomfieldvalues.objectid) = 28108423 and here is the\nhistogram for that column:\n\n\n schemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation | most_common_elems | most_common_elem_freqs | elem_count_histogram \n------------+-------------------------+----------+-----------+-----------+-----------+------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------------------+------------------------+----------------------\n public | objectcustomfieldvalues | objectid | f | 0 | 4 | 615227 | {2521539,2621185,2269417,2847802,2956102,2397380,2623765,183974,2566321,2955862,3303717,3303979,3169130,2525623,1840685,2889335,190737,3188380,3303589} | {0.00012,0.000113333335,0.0001,9.3333336e-05,9.3333336e-05,8e-05,8e-05,7.3333336e-05,7.3333336e-05,7.3333336e-05,7.3333336e-05,7.3333336e-05,6.666667e-05,6e-05,4.6666668e-05,4.6666668e-05,4e-05,4e-05,4e-05} | {13,2134,3509,26547,34737,40152,46587,50105,52672,55353,57887,59711,61569,63460,65355,67312,69282,71624,73867,75957,77612,79489,81209,82909,84631,86414,88322,90288,92379,94079,95997,97857,99818,101665,103329,105219,107051,107897,109067,110861,112706,114406,116172,117998,119816,121652,124009,126170,128116,129934,132078,134032,135996,137776,139628,141620,143516,145342,147189,149198,151202,153037,154808,156492,158288,160054,161898,163653,165565,167635,169590,171793,173546,175286,177249,179324,181293,183134,185090,187036,188727,190379,191958,193496,195152,197104,199273,201063,202948,204836,206969,208730,210500,212108,213993,215950,217795,219622,221295,223425,225492,227812,229786,231604,233681,235666,237621,239273,241014,242929,244820,246835,248932,251080,253520,255509,257521,259358,261456,263416,265223,267034,268954,270791,272748,274775,276824,279028,280994,282867,284691,286480,288403,290118,291879,293724,295992,298195,300231,302005,303921,305829,307728,312187,322637,324637,326711,328410,330189,331775,333482,335258,337377,339324,341233,343491,345226,346956,348483,350291,352158,354138,356101,358109,360079,361881,364029,365859,367629,369643,371338,373086,375053,376983,378733,380404,382291,384249,386026,387804,390103,392037,394155,396146,397989,399995,401825,403596,405630,407367,409271,410961,412888,414877,416817,418590,420382,422393,424242,425999,427973,429975,431901,433971,435744,437793,439691,441941,443559,445134,446797,448650,450422,452053,453759,455479,457126,458948,460733,462434,464282,466002,467835,469652,471119,472773,474409,476020,477537,479555,481629,483237,484863,486409,488035,489791,491528,493298,495072,496836,498586,500354,502383,504174,505874,507521,509105,510761,512541,514306,515995,517715,519483,521248,522945,524539,526041,527706,529182,530744,532429,533948,535558,537359,539116,540863,542489,544262,546322,548161,549902,551405,553121,554802,556671,558536,560354,562347,564194,565897,567767,569405,571001,572912,574565,576328,578132,580065,581620,583221,585081,586608,588547,590356,592090,593962,595697,597285,598839,600508,602443,604227,605867,607468,609085,610797,612332,613921,615995,617851,619721,621524,623179,625233,626945,628751,630490,632141,633720,635495,637
468,639500,641534,643506,645470,647717,649410,651203,653202,654833,656600,658617,660219,661861,663708,665443,667334,669170,670976,672711,674301,675804,677526,678907,680449,682221,684006,685693,687199,688838,690608,692189,693817,695615,697315,699054,700889,702630,704168,705826,707609,709307,710996,712999,714688,716716,718429,720331,722148,723922,725591,727357,729083,730848,732700,734386,736115,738141,739960,741521,743385,745319,747126,749002,750831,752764,754524,756279,758293,759958,761685,763366,764965,766557,768415,769979,771693,773446,775194,777188,779097,780810,782669,784398,786016,787695,789447,791420,793006,794838,796736,798655,800602,802817,804735,806387,808323,810183,811806,813674,815637,817375,819044,820907,822761,824890,826798,828762,830733,832618,834526,836655,838640,840602,842547,844741,846843,848834,850872,852630,854937,856685,858524,860329,862328,864262,866146,868297,870363,872379,874256,876114,877886,880210,882175,884423,886698,888682,890722,892866,895162,897253,899320,901578,903360,905209,907171,909164,911048,913052,915185,917251,919283,921343,923906,925787,929684,931600,933410,935385,937920,940047,942195,944602,946624,948614,951946,953948,955964,958135,1676537,2029763,2276887,2488636,2544458,2621609,2891726,3052758,3304313,3693956,27667772} | 0.95726633 | | | \n(1 row)\n\n\nRegards,\nKen\n\n\n", "msg_date": "Fri, 19 Jun 2020 16:23:38 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> On Fri, Jun 19, 2020 at 04:59:15PM -0400, Tom Lane wrote:\n>> What's the actual maximum value of objectcustomfieldvalues.objectid,\n>> and how does that compare to the endpoint value in the pg_stats\n>> histogram for that column? If you've got one outlier in the table,\n>> it might get missed by ANALYZE.\n\n> max(objectcustomfieldvalues.objectid) = 28108423 and here is the\n> histogram for that column:\n\n ... 3304313,3693956,27667772}\n\nHmm, does seem like you have some outlier keys. Are any of the keys in\nthe column you're trying to join to larger than 27667772?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 18:10:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Fri, Jun 19, 2020 at 06:10:34PM -0400, Tom Lane wrote:\n> > max(objectcustomfieldvalues.objectid) = 28108423 and here is the\n> > histogram for that column:\n> \n> ... 3304313,3693956,27667772}\n> \n> Hmm, does seem like you have some outlier keys. Are any of the keys in\n> the column you're trying to join to larger than 27667772?\n> \n> \t\t\tregards, tom lane\n\nHi Tom,\n\nThe only values above 27667772? 
for objectid are:\n\n# select * from objectcustomfieldvalues where objectid > 27667772;\n id | objectid | customfield | content | creator |\ncreated | lastupdatedby | lastupdated | objecttype |\nlargecontent | contenttype | contentencoding | sortorder | disabled \n----------+----------+-------------+------------+---------+---------------------+---------------+---------------------+-----------------+--------------+-------------+-----------------+-----------+----------\n 19012927 | 27667773 | 375 | 2020-05-12 | 3768865 | 2020-05-13\n16:10:39 | 3768865 | 2020-05-13 16:10:39 | RT::Transaction |\n| | | 0 | 0\n 19012928 | 27667774 | 375 | 2020-05-12 | 3768865 | 2020-05-13\n16:10:39 | 3768865 | 2020-05-13 16:10:39 | RT::Transaction |\n| | | 0 | 0\n 19020166 | 27680053 | 375 | 2020-05-14 | 3570362 | 2020-05-14\n14:14:20 | 3570362 | 2020-05-14 14:14:20 | RT::Transaction |\n| | | 0 | 0\n 19025163 | 27688649 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n| | | 0 | 0\n 19025164 | 27688650 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n| | | 0 | 0\n 19025165 | 27688651 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n| | | 0 | 0\n 19025166 | 27688652 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n| | | 0 | 0\n 19025167 | 27688676 | 375 | 2020-05-14 | 3768865 | 2020-05-14\n20:27:29 | 3768865 | 2020-05-14 20:27:29 | RT::Transaction |\n| | | 0 | 0\n 19031374 | 27698358 | 375 | 2020-05-13 | 3768865 | 2020-05-15\n15:32:57 | 3768865 | 2020-05-15 15:32:57 | RT::Transaction |\n| | | 0 | 0\n 19031384 | 27698376 | 375 | 2020-05-14 | 3768865 | 2020-05-15\n15:33:50 | 3768865 | 2020-05-15 15:33:50 | RT::Transaction |\n| | | 0 | 0\n 19031385 | 27698377 | 375 | 2020-05-14 | 3768865 | 2020-05-15\n15:33:50 | 3768865 | 2020-05-15 15:33:50 | RT::Transaction |\n| | | 0 | 0\n 19033205 | 27701391 | 375 | 2020-05-15 | 3197295 | 2020-05-15\n18:21:36 | 3197295 | 2020-05-15 18:21:36 | RT::Transaction |\n| | | 0 | 0\n 19042369 | 27715839 | 375 | 2020-05-18 | 1403795 | 2020-05-18\n14:12:35 | 1403795 | 2020-05-18 14:12:35 | RT::Transaction |\n| | | 0 | 0\n 19047274 | 27723981 | 375 | 2020-05-18 | 3197295 | 2020-05-18\n19:29:14 | 3197295 | 2020-05-18 19:29:14 | RT::Transaction |\n| | | 0 | 0\n 19048566 | 27726800 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048567 | 27726801 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048568 | 27726802 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048569 | 27726803 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048570 | 27726804 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048571 | 27726805 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n| | | 0 | 0\n 19048572 | 27726806 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n| | | 0 | 0\n 19048573 | 27726807 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n| | | 0 | 0\n 19048574 | 27726808 | 
375 | 2020-05-18 | 3768865 | 2020-05-18\n20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n| | | 0 | 0\n 19054502 | 27738410 | 375 | 2020-05-19 | 3197295 | 2020-05-19\n15:01:50 | 3197295 | 2020-05-19 15:01:50 | RT::Transaction |\n| | | 0 | 0\n 19056348 | 27741653 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:01 | 3768865 | 2020-05-19 16:39:01 | RT::Transaction |\n| | | 0 | 0\n 19056349 | 27741654 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:01 | 3768865 | 2020-05-19 16:39:01 | RT::Transaction |\n| | | 0 | 0\n 19056350 | 27741655 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n| | | 0 | 0\n 19056351 | 27741656 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n| | | 0 | 0\n 19056352 | 27741657 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n| | | 0 | 0\n 19056362 | 27741667 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n16:39:29 | 3768865 | 2020-05-19 16:39:29 | RT::Transaction |\n| | | 0 | 0\n 19057464 | 27743793 | 375 | 2020-05-19 | 3197295 | 2020-05-19\n18:03:16 | 3197295 | 2020-05-19 18:03:16 | RT::Transaction |\n| | | 0 | 0\n 19067180 | 27760343 | 375 | 2020-05-20 | 1403795 | 2020-05-20\n18:01:59 | 1403795 | 2020-05-20 18:01:59 | RT::Transaction |\n| | | 0 | 0\n 19067476 | 27760892 | 375 | 2020-05-19 | 3197295 | 2020-05-20\n18:23:48 | 3197295 | 2020-05-20 18:23:48 | RT::Transaction |\n| | | 0 | 0\n 19073560 | 27771129 | 375 | 2020-05-21 | 3197295 | 2020-05-21\n14:15:54 | 3197295 | 2020-05-21 14:15:54 | RT::Transaction |\n| | | 0 | 0\n 19074011 | 27771902 | 375 | 2020-05-21 | 3570362 | 2020-05-21\n15:02:49 | 3570362 | 2020-05-21 15:02:49 | RT::Transaction |\n| | | 0 | 0\n 19081811 | 27784951 | 375 | 2020-05-22 | 2960471 | 2020-05-22\n14:52:40 | 2960471 | 2020-05-22 14:52:40 | RT::Transaction |\n| | | 0 | 0\n 19093560 | 27804234 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n15:00:29 | 3570362 | 2020-05-26 15:00:29 | RT::Transaction |\n| | | 0 | 0\n 19094043 | 27805100 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n15:30:15 | 3570362 | 2020-05-26 15:30:15 | RT::Transaction |\n| | | 0 | 0\n 19094798 | 27806250 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n15:59:22 | 3570362 | 2020-05-26 15:59:22 | RT::Transaction |\n| | | 0 | 0\n 19103803 | 27822098 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n15:15:37 | 3570362 | 2020-05-27 15:15:37 | RT::Transaction |\n| | | 0 | 0\n 19103893 | 27822211 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:11 | 3768865 | 2020-05-27 15:20:11 | RT::Transaction |\n| | | 0 | 0\n 19103894 | 27822212 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103895 | 27822213 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103896 | 27822214 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103897 | 27822215 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103898 | 27822216 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103899 | 27822217 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n| | | 0 | 0\n 19103910 | 27822238 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n15:21:30 | 3570362 | 2020-05-27 
15:21:30 | RT::Transaction |\n| | | 0 | 0\n 19103921 | 27822243 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n| | | 0 | 0\n 19103922 | 27822244 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n| | | 0 | 0\n 19103923 | 27822245 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n| | | 0 | 0\n 19103924 | 27822246 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n| | | 0 | 0\n 19103925 | 27822247 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n| | | 0 | 0\n 19109404 | 27830956 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n20:42:58 | 3570362 | 2020-05-27 20:42:58 | RT::Transaction |\n| | | 0 | 0\n 19109462 | 27831009 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n20:44:12 | 3570362 | 2020-05-27 20:44:12 | RT::Transaction |\n| | | 0 | 0\n 19115179 | 27840467 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n15:28:09 | 3570362 | 2020-05-28 15:28:09 | RT::Transaction |\n| | | 0 | 0\n 19115214 | 27840551 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n15:29:59 | 3570362 | 2020-05-28 15:29:59 | RT::Transaction |\n| | | 0 | 0\n 19118472 | 27845963 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n18:50:57 | 3570362 | 2020-05-28 18:50:57 | RT::Transaction |\n| | | 0 | 0\n 19127210 | 27860753 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n| | | 0 | 0\n 19127211 | 27860754 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n| | | 0 | 0\n 19127212 | 27860755 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n| | | 0 | 0\n 19127213 | 27860756 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n| | | 0 | 0\n 19127214 | 27860757 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n| | | 0 | 0\n 19163910 | 27922577 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n| | | 0 | 0\n 19163911 | 27922578 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n| | | 0 | 0\n 19163912 | 27922579 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n| | | 0 | 0\n 19163913 | 27922580 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n| | | 0 | 0\n 19163914 | 27922582 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n| | | 0 | 0\n 19163915 | 27922583 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n| | | 0 | 0\n 19163916 | 27922584 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n| | | 0 | 0\n 19186439 | 27960807 | 375 | 2020-06-08 | 3197295 | 2020-06-08\n16:18:49 | 3197295 | 2020-06-08 16:18:49 | RT::Transaction |\n| | | 0 | 0\n 19189227 | 27965582 | 375 | 2020-06-08 | 22 | 2020-06-08\n19:24:19 | 22 | 2020-06-08 19:24:19 | RT::Transaction |\n| | | 0 | 0\n 19189269 | 27965637 | 375 | 2020-06-08 | 402 | 2020-06-08\n19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n| | | 0 | 0\n 19189270 | 27965637 | 376 | 22 | 402 | 
2020-06-08\n19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n| | | 0 | 0\n 19189271 | 27965638 | 375 | 2020-06-08 | 402 | 2020-06-08\n19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n| | | 0 | 0\n 19189272 | 27965638 | 376 | 22 | 402 | 2020-06-08\n19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n| | | 0 | 0\n 19193472 | 27972893 | 375 | 2020-06-08 | 3197295 | 2020-06-09\n12:21:50 | 3197295 | 2020-06-09 12:21:50 | RT::Transaction |\n| | | 0 | 0\n 19204287 | 27991617 | 375 | 2020-06-10 | 3197295 | 2020-06-10\n15:52:41 | 3197295 | 2020-06-10 15:52:41 | RT::Transaction |\n| | | 0 | 0\n 19205446 | 27993528 | 375 | 2020-06-10 | 3768865 | 2020-06-10\n17:24:43 | 3768865 | 2020-06-10 17:24:43 | RT::Transaction |\n| | | 0 | 0\n 19226664 | 28019342 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n| | | 0 | 0\n 19226665 | 28019343 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n| | | 0 | 0\n 19226666 | 28019344 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n| | | 0 | 0\n 19226667 | 28019345 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n| | | 0 | 0\n 19233084 | 28030270 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n14:05:00 | 3197295 | 2020-06-12 14:05:00 | RT::Transaction |\n| | | 0 | 0\n 19235815 | 28034687 | 375 | 2020-06-12 | 84 | 2020-06-12\n17:57:02 | 84 | 2020-06-12 17:57:02 | RT::Transaction |\n| | | 0 | 0\n 19236305 | 28035519 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n18:29:25 | 3197295 | 2020-06-12 18:29:25 | RT::Transaction |\n| | | 0 | 0\n 19236386 | 28035692 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n18:36:56 | 3197295 | 2020-06-12 18:36:56 | RT::Transaction |\n| | | 0 | 0\n 19237416 | 28037412 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n19:44:36 | 3197295 | 2020-06-12 19:44:36 | RT::Transaction |\n| | | 0 | 0\n 19238015 | 28038402 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n| | | 0 | 0\n 19238016 | 28038403 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n| | | 0 | 0\n 19238017 | 28038404 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n| | | 0 | 0\n 19238018 | 28038405 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n| | | 0 | 0\n 19238032 | 28038422 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n| | | 0 | 0\n 19238033 | 28038423 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n| | | 0 | 0\n 19238034 | 28038424 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n| | | 0 | 0\n 19238035 | 28038425 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n| | | 0 | 0\n 19240041 | 28042208 | 375 | 2020-06-14 | 1403795 | 2020-06-14\n12:50:47 | 1403795 | 2020-06-14 12:50:47 | RT::Transaction |\n| | | 0 | 0\n 19242958 | 28046818 | 375 | 2020-06-15 | 3570362 | 2020-06-15\n14:38:57 | 3570362 | 2020-06-15 14:38:57 | RT::Transaction |\n| | | 0 | 0\n 19255465 | 28067560 | 375 | 2020-06-16 | 3570362 | 2020-06-16\n18:41:13 | 3570362 | 2020-06-16 18:41:13 | RT::Transaction |\n| | | 0 | 0\n 19279177 | 28108399 | 375 | 
2020-06-18 | 3768865 | 2020-06-19\n17:38:39 | 3768865 | 2020-06-19 17:38:39 | RT::Transaction |\n| | | 0 | 0\n 19279178 | 28108400 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n17:38:39 | 3768865 | 2020-06-19 17:38:39 | RT::Transaction |\n| | | 0 | 0\n 19279179 | 28108401 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n17:38:40 | 3768865 | 2020-06-19 17:38:40 | RT::Transaction |\n| | | 0 | 0\n 19279180 | 28108402 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n17:38:40 | 3768865 | 2020-06-19 17:38:40 | RT::Transaction |\n| | | 0 | 0\n 19279193 | 28108419 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n| | | 0 | 0\n 19279194 | 28108420 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n| | | 0 | 0\n 19279195 | 28108421 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n| | | 0 | 0\n 19279196 | 28108422 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n| | | 0 | 0\n 19279197 | 28108423 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n| | | 0 | 0\n\n\nThey are just the time worked, so I do not understand why it is chosing\nthe crazy path that it does.\n\nRegards,\nKen\n\n\n", "msg_date": "Fri, 19 Jun 2020 17:25:33 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> On Fri, Jun 19, 2020 at 06:10:34PM -0400, Tom Lane wrote:\n>> Hmm, does seem like you have some outlier keys. Are any of the keys in\n>> the column you're trying to join to larger than 27667772?\n\n> The only values above 27667772? for objectid are:\n\nSorry, I meant the other join column, ie articles.id.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 18:30:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Fri, Jun 19, 2020 at 05:25:33PM -0500, Kenneth Marshall wrote:\n> On Fri, Jun 19, 2020 at 06:10:34PM -0400, Tom Lane wrote:\n> > > max(objectcustomfieldvalues.objectid) = 28108423 and here is the\n> > > histogram for that column:\n> > \n> > ... 3304313,3693956,27667772}\n> > \n> > Hmm, does seem like you have some outlier keys. Are any of the keys in\n> > the column you're trying to join to larger than 27667772?\n> > \n> > \t\t\tregards, tom lane\n> \n> Hi Tom,\n> \n> The only values above 27667772? 
for objectid are:\n> \n> # select * from objectcustomfieldvalues where objectid > 27667772;\n> id | objectid | customfield | content | creator |\n> created | lastupdatedby | lastupdated | objecttype |\n> largecontent | contenttype | contentencoding | sortorder | disabled \n> ----------+----------+-------------+------------+---------+---------------------+---------------+---------------------+-----------------+--------------+-------------+-----------------+-----------+----------\n> 19012927 | 27667773 | 375 | 2020-05-12 | 3768865 | 2020-05-13\n> 16:10:39 | 3768865 | 2020-05-13 16:10:39 | RT::Transaction |\n> | | | 0 | 0\n> 19012928 | 27667774 | 375 | 2020-05-12 | 3768865 | 2020-05-13\n> 16:10:39 | 3768865 | 2020-05-13 16:10:39 | RT::Transaction |\n> | | | 0 | 0\n> 19020166 | 27680053 | 375 | 2020-05-14 | 3570362 | 2020-05-14\n> 14:14:20 | 3570362 | 2020-05-14 14:14:20 | RT::Transaction |\n> | | | 0 | 0\n> 19025163 | 27688649 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n> 20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n> | | | 0 | 0\n> 19025164 | 27688650 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n> 20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n> | | | 0 | 0\n> 19025165 | 27688651 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n> 20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n> | | | 0 | 0\n> 19025166 | 27688652 | 375 | 2020-05-13 | 3768865 | 2020-05-14\n> 20:27:04 | 3768865 | 2020-05-14 20:27:04 | RT::Transaction |\n> | | | 0 | 0\n> 19025167 | 27688676 | 375 | 2020-05-14 | 3768865 | 2020-05-14\n> 20:27:29 | 3768865 | 2020-05-14 20:27:29 | RT::Transaction |\n> | | | 0 | 0\n> 19031374 | 27698358 | 375 | 2020-05-13 | 3768865 | 2020-05-15\n> 15:32:57 | 3768865 | 2020-05-15 15:32:57 | RT::Transaction |\n> | | | 0 | 0\n> 19031384 | 27698376 | 375 | 2020-05-14 | 3768865 | 2020-05-15\n> 15:33:50 | 3768865 | 2020-05-15 15:33:50 | RT::Transaction |\n> | | | 0 | 0\n> 19031385 | 27698377 | 375 | 2020-05-14 | 3768865 | 2020-05-15\n> 15:33:50 | 3768865 | 2020-05-15 15:33:50 | RT::Transaction |\n> | | | 0 | 0\n> 19033205 | 27701391 | 375 | 2020-05-15 | 3197295 | 2020-05-15\n> 18:21:36 | 3197295 | 2020-05-15 18:21:36 | RT::Transaction |\n> | | | 0 | 0\n> 19042369 | 27715839 | 375 | 2020-05-18 | 1403795 | 2020-05-18\n> 14:12:35 | 1403795 | 2020-05-18 14:12:35 | RT::Transaction |\n> | | | 0 | 0\n> 19047274 | 27723981 | 375 | 2020-05-18 | 3197295 | 2020-05-18\n> 19:29:14 | 3197295 | 2020-05-18 19:29:14 | RT::Transaction |\n> | | | 0 | 0\n> 19048566 | 27726800 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048567 | 27726801 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048568 | 27726802 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048569 | 27726803 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048570 | 27726804 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048571 | 27726805 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:18 | 3768865 | 2020-05-18 20:23:18 | RT::Transaction |\n> | | | 0 | 0\n> 19048572 | 27726806 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n> | | | 0 | 0\n> 19048573 | 27726807 | 375 | 2020-05-18 | 
3768865 | 2020-05-18\n> 20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n> | | | 0 | 0\n> 19048574 | 27726808 | 375 | 2020-05-18 | 3768865 | 2020-05-18\n> 20:23:19 | 3768865 | 2020-05-18 20:23:19 | RT::Transaction |\n> | | | 0 | 0\n> 19054502 | 27738410 | 375 | 2020-05-19 | 3197295 | 2020-05-19\n> 15:01:50 | 3197295 | 2020-05-19 15:01:50 | RT::Transaction |\n> | | | 0 | 0\n> 19056348 | 27741653 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:01 | 3768865 | 2020-05-19 16:39:01 | RT::Transaction |\n> | | | 0 | 0\n> 19056349 | 27741654 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:01 | 3768865 | 2020-05-19 16:39:01 | RT::Transaction |\n> | | | 0 | 0\n> 19056350 | 27741655 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n> | | | 0 | 0\n> 19056351 | 27741656 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n> | | | 0 | 0\n> 19056352 | 27741657 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:02 | 3768865 | 2020-05-19 16:39:02 | RT::Transaction |\n> | | | 0 | 0\n> 19056362 | 27741667 | 375 | 2020-05-19 | 3768865 | 2020-05-19\n> 16:39:29 | 3768865 | 2020-05-19 16:39:29 | RT::Transaction |\n> | | | 0 | 0\n> 19057464 | 27743793 | 375 | 2020-05-19 | 3197295 | 2020-05-19\n> 18:03:16 | 3197295 | 2020-05-19 18:03:16 | RT::Transaction |\n> | | | 0 | 0\n> 19067180 | 27760343 | 375 | 2020-05-20 | 1403795 | 2020-05-20\n> 18:01:59 | 1403795 | 2020-05-20 18:01:59 | RT::Transaction |\n> | | | 0 | 0\n> 19067476 | 27760892 | 375 | 2020-05-19 | 3197295 | 2020-05-20\n> 18:23:48 | 3197295 | 2020-05-20 18:23:48 | RT::Transaction |\n> | | | 0 | 0\n> 19073560 | 27771129 | 375 | 2020-05-21 | 3197295 | 2020-05-21\n> 14:15:54 | 3197295 | 2020-05-21 14:15:54 | RT::Transaction |\n> | | | 0 | 0\n> 19074011 | 27771902 | 375 | 2020-05-21 | 3570362 | 2020-05-21\n> 15:02:49 | 3570362 | 2020-05-21 15:02:49 | RT::Transaction |\n> | | | 0 | 0\n> 19081811 | 27784951 | 375 | 2020-05-22 | 2960471 | 2020-05-22\n> 14:52:40 | 2960471 | 2020-05-22 14:52:40 | RT::Transaction |\n> | | | 0 | 0\n> 19093560 | 27804234 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n> 15:00:29 | 3570362 | 2020-05-26 15:00:29 | RT::Transaction |\n> | | | 0 | 0\n> 19094043 | 27805100 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n> 15:30:15 | 3570362 | 2020-05-26 15:30:15 | RT::Transaction |\n> | | | 0 | 0\n> 19094798 | 27806250 | 375 | 2020-05-26 | 3570362 | 2020-05-26\n> 15:59:22 | 3570362 | 2020-05-26 15:59:22 | RT::Transaction |\n> | | | 0 | 0\n> 19103803 | 27822098 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n> 15:15:37 | 3570362 | 2020-05-27 15:15:37 | RT::Transaction |\n> | | | 0 | 0\n> 19103893 | 27822211 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:11 | 3768865 | 2020-05-27 15:20:11 | RT::Transaction |\n> | | | 0 | 0\n> 19103894 | 27822212 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 0\n> 19103895 | 27822213 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 0\n> 19103896 | 27822214 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 0\n> 19103897 | 27822215 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 0\n> 19103898 | 27822216 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 
0\n> 19103899 | 27822217 | 375 | 2020-05-26 | 3768865 | 2020-05-27\n> 15:20:12 | 3768865 | 2020-05-27 15:20:12 | RT::Transaction |\n> | | | 0 | 0\n> 19103910 | 27822238 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n> 15:21:30 | 3570362 | 2020-05-27 15:21:30 | RT::Transaction |\n> | | | 0 | 0\n> 19103921 | 27822243 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n> 15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n> | | | 0 | 0\n> 19103922 | 27822244 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n> 15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n> | | | 0 | 0\n> 19103923 | 27822245 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n> 15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n> | | | 0 | 0\n> 19103924 | 27822246 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n> 15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n> | | | 0 | 0\n> 19103925 | 27822247 | 375 | 2020-05-27 | 3768865 | 2020-05-27\n> 15:21:39 | 3768865 | 2020-05-27 15:21:39 | RT::Transaction |\n> | | | 0 | 0\n> 19109404 | 27830956 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n> 20:42:58 | 3570362 | 2020-05-27 20:42:58 | RT::Transaction |\n> | | | 0 | 0\n> 19109462 | 27831009 | 375 | 2020-05-27 | 3570362 | 2020-05-27\n> 20:44:12 | 3570362 | 2020-05-27 20:44:12 | RT::Transaction |\n> | | | 0 | 0\n> 19115179 | 27840467 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n> 15:28:09 | 3570362 | 2020-05-28 15:28:09 | RT::Transaction |\n> | | | 0 | 0\n> 19115214 | 27840551 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n> 15:29:59 | 3570362 | 2020-05-28 15:29:59 | RT::Transaction |\n> | | | 0 | 0\n> 19118472 | 27845963 | 375 | 2020-05-28 | 3570362 | 2020-05-28\n> 18:50:57 | 3570362 | 2020-05-28 18:50:57 | RT::Transaction |\n> | | | 0 | 0\n> 19127210 | 27860753 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n> 17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n> | | | 0 | 0\n> 19127211 | 27860754 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n> 17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n> | | | 0 | 0\n> 19127212 | 27860755 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n> 17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n> | | | 0 | 0\n> 19127213 | 27860756 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n> 17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n> | | | 0 | 0\n> 19127214 | 27860757 | 375 | 2020-05-28 | 3768865 | 2020-05-29\n> 17:22:57 | 3768865 | 2020-05-29 17:22:57 | RT::Transaction |\n> | | | 0 | 0\n> 19163910 | 27922577 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n> 20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n> | | | 0 | 0\n> 19163911 | 27922578 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n> 20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n> | | | 0 | 0\n> 19163912 | 27922579 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n> 20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n> | | | 0 | 0\n> 19163913 | 27922580 | 375 | 2020-06-02 | 3768865 | 2020-06-03\n> 20:57:29 | 3768865 | 2020-06-03 20:57:29 | RT::Transaction |\n> | | | 0 | 0\n> 19163914 | 27922582 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n> 20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n> | | | 0 | 0\n> 19163915 | 27922583 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n> 20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n> | | | 0 | 0\n> 19163916 | 27922584 | 375 | 2020-06-01 | 3768865 | 2020-06-03\n> 20:57:52 | 3768865 | 2020-06-03 20:57:52 | RT::Transaction |\n> | | | 0 | 0\n> 19186439 | 27960807 | 375 | 2020-06-08 | 3197295 | 2020-06-08\n> 16:18:49 | 3197295 | 
2020-06-08 16:18:49 | RT::Transaction |\n> | | | 0 | 0\n> 19189227 | 27965582 | 375 | 2020-06-08 | 22 | 2020-06-08\n> 19:24:19 | 22 | 2020-06-08 19:24:19 | RT::Transaction |\n> | | | 0 | 0\n> 19189269 | 27965637 | 375 | 2020-06-08 | 402 | 2020-06-08\n> 19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n> | | | 0 | 0\n> 19189270 | 27965637 | 376 | 22 | 402 | 2020-06-08\n> 19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n> | | | 0 | 0\n> 19189271 | 27965638 | 375 | 2020-06-08 | 402 | 2020-06-08\n> 19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n> | | | 0 | 0\n> 19189272 | 27965638 | 376 | 22 | 402 | 2020-06-08\n> 19:25:20 | 402 | 2020-06-08 19:25:20 | RT::Transaction |\n> | | | 0 | 0\n> 19193472 | 27972893 | 375 | 2020-06-08 | 3197295 | 2020-06-09\n> 12:21:50 | 3197295 | 2020-06-09 12:21:50 | RT::Transaction |\n> | | | 0 | 0\n> 19204287 | 27991617 | 375 | 2020-06-10 | 3197295 | 2020-06-10\n> 15:52:41 | 3197295 | 2020-06-10 15:52:41 | RT::Transaction |\n> | | | 0 | 0\n> 19205446 | 27993528 | 375 | 2020-06-10 | 3768865 | 2020-06-10\n> 17:24:43 | 3768865 | 2020-06-10 17:24:43 | RT::Transaction |\n> | | | 0 | 0\n> 19226664 | 28019342 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n> 15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n> | | | 0 | 0\n> 19226665 | 28019343 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n> 15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n> | | | 0 | 0\n> 19226666 | 28019344 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n> 15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n> | | | 0 | 0\n> 19226667 | 28019345 | 375 | 2020-06-10 | 3768865 | 2020-06-11\n> 15:24:50 | 3768865 | 2020-06-11 15:24:50 | RT::Transaction |\n> | | | 0 | 0\n> 19233084 | 28030270 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n> 14:05:00 | 3197295 | 2020-06-12 14:05:00 | RT::Transaction |\n> | | | 0 | 0\n> 19235815 | 28034687 | 375 | 2020-06-12 | 84 | 2020-06-12\n> 17:57:02 | 84 | 2020-06-12 17:57:02 | RT::Transaction |\n> | | | 0 | 0\n> 19236305 | 28035519 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n> 18:29:25 | 3197295 | 2020-06-12 18:29:25 | RT::Transaction |\n> | | | 0 | 0\n> 19236386 | 28035692 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n> 18:36:56 | 3197295 | 2020-06-12 18:36:56 | RT::Transaction |\n> | | | 0 | 0\n> 19237416 | 28037412 | 375 | 2020-06-12 | 3197295 | 2020-06-12\n> 19:44:36 | 3197295 | 2020-06-12 19:44:36 | RT::Transaction |\n> | | | 0 | 0\n> 19238015 | 28038402 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n> 20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n> | | | 0 | 0\n> 19238016 | 28038403 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n> 20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n> | | | 0 | 0\n> 19238017 | 28038404 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n> 20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n> | | | 0 | 0\n> 19238018 | 28038405 | 375 | 2020-06-12 | 3768865 | 2020-06-12\n> 20:26:15 | 3768865 | 2020-06-12 20:26:15 | RT::Transaction |\n> | | | 0 | 0\n> 19238032 | 28038422 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n> 20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n> | | | 0 | 0\n> 19238033 | 28038423 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n> 20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n> | | | 0 | 0\n> 19238034 | 28038424 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n> 20:26:39 | 3768865 | 2020-06-12 20:26:39 | RT::Transaction |\n> | | | 0 | 0\n> 19238035 | 28038425 | 375 | 2020-06-11 | 3768865 | 2020-06-12\n> 20:26:39 | 3768865 | 2020-06-12 20:26:39 | 
RT::Transaction |\n> | | | 0 | 0\n> 19240041 | 28042208 | 375 | 2020-06-14 | 1403795 | 2020-06-14\n> 12:50:47 | 1403795 | 2020-06-14 12:50:47 | RT::Transaction |\n> | | | 0 | 0\n> 19242958 | 28046818 | 375 | 2020-06-15 | 3570362 | 2020-06-15\n> 14:38:57 | 3570362 | 2020-06-15 14:38:57 | RT::Transaction |\n> | | | 0 | 0\n> 19255465 | 28067560 | 375 | 2020-06-16 | 3570362 | 2020-06-16\n> 18:41:13 | 3570362 | 2020-06-16 18:41:13 | RT::Transaction |\n> | | | 0 | 0\n> 19279177 | 28108399 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n> 17:38:39 | 3768865 | 2020-06-19 17:38:39 | RT::Transaction |\n> | | | 0 | 0\n> 19279178 | 28108400 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n> 17:38:39 | 3768865 | 2020-06-19 17:38:39 | RT::Transaction |\n> | | | 0 | 0\n> 19279179 | 28108401 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n> 17:38:40 | 3768865 | 2020-06-19 17:38:40 | RT::Transaction |\n> | | | 0 | 0\n> 19279180 | 28108402 | 375 | 2020-06-18 | 3768865 | 2020-06-19\n> 17:38:40 | 3768865 | 2020-06-19 17:38:40 | RT::Transaction |\n> | | | 0 | 0\n> 19279193 | 28108419 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n> 17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n> | | | 0 | 0\n> 19279194 | 28108420 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n> 17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n> | | | 0 | 0\n> 19279195 | 28108421 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n> 17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n> | | | 0 | 0\n> 19279196 | 28108422 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n> 17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n> | | | 0 | 0\n> 19279197 | 28108423 | 375 | 2020-06-17 | 3768865 | 2020-06-19\n> 17:39:12 | 3768865 | 2020-06-19 17:39:12 | RT::Transaction |\n> | | | 0 | 0\n> \n> \n> They are just the time worked, so I do not understand why it is chosing\n> the crazy path that it does.\n> \n> Regards,\n> Ken\n> \n\nHere is another query that is showing the same selection of an index\nscan when without it is is soooo much faster:\n\n# explain (analyze,buffers) SELECT COUNT(DISTINCT main.id) FROM Assets\n# main JOIN Groups Groups_1 ON ( LOWER(Groups_1.Domain) =\n# 'rt::asset-role' ) AND ( Groups_1.Instance = main.id ) JOIN\n# CachedGroupMembers CachedGroupMembers_2 ON (\n# CachedGroupMembers_2.Disabled = '0' ) AND (\n# CachedGroupMembers_2.GroupId = Groups_1.id ) WHERE ( (\n# CachedGroupMembers_2.MemberId = '151395' ) ) AND (LOWER(main.Status)\n# != 'deleted');\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=12488.19..12488.20 rows=1 width=8) (actual\ntime=46.438..46.439 rows=1 loops=1)\n Buffers: shared hit=40111\n -> Nested Loop (cost=364.48..12488.19 rows=1 width=4) (actual\ntime=46.402..46.402 rows=0 loops=1)\n Buffers: shared hit=40111\n -> Hash Join (cost=363.16..12343.59 rows=59 width=8) (actual\ntime=4.111..11.633 rows=13194 loops=1)\n Hash Cond: (groups_1.instance = main.id)\n Buffers: shared hit=529\n -> Bitmap Heap Scan on groups groups_1\n(cost=186.22..12132.46 rows=13028 width=8) (actual time=0.918..3.492\nrows=13380 loops=1)\n Recheck Cond: (lower((domain)::text) =\n'rt::asset-role'::text)\n Heap Blocks: exact=390\n Buffers: shared hit=474\n -> Bitmap Index Scan on groups2\n(cost=0.00..182.97 rows=13028 width=0) (actual time=0.879..0.879\nrows=13380 loops=1)\n Index Cond: (lower((domain)::text) =\n'rt::asset-role'::text)\n Buffers: shared hit=84\n -> Hash (cost=121.66..121.66 
rows=4422 width=4) (actual\ntime=3.174..3.174 rows=4398 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 219kB\n Buffers: shared hit=55\n -> Seq Scan on assets main (cost=0.00..121.66\nrows=4422 width=4) (actual time=0.014..2.425 rows=4398 loops=1)\n Filter: (lower((status)::text) <>\n'deleted'::text)\n Rows Removed by Filter: 47\n Buffers: shared hit=55\n -> Bitmap Heap Scan on cachedgroupmembers cachedgroupmembers_2\n(cost=1.32..2.44 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=13194)\n Recheck Cond: ((groupid = groups_1.id) AND (memberid =\n151395) AND (disabled = '0'::smallint))\n Buffers: shared hit=39582\n -> Bitmap Index Scan on disgroumem (cost=0.00..1.32\nrows=1 width=0) (actual time=0.002..0.002 rows=0 loops=13194)\n Index Cond: ((groupid = groups_1.id) AND (memberid\n= 151395) AND (disabled = '0'::smallint))\n Buffers: shared hit=39582\n Planning Time: 0.520 ms\n Execution Time: 46.503 ms\n(29 rows)\n\nAnd with enable_indexscan = 1;\n\n# explain (analyze,buffers) SELECT COUNT(DISTINCT main.id) FROM Assets\n# main JOIN Groups Groups_1 ON ( LOWER(Groups_1.Domain) =\n# 'rt::asset-role' ) AND ( Groups_1.Instance = main.id ) JOIN\n# CachedGroupMembers CachedGroupMembers_2 ON (\n# CachedGroupMembers_2.Disabled = '0' ) AND (\n# CachedGroupMembers_2.GroupId = Groups_1.id ) WHERE ( (\n# CachedGroupMembers_2.MemberId = '151395' ) ) AND (LOWER(main.Status)\n# != 'deleted');\n QUERY\nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=563.50..563.51 rows=1 width=8) (actual\ntime=2626.584..2626.585 rows=1 loops=1)\n Buffers: shared hit=172390\n -> Nested Loop (cost=11.13..563.50 rows=1 width=4) (actual\ntime=2626.568..2626.568 rows=0 loops=1)\n Buffers: shared hit=172390\n -> Merge Join (cost=10.70..482.35 rows=59 width=8) (actual\ntime=0.352..2599.829 rows=13194 loops=1)\n Merge Cond: (main.id = groups_1.instance)\n Buffers: shared hit=132808\n -> Index Scan using assets_pkey on assets main\n(cost=0.28..160.81 rows=4422 width=4) (actual time=0.039..3.578\nrows=4398 loops=1)\n Filter: (lower((status)::text) <> 'deleted'::text)\n Rows Removed by Filter: 47\n Buffers: shared hit=103\n -> Index Scan using groups3 on groups groups_1\n(cost=0.43..130022.48 rows=13028 width=8) (actual time=0.296..2592.141\nrows=13380 loops=1)\n Filter: (lower((domain)::text) =\n'rt::asset-role'::text)\n Rows Removed by Filter: 3853979\n Buffers: shared hit=132705\n -> Index Only Scan using disgroumem on cachedgroupmembers\ncachedgroupmembers_2 (cost=0.43..1.37 rows=1 width=4) (actual\ntime=0.002..0.002 rows=0 loops=13194)\n Index Cond: ((groupid = groups_1.id) AND (memberid =\n151395) AND (disabled = '0'::smallint))\n Heap Fetches: 0\n Buffers: shared hit=39582\n Planning Time: 0.562 ms\n Execution Time: 2626.651 ms\n(21 rows)\n\nI'm not sure if it is just a pathological interaction of this\napplication with PostgreSQL or something I need to fix. 
Ideally, I could\nfigure out a way to have PostgreSQL do it automatically.\n\nRegards,\nKen\n\n\n\n", "msg_date": "Fri, 19 Jun 2020 17:37:16 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "[ please keep the mailing list cc'd ]\n\nKenneth Marshall <[email protected]> writes:\n> Here are the stats for articles.id:\n\n> 4,7,9,11,13,14,16,17,18,19,20,21,22,23,\n> 24,25,26,32,33,34,36,40,41,42,43,44,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,99,100,101,102,106,107,108,109,113,1 14,115,116,117,118,119,120,121,122,123,125,126,127,128,129,130,131,133,134,135,136,137,140,141,142,143,144,145,146,14 7,148,149,150,151,152,153,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177 ,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206, 207,208,1209,1210,1212,1213,1214,1215,1216,1219,1220,1221,1222,1223} \n> That completely matches the max(id) for articles.id.\n\nHm, well it's clear why the planner is going for the mergejoin strategy:\nit expects to only have to scan a very small fraction of the other table\nbefore it's up past objectid = 1223 and can stop merging. And it\nseems like it's right ...\n\n... oh, now I see: apparently, your filter condition is such that *no*\nrows of the objectcustomfieldvalues table get past the filter:\n\n -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n Rows Removed by Filter: 19030904\n\n\"rows=0\" is the telltale. So even after we're past objectid = 1223, that\nscan continues, because the mergejoin needs to see a higher key before it\nknows it can stop.\n\nThat's kind of annoying :-(. I wonder if there's a way to be smarter?\nThis case would work a lot better if the filter conditions were not\napplied till after the merge; but of course that wouldn't be an\nimprovement in general. Or maybe we should penalize the mergejoin\ncost estimate if there's a highly selective filter in the way.\n(It does look like the planner is correctly estimating that the\nfilter is quite selective --- it's just not considering the potential\nimpact on the scan-until-finding-a-greater-key behavior.)\n\nRight now I don't have any better suggestion than disabling mergejoin\nif you think the filter is going to be very selective.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jun 2020 19:07:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "I wrote:\n> ... oh, now I see: apparently, your filter condition is such that *no*\n> rows of the objectcustomfieldvalues table get past the filter:\n>\n> -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> Rows Removed by Filter: 19030904\n\n> That's kind of annoying :-(. 
I wonder if there's a way to be smarter?\n> This case would work a lot better if the filter conditions were not\n> applied till after the merge; but of course that wouldn't be an\n> improvement in general. Or maybe we should penalize the mergejoin\n> cost estimate if there's a highly selective filter in the way.\n\nI experimented with this some more, with the intention of creating a\nplanner patch that would do the latter, and was surprised to find that\nthere already is such a penalty. It's sort of indirect and undocumented,\nbut nonetheless the estimate for whether a mergejoin can stop early\ndoes get heavily de-rated if the planner realizes that the table is\nbeing heavily filtered. So now I'm thinking that your problem is that\n\"rows=915\" is not a good enough estimate of what will happen in this\nindexscan. It looks good in comparison to the table size of 19M rows,\nbut on a percentage basis compared to the true value of 0 rows, it's\nstill pretty bad. You said you'd increased the stats target for\nobjectcustomfieldvalues.objectid, but maybe the real problem is needing\nto increase the targets for content and largecontent, in hopes of driving\ndown the estimate for how many rows will pass this filter condition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jun 2020 14:22:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Sat, Jun 20, 2020 at 02:22:03PM -0400, Tom Lane wrote:\n> I wrote:\n> > ... oh, now I see: apparently, your filter condition is such that *no*\n> > rows of the objectcustomfieldvalues table get past the filter:\n> >\n> > -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> > Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> > Rows Removed by Filter: 19030904\n> \n> > That's kind of annoying :-(. I wonder if there's a way to be smarter?\n> > This case would work a lot better if the filter conditions were not\n> > applied till after the merge; but of course that wouldn't be an\n> > improvement in general. Or maybe we should penalize the mergejoin\n> > cost estimate if there's a highly selective filter in the way.\n> \n> I experimented with this some more, with the intention of creating a\n> planner patch that would do the latter, and was surprised to find that\n> there already is such a penalty. It's sort of indirect and undocumented,\n> but nonetheless the estimate for whether a mergejoin can stop early\n> does get heavily de-rated if the planner realizes that the table is\n> being heavily filtered. So now I'm thinking that your problem is that\n> \"rows=915\" is not a good enough estimate of what will happen in this\n> indexscan. It looks good in comparison to the table size of 19M rows,\n> but on a percentage basis compared to the true value of 0 rows, it's\n> still pretty bad. You said you'd increased the stats target for\n> objectcustomfieldvalues.objectid, but maybe the real problem is needing\n> to increase the targets for content and largecontent, in hopes of driving\n> down the estimate for how many rows will pass this filter condition.\n> \n> \t\t\tregards, tom lane\n\nHi Tom,\n\nI increased the statistics on the content field because it had the most\nvalues (19000000) versus largecontent (1000). 
When I reached 8000, the\nplan changed to:\n\n# explain (analyze,buffers) SELECT DISTINCT main.* FROM Articles main JOIN ObjectCustomFieldValues ObjectCustomFieldValues_1 ON ( ObjectCustomFieldValues_1.Disabled = '0' ) AND ( ObjectCustomFieldValues_1.ObjectId = main.id ) WHERE (ObjectCustomFieldValues_1.LargeContent ILIKE '%958575%' OR ObjectCustomFieldValues_1.Content ILIKE '%958575%') AND (main.Disabled = '0') ORDER BY main.SortOrder ASC, main.Name ASC;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=1151.07..1151.10 rows=1 width=137) (actual time=1.782..1.782 rows=0 loops=1)\n Buffers: shared hit=158\n -> Sort (cost=1151.07..1151.08 rows=1 width=137) (actual time=1.781..1.781 rows=0 loops=1)\n Sort Key: main.sortorder, main.name, main.id, main.summary, main.class, main.parent, main.uri, main.creator, main.created, main.lastupdatedby, main.lastupdated\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=158\n -> Hash Join (cost=185.12..1151.06 rows=1 width=137) (actual time=1.777..1.777 rows=0 loops=1)\n Hash Cond: (objectcustomfieldvalues_1.objectid = main.id)\n Buffers: shared hit=158\n -> Bitmap Heap Scan on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=177.36..1141.46 rows=699 width=4) (actual time=1.704..1.704 rows=0 loops=1)\n Recheck Cond: ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text))\n Filter: (disabled = 0)\n Buffers: shared hit=154\n -> BitmapOr (cost=177.36..177.36 rows=868 width=0) (actual time=1.703..1.703 rows=0 loops=1)\n Buffers: shared hit=154\n -> Bitmap Index Scan on objectcustomfieldvalues_largecontent_trgm (cost=0.00..30.80 rows=1 width=0) (actual time=0.282..0.282 rows=0 loops=1)\n Index Cond: (largecontent ~~* '%958575%'::text)\n Buffers: shared hit=28\n -> Bitmap Index Scan on objectcustomfieldvalues_content_trgm (cost=0.00..146.21 rows=868 width=0) (actual time=1.421..1.421 rows=0 loops=1)\n Index Cond: ((content)::text ~~* '%958575%'::text)\n Buffers: shared hit=126\n -> Hash (cost=5.91..5.91 rows=148 width=137) (actual time=0.071..0.071 rows=148 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 34kB\n Buffers: shared hit=4\n -> Seq Scan on articles main (cost=0.00..5.91 rows=148 width=137) (actual time=0.007..0.044 rows=148 loops=1)\n Filter: (disabled = '0'::smallint)\n Rows Removed by Filter: 5\n Buffers: shared hit=4\n Planning Time: 15.568 ms\n Execution Time: 1.818 ms\n(30 rows)\n\nTime: 17.679 ms\n\nIt is too bad that the statistics target has to be set that high to\nchoose correctly. I wonder if this class of problem would be handled by\nsome of the ideas discussed in other threads about pessimizing plans\nthat have exteremely large downsides. Thank you again for looking into\nthis and I have learned a lot.\n\nRegards,\nKen\n\n\n", "msg_date": "Sat, 20 Jun 2020 14:55:44 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On 2020-Jun-20, Tom Lane wrote:\n\n> I wrote:\n> > ... 
oh, now I see: apparently, your filter condition is such that *no*\n> > rows of the objectcustomfieldvalues table get past the filter:\n> >\n> > -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> > Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> > Rows Removed by Filter: 19030904\n\n> You said you'd increased the stats target for\n> objectcustomfieldvalues.objectid, but maybe the real problem is needing\n> to increase the targets for content and largecontent, in hopes of driving\n> down the estimate for how many rows will pass this filter condition.\n\n... but those on content and largecontent are unanchored conditions --\nare we still able to do any cardinality analysis using those? I thought\nnot. Maybe a trigram search would help? See contrib/pg_trgm -- as far\nas I remember that module is able to work with LIKE conditions.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:27:32 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "On Mon, Jun 22, 2020 at 03:27:32PM -0400, Alvaro Herrera wrote:\n> On 2020-Jun-20, Tom Lane wrote:\n> \n> > I wrote:\n> > > ... oh, now I see: apparently, your filter condition is such that *no*\n> > > rows of the objectcustomfieldvalues table get past the filter:\n> > >\n> > > -> Index Scan using objectcustomfieldvalues3 on objectcustomfieldvalues objectcustomfieldvalues_1 (cost=0.56..807603.40 rows=915 width=4) (actual time=21165.441..21165.441 rows=0 loops=1)\n> > > Filter: ((disabled = 0) AND ((largecontent ~~* '%958575%'::text) OR ((content)::text ~~* '%958575%'::text)))\n> > > Rows Removed by Filter: 19030904\n> \n> > You said you'd increased the stats target for\n> > objectcustomfieldvalues.objectid, but maybe the real problem is needing\n> > to increase the targets for content and largecontent, in hopes of driving\n> > down the estimate for how many rows will pass this filter condition.\n> \n> ... but those on content and largecontent are unanchored conditions --\n> are we still able to do any cardinality analysis using those? I thought\n> not. Maybe a trigram search would help? See contrib/pg_trgm -- as far\n> as I remember that module is able to work with LIKE conditions.\n> \n\nHi Alvaro,\n\nI do have a pg_trgm GIN index on those fields for the search.\n\nRegards,\nKen\n\n\n", "msg_date": "Mon, 22 Jun 2020 14:29:06 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2020-Jun-20, Tom Lane wrote:\n>> You said you'd increased the stats target for\n>> objectcustomfieldvalues.objectid, but maybe the real problem is needing\n>> to increase the targets for content and largecontent, in hopes of driving\n>> down the estimate for how many rows will pass this filter condition.\n\n> ... 
but those on content and largecontent are unanchored conditions --\n> are we still able to do any cardinality analysis using those?\n\nYes, if the stats histogram is large enough we'll apply it by just\nevaluating the query operator verbatim on each entry (thereby assuming\nthat the histogram is usable as a random sample). And we apply the\nquery condition on each MCV entry too (no assumptions needed there).\nThe unanchored LIKE conditions could not be used as btree indexquals,\nbut that has little to do with selectivity estimation.\n\nSince we bound those things at 10K entries, the histogram alone can't give\nbetter than 0.01% estimation precision, which in itself wouldn't have\ndone the job for the OP -- he needed a couple more places of accuracy\nthan that. I surmise that he had a nontrivial MCV population as well,\nsince he found that raising the stats target did eventually drive down\nthe estimate far enough to fix the problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jun 2020 15:39:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 12.3 slow index scan chosen" } ]
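The fixes discussed in the thread above, written out as SQL. This is only a sketch: it uses the table and column names quoted in the thread, and the 8000 target is simply the value the poster reported as sufficient (the hard maximum is 10000), not a general recommendation.

    -- Give ANALYZE a larger MCV list and histogram for the filtered columns,
    -- so the ILIKE selectivity estimate can drop low enough:
    ALTER TABLE objectcustomfieldvalues ALTER COLUMN content SET STATISTICS 8000;
    ALTER TABLE objectcustomfieldvalues ALTER COLUMN largecontent SET STATISTICS 8000;
    ANALYZE objectcustomfieldvalues;

    -- Or, per the suggestion in the thread, keep the planner off the merge join
    -- for just the highly selective search:
    BEGIN;
    SET LOCAL enable_mergejoin = off;
    -- ... run the SELECT DISTINCT main.* FROM Articles main JOIN ObjectCustomFieldValues ... query here ...
    COMMIT;

Raising the per-column statistics target helps because, as explained above, the planner evaluates the ILIKE operator against the sampled MCV and histogram entries; with a smaller sample the estimate stayed around 915 rows, which still made the early-stopping merge join look cheap.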
[ { "msg_contents": "> While you're waiting, you might think about recasting the query to\n> avoid the OR. Perhaps you could do a UNION of two scans of the\n> transactions table?\n\nThanks for the hint, I am well aware of the workaround for OR via UNION. I am not trying to improve this query per se as it is the small root of a very complex generated query and it's unfeasible to rewrite it to a UNION in this case.\n\nThe point of my message to the list was to highlight the misestimation, which I couldn't wrap my head around. Maybe this discussion can give some food for thought to someone who might tackle this in the future. It would surely be great to have selectivity estimate smarts for the generic case of OR-ed SubPlans someday.\n\n> \n> > Btw, I don't quite understand why the nested loop on contract only is expected to yield 31662 rows, when the null_frac of field transactions.contract is 1. Shouldn't that indicate zero rows or some kind of default minimum estimate for that query?\n> \n> That I don't understand. I get a minimal rowcount estimate for an\n> all-nulls outer table, as long as I'm using just one IN rather than\n\nYeah, that might be worth digging into. Is there any other info apart from those stats that I could provide?\n\n\n", "msg_date": "Fri, 19 Jun 2020 22:33:15 +0200", "msg_from": "\"Benjamin Coutu\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unclamped row estimates whith OR-ed subplans" } ]
[ { "msg_contents": "Hi Team,\n\nWe have a PostgreSQL 11.5.6 database running on VM.\nRAM - 48GB\nCPU - 6 cores\nDisk - SSD on SAN\n\nWe wanted to check how the WAL disk is performing using pg_test_fsync.We\nran a test and got around 870 ops/sec for opendatasync and fdatasync and\njust 430 ops/sec for fsync.We feel it is quite low as compared to what we\nget for local storage(2000 ops/sec for fsync).What is the recommended value\nfor fsync ops/sec for PosgreSQL WAL disks on SAN ?\n\nTest Results:\n\npg_test_fsync -f /WAL/pg_wal/test -s 120\n120 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 877.891 ops/sec 1139 usecs/op\n fdatasync 880.911 ops/sec 1135 usecs/op\n fsync 433.456 ops/sec 2307 usecs/op\n fsync_writethrough n/a\n open_sync 450.094 ops/sec 2222 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 439.119 ops/sec 2277 usecs/op\n fdatasync 898.221 ops/sec 1113 usecs/op\n fsync 456.887 ops/sec 2189 usecs/op\n fsync_writethrough n/a\n open_sync 229.973 ops/sec 4348 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write 453.444 ops/sec 2205 usecs/op\n 2 * 8kB open_sync writes 223.142 ops/sec 4481 usecs/op\n 4 * 4kB open_sync writes 116.360 ops/sec 8594 usecs/op\n 8 * 2kB open_sync writes 55.718 ops/sec 17948 usecs/op\n 16 * 1kB open_sync writes 27.766 ops/sec 36015 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 445.493 ops/sec 2245 usecs/op\n write, close, fsync 448.196 ops/sec 2231 usecs/op\n\nNon-sync'ed 8kB writes:\n write 132410.061 ops/sec 8 usecs/op\n\n\n\nThanks and Regards,\nNikhil\n\nHi Team,We have a PostgreSQL 11.5.6 database running on VM. 
RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?Test Results:pg_test_fsync -f /WAL/pg_wal/test -s 120\n120 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 877.891 ops/sec 1139 usecs/op\n fdatasync 880.911 ops/sec 1135 usecs/op\n fsync 433.456 ops/sec 2307 usecs/op\n fsync_writethrough n/a\n open_sync 450.094 ops/sec 2222 usecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync 439.119 ops/sec 2277 usecs/op\n fdatasync 898.221 ops/sec 1113 usecs/op\n fsync 456.887 ops/sec 2189 usecs/op\n fsync_writethrough n/a\n open_sync 229.973 ops/sec 4348 usecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write 453.444 ops/sec 2205 usecs/op\n 2 * 8kB open_sync writes 223.142 ops/sec 4481 usecs/op\n 4 * 4kB open_sync writes 116.360 ops/sec 8594 usecs/op\n 8 * 2kB open_sync writes 55.718 ops/sec 17948 usecs/op\n 16 * 1kB open_sync writes 27.766 ops/sec 36015 usecs/op\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 445.493 ops/sec 2245 usecs/op\n write, close, fsync 448.196 ops/sec 2231 usecs/op\n\nNon-sync'ed 8kB writes:\n write 132410.061 ops/sec 8 usecs/op\nThanks and Regards,Nikhil", "msg_date": "Mon, 29 Jun 2020 14:56:42 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Recommended value for pg_test_fsync" }, { "msg_contents": "On Mon, Jun 29, 2020 at 02:56:42PM +0530, Nikhil Shetty wrote:\n> Hi Team,\n> \n> We have a PostgreSQL 11.5.6 database running on VM.�\n> RAM - 48GB\n> CPU - 6 cores\n> Disk - SSD on SAN\n> \n> We wanted to check how the WAL disk is performing using pg_test_fsync.We ran a\n> test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops\n> /sec for fsync.We feel it is quite low as compared to what we get for local\n> storage(2000 ops/sec for fsync).What is the recommended value for fsync ops/sec\n> for PosgreSQL WAL disks on SAN ?\n\nWell, it is the VM and SAN overhead, I guess. 
open_datasync or\nfdatasync both seem good.\n\n---------------------------------------------------------------------------\n\n\n> \n> Test Results:\n> \n> pg_test_fsync -f /WAL/pg_wal/test -s 120\n> 120 seconds per test\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n> \n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync is Linux's default)\n> open_datasync 877.891 ops/sec 1139 usecs/op\n> fdatasync 880.911 ops/sec 1135 usecs/op\n> fsync 433.456 ops/sec 2307 usecs/op\n> fsync_writethrough n/a\n> open_sync 450.094 ops/sec 2222 usecs/op\n> \n> Compare file sync methods using two 8kB writes:\n> (in wal_sync_method preference order, except fdatasync is Linux's default)\n> open_datasync 439.119 ops/sec 2277 usecs/op\n> fdatasync 898.221 ops/sec 1113 usecs/op\n> fsync 456.887 ops/sec 2189 usecs/op\n> fsync_writethrough n/a\n> open_sync 229.973 ops/sec 4348 usecs/op\n> \n> Compare open_sync with different write sizes:\n> (This is designed to compare the cost of writing 16kB in different write\n> open_sync sizes.)\n> 1 * 16kB open_sync write 453.444 ops/sec 2205 usecs/op\n> 2 * 8kB open_sync writes 223.142 ops/sec 4481 usecs/op\n> 4 * 4kB open_sync writes 116.360 ops/sec 8594 usecs/op\n> 8 * 2kB open_sync writes 55.718 ops/sec 17948 usecs/op\n> 16 * 1kB open_sync writes 27.766 ops/sec 36015 usecs/op\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written on a different\n> descriptor.)\n> write, fsync, close 445.493 ops/sec 2245 usecs/op\n> write, close, fsync 448.196 ops/sec 2231 usecs/op\n> \n> Non-sync'ed 8kB writes:\n> write 132410.061 ops/sec 8 usecs/op\n> \n> \n> \n> Thanks and Regards,\n> Nikhil\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 12:06:38 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi Bruce,\n\nBased on pg_test_fsync results, should we choose open_datasync or fdatasync\nas wal_sync_method? Can we rely on pg_test_fsync for choosing the best\nwal_sync_method or is there any other way?\n\nThanks and Regards,\nNikhil\n\nOn Mon, Jun 29, 2020 at 9:36 PM Bruce Momjian <[email protected]> wrote:\n\n> On Mon, Jun 29, 2020 at 02:56:42PM +0530, Nikhil Shetty wrote:\n> > Hi Team,\n> >\n> > We have a PostgreSQL 11.5.6 database running on VM.\n> > RAM - 48GB\n> > CPU - 6 cores\n> > Disk - SSD on SAN\n> >\n> > We wanted to check how the WAL disk is performing using pg_test_fsync.We\n> ran a\n> > test and got around 870 ops/sec for opendatasync and fdatasync and just\n> 430 ops\n> > /sec for fsync.We feel it is quite low as compared to what we get for\n> local\n> > storage(2000 ops/sec for fsync).What is the recommended value for fsync\n> ops/sec\n> > for PosgreSQL WAL disks on SAN ?\n>\n> Well, it is the VM and SAN overhead, I guess. 
open_datasync or\n> fdatasync both seem good.\n>\n> ---------------------------------------------------------------------------\n>\n>\n> >\n> > Test Results:\n> >\n> > pg_test_fsync -f /WAL/pg_wal/test -s 120\n> > 120 seconds per test\n> > O_DIRECT supported on this platform for open_datasync and open_sync.\n> >\n> > Compare file sync methods using one 8kB write:\n> > (in wal_sync_method preference order, except fdatasync is Linux's\n> default)\n> > open_datasync 877.891 ops/sec 1139\n> usecs/op\n> > fdatasync 880.911 ops/sec 1135\n> usecs/op\n> > fsync 433.456 ops/sec 2307\n> usecs/op\n> > fsync_writethrough n/a\n> > open_sync 450.094 ops/sec 2222\n> usecs/op\n> >\n> > Compare file sync methods using two 8kB writes:\n> > (in wal_sync_method preference order, except fdatasync is Linux's\n> default)\n> > open_datasync 439.119 ops/sec 2277\n> usecs/op\n> > fdatasync 898.221 ops/sec 1113\n> usecs/op\n> > fsync 456.887 ops/sec 2189\n> usecs/op\n> > fsync_writethrough n/a\n> > open_sync 229.973 ops/sec 4348\n> usecs/op\n> >\n> > Compare open_sync with different write sizes:\n> > (This is designed to compare the cost of writing 16kB in different write\n> > open_sync sizes.)\n> > 1 * 16kB open_sync write 453.444 ops/sec 2205\n> usecs/op\n> > 2 * 8kB open_sync writes 223.142 ops/sec 4481\n> usecs/op\n> > 4 * 4kB open_sync writes 116.360 ops/sec 8594\n> usecs/op\n> > 8 * 2kB open_sync writes 55.718 ops/sec 17948\n> usecs/op\n> > 16 * 1kB open_sync writes 27.766 ops/sec 36015\n> usecs/op\n> >\n> > Test if fsync on non-write file descriptor is honored:\n> > (If the times are similar, fsync() can sync data written on a different\n> > descriptor.)\n> > write, fsync, close 445.493 ops/sec 2245\n> usecs/op\n> > write, close, fsync 448.196 ops/sec 2231\n> usecs/op\n> >\n> > Non-sync'ed 8kB writes:\n> > write 132410.061 ops/sec 8\n> usecs/op\n> >\n> >\n> >\n> > Thanks and Regards,\n> > Nikhil\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> The usefulness of a cup is in its emptiness, Bruce Lee\n>\n>\n\nHi Bruce,Based on pg_test_fsync results, should we choose open_datasync or fdatasync as wal_sync_method? Can we rely on pg_test_fsync for choosing the best wal_sync_method or is there any other way?Thanks and Regards,NikhilOn Mon, Jun 29, 2020 at 9:36 PM Bruce Momjian <[email protected]> wrote:On Mon, Jun 29, 2020 at 02:56:42PM +0530, Nikhil Shetty wrote:\n> Hi Team,\n> \n> We have a PostgreSQL 11.5.6 database running on VM. \n> RAM - 48GB\n> CPU - 6 cores\n> Disk - SSD on SAN\n> \n> We wanted to check how the WAL disk is performing using pg_test_fsync.We ran a\n> test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops\n> /sec for fsync.We feel it is quite low as compared to what we get for local\n> storage(2000 ops/sec for fsync).What is the recommended value for fsync ops/sec\n> for PosgreSQL WAL disks on SAN ?\n\nWell, it is the VM and SAN overhead, I guess.  
open_datasync or\nfdatasync both seem good.\n\n---------------------------------------------------------------------------\n\n\n> \n> Test Results:\n> \n> pg_test_fsync -f /WAL/pg_wal/test -s 120\n> 120 seconds per test\n> O_DIRECT supported on this platform for open_datasync and open_sync.\n> \n> Compare file sync methods using one 8kB write:\n> (in wal_sync_method preference order, except fdatasync is Linux's default)\n>         open_datasync                       877.891 ops/sec    1139 usecs/op\n>         fdatasync                           880.911 ops/sec    1135 usecs/op\n>         fsync                               433.456 ops/sec    2307 usecs/op\n>         fsync_writethrough                              n/a\n>         open_sync                           450.094 ops/sec    2222 usecs/op\n> \n> Compare file sync methods using two 8kB writes:\n> (in wal_sync_method preference order, except fdatasync is Linux's default)\n>         open_datasync                       439.119 ops/sec    2277 usecs/op\n>         fdatasync                           898.221 ops/sec    1113 usecs/op\n>         fsync                               456.887 ops/sec    2189 usecs/op\n>         fsync_writethrough                              n/a\n>         open_sync                           229.973 ops/sec    4348 usecs/op\n> \n> Compare open_sync with different write sizes:\n> (This is designed to compare the cost of writing 16kB in different write\n> open_sync sizes.)\n>          1 * 16kB open_sync write           453.444 ops/sec    2205 usecs/op\n>          2 *  8kB open_sync writes          223.142 ops/sec    4481 usecs/op\n>          4 *  4kB open_sync writes          116.360 ops/sec    8594 usecs/op\n>          8 *  2kB open_sync writes           55.718 ops/sec   17948 usecs/op\n>         16 *  1kB open_sync writes           27.766 ops/sec   36015 usecs/op\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written on a different\n> descriptor.)\n>         write, fsync, close                 445.493 ops/sec    2245 usecs/op\n>         write, close, fsync                 448.196 ops/sec    2231 usecs/op\n> \n> Non-sync'ed 8kB writes:\n>         write                            132410.061 ops/sec       8 usecs/op\n> \n> \n> \n> Thanks and Regards,\n> Nikhil\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n  The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Tue, 30 Jun 2020 10:32:13 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "On Tue, Jun 30, 2020 at 10:32:13AM +0530, Nikhil Shetty wrote:\n> Hi Bruce,\n> \n> Based on pg_test_fsync results, should we choose open_datasync or fdatasync as\n> wal_sync_method? 
Can we rely on pg_test_fsync for choosing the best\n\nI would just pick the fastest method, but if the method is _too_ fast,\nit might mean that it isn't actually writing to durable storage.\n\n> wal_sync_method or is there any other way?\n\npg_test_fsync is the only way I know of, which is why I wrote it.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Tue, 30 Jun 2020 11:24:08 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi Team,\n>\n> We have a PostgreSQL 11.5.6 database running on VM.\n> RAM - 48GB\n> CPU - 6 cores\n> Disk - SSD on SAN\n>\n> We wanted to check how the WAL disk is performing using pg_test_fsync.We\n> ran a test and got around 870 ops/sec for opendatasync and fdatasync and\n> just 430 ops/sec for fsync.We feel it is quite low as compared to what we\n> get for local storage(2000 ops/sec for fsync).\n>\n\nIt is not surprising to me that SAN would have higher latency than internal\nstorage. What kind of connection do you have between your server and your\nSAN?\n\n\n> What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on\n> SAN ?\n>\n\nYou have the hardware you have. You can't change it the same way you can\nchange a config file entry, so I don't think that \"recommended value\"\nreally applies. Is the latency of sync requests a major bottleneck for\nyour workload? pg_test_fsync can tell you what the latency is, but can't\ntell you how much you care.\n\nCheers,\n\nJeff\n\n>\n\nOn Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff", "msg_date": "Tue, 30 Jun 2020 12:21:23 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "On Tue, Jun 30, 2020 at 1:02 AM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi Bruce,\n>\n> Based on pg_test_fsync results, should we choose open_datasync or\n> fdatasync as wal_sync_method? Can we rely on pg_test_fsync for choosing the\n> best wal_sync_method or is there any other way?\n>\n\nProbably the default of fdatasync. The place where pg_test_fsync would\ntell me not to use fdatasync is if it were so fast that it was not credible\nthat it was honestly syncing the data. 
I don't think pg_test_fsync does a\ngood job of exercising the realistic differences between fdatasync and\nopen_datasync. So unless it shows that one of them is lying about the\ndurability, it doesn't offer much help.\n\nCheers,\n\nJeff\n\nOn Tue, Jun 30, 2020 at 1:02 AM Nikhil Shetty <[email protected]> wrote:Hi Bruce,Based on pg_test_fsync results, should we choose open_datasync or fdatasync as wal_sync_method? Can we rely on pg_test_fsync for choosing the best wal_sync_method or is there any other way?Probably the default of fdatasync.  The place where pg_test_fsync would tell me not to use \n\nfdatasync\n\n is if it were so fast that it was not credible that it was honestly syncing the data.  I don't think pg_test_fsync does a good job of exercising the realistic differences between fdatasync and open_datasync.  So unless it shows that one of them is lying about the durability, it doesn't offer much help. Cheers,Jeff", "msg_date": "Tue, 30 Jun 2020 13:26:56 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi Bruce,\n\nThank you. We may stick with fdatasync for now.\n\nThanks and regards,\nNikhil\n\nOn Tue, Jun 30, 2020 at 8:54 PM Bruce Momjian <[email protected]> wrote:\n\n> On Tue, Jun 30, 2020 at 10:32:13AM +0530, Nikhil Shetty wrote:\n> > Hi Bruce,\n> >\n> > Based on pg_test_fsync results, should we choose open_datasync or\n> fdatasync as\n> > wal_sync_method? Can we rely on pg_test_fsync for choosing the best\n>\n> I would just pick the fastest method, but if the method is _too_ fast,\n> it might mean that it isn't actually writing to durable storage.\n>\n> > wal_sync_method or is there any other way?\n>\n> pg_test_fsync is the only way I know of, which is why I wrote it.\n>\n> --\n> Bruce Momjian <[email protected]> https://momjian.us\n> EnterpriseDB https://enterprisedb.com\n>\n> The usefulness of a cup is in its emptiness, Bruce Lee\n>\n>\n\nHi Bruce,Thank you. We may stick with fdatasync for now.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 8:54 PM Bruce Momjian <[email protected]> wrote:On Tue, Jun 30, 2020 at 10:32:13AM +0530, Nikhil Shetty wrote:\n> Hi Bruce,\n> \n> Based on pg_test_fsync results, should we choose open_datasync or fdatasync as\n> wal_sync_method? Can we rely on pg_test_fsync for choosing the best\n\nI would just pick the fastest method, but if the method is _too_ fast,\nit might mean that it isn't actually writing to durable storage.\n\n> wal_sync_method or is there any other way?\n\npg_test_fsync is the only way I know of, which is why I wrote it.\n\n-- \n  Bruce Momjian  <[email protected]>        https://momjian.us\n  EnterpriseDB                             https://enterprisedb.com\n\n  The usefulness of a cup is in its emptiness, Bruce Lee", "msg_date": "Wed, 1 Jul 2020 22:25:58 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi Jeff,\n\nThank you for your inputs. We may stick with fdatasync for now. 
We will get\nmore details on connection details between SAN and server from the storage\nteam and update this thread.\n\nStorage is Hitachi G900 with 41Gbps bandwidth.\n\nThanks and regards,\nNikhil\n\n\n\nOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\n> wrote:\n>\n>> Hi Team,\n>>\n>> We have a PostgreSQL 11.5.6 database running on VM.\n>> RAM - 48GB\n>> CPU - 6 cores\n>> Disk - SSD on SAN\n>>\n>> We wanted to check how the WAL disk is performing using pg_test_fsync.We\n>> ran a test and got around 870 ops/sec for opendatasync and fdatasync and\n>> just 430 ops/sec for fsync.We feel it is quite low as compared to what we\n>> get for local storage(2000 ops/sec for fsync).\n>>\n>\n> It is not surprising to me that SAN would have higher latency than\n> internal storage. What kind of connection do you have between your server\n> and your SAN?\n>\n>\n>> What is the recommended value for fsync ops/sec for PosgreSQL WAL disks\n>> on SAN ?\n>>\n>\n> You have the hardware you have. You can't change it the same way you can\n> change a config file entry, so I don't think that \"recommended value\"\n> really applies. Is the latency of sync requests a major bottleneck for\n> your workload? pg_test_fsync can tell you what the latency is, but can't\n> tell you how much you care.\n>\n> Cheers,\n>\n> Jeff\n>\n>>\n\nHi Jeff,Thank you for your inputs. We may stick with fdatasync for now. We will get more details on connection details between SAN and server from the storage team and update this thread.Storage is Hitachi G900 with 41Gbps bandwidth.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff", "msg_date": "Wed, 1 Jul 2020 22:36:25 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi Jeff,\n\nTo avoid confusion, Hitachi Storage G900 has 41Gbps Performance bandwidth\n(Throughput) and 10Gbps N/W bandwidth.\n\nThanks and Regards,\nNikhil\n\nOn Wed, Jul 1, 2020 at 10:36 PM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi Jeff,\n>\n> Thank you for your inputs. We may stick with fdatasync for now. 
We will\n> get more details on connection details between SAN and server from the\n> storage team and update this thread.\n>\n> Storage is Hitachi G900 with 41Gbps bandwidth.\n>\n> Thanks and regards,\n> Nikhil\n>\n>\n>\n> On Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\n>> wrote:\n>>\n>>> Hi Team,\n>>>\n>>> We have a PostgreSQL 11.5.6 database running on VM.\n>>> RAM - 48GB\n>>> CPU - 6 cores\n>>> Disk - SSD on SAN\n>>>\n>>> We wanted to check how the WAL disk is performing using pg_test_fsync.We\n>>> ran a test and got around 870 ops/sec for opendatasync and fdatasync and\n>>> just 430 ops/sec for fsync.We feel it is quite low as compared to what we\n>>> get for local storage(2000 ops/sec for fsync).\n>>>\n>>\n>> It is not surprising to me that SAN would have higher latency than\n>> internal storage. What kind of connection do you have between your server\n>> and your SAN?\n>>\n>>\n>>> What is the recommended value for fsync ops/sec for PosgreSQL WAL disks\n>>> on SAN ?\n>>>\n>>\n>> You have the hardware you have. You can't change it the same way you can\n>> change a config file entry, so I don't think that \"recommended value\"\n>> really applies. Is the latency of sync requests a major bottleneck for\n>> your workload? pg_test_fsync can tell you what the latency is, but can't\n>> tell you how much you care.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>>>\n\nHi Jeff,To avoid confusion, Hitachi Storage G900 has 41Gbps Performance bandwidth (Throughput) and 10Gbps N/W bandwidth.Thanks and Regards,NikhilOn Wed, Jul 1, 2020 at 10:36 PM Nikhil Shetty <[email protected]> wrote:Hi Jeff,Thank you for your inputs. We may stick with fdatasync for now. We will get more details on connection details between SAN and server from the storage team and update this thread.Storage is Hitachi G900 with 41Gbps bandwidth.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff", "msg_date": "Wed, 1 Jul 2020 22:43:52 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hello Nikhil,\nWe had performance issues with our Dell SC2020 storage in the past. 
We had\na 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but were\ngetting 2K...\nAfter *a lot* of work the issue was not with the storage itself but with\nthe I/O scheduler of the filesystem (EXT4/Debian 9).\nThe default scheduler is CFQ, changing to deadline provided us the 10x\ndifference that we were expecting.\nIn the end this was buried on the storage documentation that\nsomehow slipped us...\nHope this helps.\nRegards,\nHaroldo Kerry\n\nOn Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]> wrote:\n\n> Hi Jeff,\n>\n> Thank you for your inputs. We may stick with fdatasync for now. We will\n> get more details on connection details between SAN and server from the\n> storage team and update this thread.\n>\n> Storage is Hitachi G900 with 41Gbps bandwidth.\n>\n> Thanks and regards,\n> Nikhil\n>\n>\n>\n> On Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\n>> wrote:\n>>\n>>> Hi Team,\n>>>\n>>> We have a PostgreSQL 11.5.6 database running on VM.\n>>> RAM - 48GB\n>>> CPU - 6 cores\n>>> Disk - SSD on SAN\n>>>\n>>> We wanted to check how the WAL disk is performing using pg_test_fsync.We\n>>> ran a test and got around 870 ops/sec for opendatasync and fdatasync and\n>>> just 430 ops/sec for fsync.We feel it is quite low as compared to what we\n>>> get for local storage(2000 ops/sec for fsync).\n>>>\n>>\n>> It is not surprising to me that SAN would have higher latency than\n>> internal storage. What kind of connection do you have between your server\n>> and your SAN?\n>>\n>>\n>>> What is the recommended value for fsync ops/sec for PosgreSQL WAL disks\n>>> on SAN ?\n>>>\n>>\n>> You have the hardware you have. You can't change it the same way you can\n>> change a config file entry, so I don't think that \"recommended value\"\n>> really applies. Is the latency of sync requests a major bottleneck for\n>> your workload? pg_test_fsync can tell you what the latency is, but can't\n>> tell you how much you care.\n>>\n>> Cheers,\n>>\n>> Jeff\n>>\n>>>\n\n-- \n\nHaroldo Kerry\n\nCTO/COO\n\nRua do Rócio, 220, 7° andar, conjunto 72\n\nSão Paulo – SP / CEP 04552-000\n\[email protected]\n\nwww.callix.com.br\n\nHello Nikhil,We had performance issues with our Dell SC2020 storage in the past. We had a 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but were getting 2K...After *a lot* of work the issue was not with the storage itself but with the I/O scheduler of the filesystem (EXT4/Debian 9).The default scheduler is CFQ, changing to deadline provided us the 10x difference that we were expecting.In the end this was buried on the storage documentation that somehow slipped us...Hope this helps.Regards,Haroldo KerryOn Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]> wrote:Hi Jeff,Thank you for your inputs. We may stick with fdatasync for now. We will get more details on connection details between SAN and server from the storage team and update this thread.Storage is Hitachi G900 with 41Gbps bandwidth.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. 
RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff\n\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]", "msg_date": "Wed, 1 Jul 2020 14:16:44 -0300", "msg_from": "Haroldo Kerry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi Haroldo,\n\nThank you for the details.\n\nWe are using xfs on IBM Power Linux Rhel7 but I will check this in our\nenvironment and get back to you with the results.\n\nThanks and Regards,\nNikhil\n\n\nOn Wed, Jul 1, 2020, 22:46 Haroldo Kerry <[email protected]> wrote:\n\n> Hello Nikhil,\n> We had performance issues with our Dell SC2020 storage in the past. We had\n> a 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but were\n> getting 2K...\n> After *a lot* of work the issue was not with the storage itself but with\n> the I/O scheduler of the filesystem (EXT4/Debian 9).\n> The default scheduler is CFQ, changing to deadline provided us the 10x\n> difference that we were expecting.\n> In the end this was buried on the storage documentation that\n> somehow slipped us...\n> Hope this helps.\n> Regards,\n> Haroldo Kerry\n>\n> On Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]>\n> wrote:\n>\n>> Hi Jeff,\n>>\n>> Thank you for your inputs. We may stick with fdatasync for now. We will\n>> get more details on connection details between SAN and server from the\n>> storage team and update this thread.\n>>\n>> Storage is Hitachi G900 with 41Gbps bandwidth.\n>>\n>> Thanks and regards,\n>> Nikhil\n>>\n>>\n>>\n>> On Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:\n>>\n>>> On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\n>>> wrote:\n>>>\n>>>> Hi Team,\n>>>>\n>>>> We have a PostgreSQL 11.5.6 database running on VM.\n>>>> RAM - 48GB\n>>>> CPU - 6 cores\n>>>> Disk - SSD on SAN\n>>>>\n>>>> We wanted to check how the WAL disk is performing using\n>>>> pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and\n>>>> fdatasync and just 430 ops/sec for fsync.We feel it is quite low as\n>>>> compared to what we get for local storage(2000 ops/sec for fsync).\n>>>>\n>>>\n>>> It is not surprising to me that SAN would have higher latency than\n>>> internal storage. What kind of connection do you have between your server\n>>> and your SAN?\n>>>\n>>>\n>>>> What is the recommended value for fsync ops/sec for PosgreSQL WAL disks\n>>>> on SAN ?\n>>>>\n>>>\n>>> You have the hardware you have. You can't change it the same way you\n>>> can change a config file entry, so I don't think that \"recommended value\"\n>>> really applies. 
Is the latency of sync requests a major bottleneck for\n>>> your workload? pg_test_fsync can tell you what the latency is, but can't\n>>> tell you how much you care.\n>>>\n>>> Cheers,\n>>>\n>>> Jeff\n>>>\n>>>>\n>\n> --\n>\n> Haroldo Kerry\n>\n> CTO/COO\n>\n> Rua do Rócio, 220, 7° andar, conjunto 72\n>\n> São Paulo – SP / CEP 04552-000\n>\n> [email protected]\n>\n> www.callix.com.br\n>\n\nHi Haroldo,Thank you for the details.We are using xfs on IBM Power Linux Rhel7 but I will check this in our environment and get back to you with the results.Thanks and Regards,NikhilOn Wed, Jul 1, 2020, 22:46 Haroldo Kerry <[email protected]> wrote:Hello Nikhil,We had performance issues with our Dell SC2020 storage in the past. We had a 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but were getting 2K...After *a lot* of work the issue was not with the storage itself but with the I/O scheduler of the filesystem (EXT4/Debian 9).The default scheduler is CFQ, changing to deadline provided us the 10x difference that we were expecting.In the end this was buried on the storage documentation that somehow slipped us...Hope this helps.Regards,Haroldo KerryOn Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]> wrote:Hi Jeff,Thank you for your inputs. We may stick with fdatasync for now. We will get more details on connection details between SAN and server from the storage team and update this thread.Storage is Hitachi G900 with 41Gbps bandwidth.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff\n\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]", "msg_date": "Wed, 1 Jul 2020 23:13:53 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "Hi,\n\nThe client has done benchmark tests on available storage using a storage\nbenchmark tool and got IOPS of around 14k on iSCSI and around 150k on HBA\nchannel, which seems a good number but pg_test_fysnc gives numbers which\nare not reflecting good op/sec. 
Though pg_test_fysnc result should not be\ncompared to benchmark throughput but both are indicative of overall\ndatabase performance.\nWAL sync should not become a bottleneck during actual production workload.\n\nThanks and Regards,\nNikhil\n\nOn Wed, Jul 1, 2020 at 11:13 PM Nikhil Shetty <[email protected]>\nwrote:\n\n> Hi Haroldo,\n>\n> Thank you for the details.\n>\n> We are using xfs on IBM Power Linux Rhel7 but I will check this in our\n> environment and get back to you with the results.\n>\n> Thanks and Regards,\n> Nikhil\n>\n>\n> On Wed, Jul 1, 2020, 22:46 Haroldo Kerry <[email protected]> wrote:\n>\n>> Hello Nikhil,\n>> We had performance issues with our Dell SC2020 storage in the past. We\n>> had a 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but\n>> were getting 2K...\n>> After *a lot* of work the issue was not with the storage itself but with\n>> the I/O scheduler of the filesystem (EXT4/Debian 9).\n>> The default scheduler is CFQ, changing to deadline provided us the 10x\n>> difference that we were expecting.\n>> In the end this was buried on the storage documentation that\n>> somehow slipped us...\n>> Hope this helps.\n>> Regards,\n>> Haroldo Kerry\n>>\n>> On Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]>\n>> wrote:\n>>\n>>> Hi Jeff,\n>>>\n>>> Thank you for your inputs. We may stick with fdatasync for now. We will\n>>> get more details on connection details between SAN and server from the\n>>> storage team and update this thread.\n>>>\n>>> Storage is Hitachi G900 with 41Gbps bandwidth.\n>>>\n>>> Thanks and regards,\n>>> Nikhil\n>>>\n>>>\n>>>\n>>> On Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:\n>>>\n>>>> On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hi Team,\n>>>>>\n>>>>> We have a PostgreSQL 11.5.6 database running on VM.\n>>>>> RAM - 48GB\n>>>>> CPU - 6 cores\n>>>>> Disk - SSD on SAN\n>>>>>\n>>>>> We wanted to check how the WAL disk is performing using\n>>>>> pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and\n>>>>> fdatasync and just 430 ops/sec for fsync.We feel it is quite low as\n>>>>> compared to what we get for local storage(2000 ops/sec for fsync).\n>>>>>\n>>>>\n>>>> It is not surprising to me that SAN would have higher latency than\n>>>> internal storage. What kind of connection do you have between your server\n>>>> and your SAN?\n>>>>\n>>>>\n>>>>> What is the recommended value for fsync ops/sec for PosgreSQL WAL\n>>>>> disks on SAN ?\n>>>>>\n>>>>\n>>>> You have the hardware you have. You can't change it the same way you\n>>>> can change a config file entry, so I don't think that \"recommended value\"\n>>>> really applies. Is the latency of sync requests a major bottleneck for\n>>>> your workload? pg_test_fsync can tell you what the latency is, but can't\n>>>> tell you how much you care.\n>>>>\n>>>> Cheers,\n>>>>\n>>>> Jeff\n>>>>\n>>>>>\n>>\n>> --\n>>\n>> Haroldo Kerry\n>>\n>> CTO/COO\n>>\n>> Rua do Rócio, 220, 7° andar, conjunto 72\n>>\n>> São Paulo – SP / CEP 04552-000\n>>\n>> [email protected]\n>>\n>> www.callix.com.br\n>>\n>\n\nHi,The client has done benchmark tests on available storage using a storage benchmark tool and got IOPS of around 14k on iSCSI  and around 150k on HBA channel, which seems a good number but pg_test_fysnc gives numbers which are not reflecting good op/sec. 
Though pg_test_fysnc result should not be compared to benchmark throughput but both are indicative of overall database performance.WAL sync should not become a bottleneck during actual production workload.Thanks and Regards,NikhilOn Wed, Jul 1, 2020 at 11:13 PM Nikhil Shetty <[email protected]> wrote:Hi Haroldo,Thank you for the details.We are using xfs on IBM Power Linux Rhel7 but I will check this in our environment and get back to you with the results.Thanks and Regards,NikhilOn Wed, Jul 1, 2020, 22:46 Haroldo Kerry <[email protected]> wrote:Hello Nikhil,We had performance issues with our Dell SC2020 storage in the past. We had a 6 SSD RAID10 setup and due all the latencies expected 20K IOPS but were getting 2K...After *a lot* of work the issue was not with the storage itself but with the I/O scheduler of the filesystem (EXT4/Debian 9).The default scheduler is CFQ, changing to deadline provided us the 10x difference that we were expecting.In the end this was buried on the storage documentation that somehow slipped us...Hope this helps.Regards,Haroldo KerryOn Wed, Jul 1, 2020 at 2:06 PM Nikhil Shetty <[email protected]> wrote:Hi Jeff,Thank you for your inputs. We may stick with fdatasync for now. We will get more details on connection details between SAN and server from the storage team and update this thread.Storage is Hitachi G900 with 41Gbps bandwidth.Thanks and regards,NikhilOn Tue, Jun 30, 2020 at 9:51 PM Jeff Janes <[email protected]> wrote:On Mon, Jun 29, 2020 at 5:27 AM Nikhil Shetty <[email protected]> wrote:Hi Team,We have a PostgreSQL 11.5.6 database running on VM. RAM - 48GBCPU - 6 coresDisk - SSD on SANWe wanted to check how the WAL disk is performing using pg_test_fsync.We ran a test and got around 870 ops/sec for opendatasync and fdatasync and just 430 ops/sec for fsync.We feel it is quite low as compared to what we get for local storage(2000 ops/sec for fsync).It is not surprising to me that SAN would have higher latency than internal storage.  What kind of connection do you have between your server and your SAN? What is the recommended value for fsync ops/sec for PosgreSQL WAL disks on SAN ?You have the hardware you have.  You can't change it the same way you can change a config file entry, so I don't think that \"recommended value\" really applies.  Is the latency of sync requests a major bottleneck for your workload? pg_test_fsync can tell you what the latency is, but can't tell you how much you care. Cheers,Jeff\n\n\n-- Haroldo KerryCTO/COORua do Rócio, 220, 7° andar, conjunto 72São Paulo – SP / CEP [email protected]", "msg_date": "Wed, 1 Jul 2020 23:41:23 +0530", "msg_from": "Nikhil Shetty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recommended value for pg_test_fsync" }, { "msg_contents": "On Wed, Jul 1, 2020 at 11:41:23PM +0530, Nikhil Shetty wrote:\n> Hi,\n> \n> The client has done benchmark tests on available storage using a storage\n> benchmark tool and got IOPS of around 14k on iSCSI �and around 150k on HBA\n> channel, which seems a good number but pg_test_fysnc gives numbers which are\n> not reflecting good op/sec. Though pg_test_fysnc result should not be compared\n> to benchmark throughput but both are indicative of overall database\n> performance.\n\nWell, by definition, pg_test_fsync asks for fsync after every set of\nwrites. 
Only the last report, \"Non-sync'ed 8kB writes:\" gives non-fsync\nperformance.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Wed, 1 Jul 2020 15:59:58 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recommended value for pg_test_fsync" } ]
[ { "msg_contents": "Hi all,\n\nlong time ago I devised with your help a task queuing system which uses \nSELECT ... FOR UPDATE SKIP LOCKED for many parallel workers to find \ntasks in the queue, and it used a partitioned table where the hot part \nof the queue is short and so the query for a job is quick and the skip \nlocked locking makes sure that one job is only assigned to one worker. \nAnd this works pretty well for me, except that when we run many workers \nwe find a lot of these failures occurring:\n\n\"tuple to be locked was already moved to another partition due to \nconcurrent update\"\n\nThis would not exactly look like a bug, because the message says \"to be \nlocked\", so at least it's not allowing two workers to lock the same \ntuple. But it seems that the skip-locked mode should not make an error \nout of this, but treat it as the tuple was already locked. Why would it \nwant to lock the tuple (representing the job) if another worker has \nalready finished his UPDATE of the job to mark it as \"done\" (which is \nwhat makes the tuple move to the \"completed\" partition.)\n\nEither the SELECT for jobs to do returned a wrong tuple, which was \nalready update, or there is some lapse in the locking.\n\nEither way it would seem to be a waste of time throwing all these errors \nwhen the tuple should not even have been selected for update and locking.\n\nI wonder if anybody knows anything about that issue? Of course you'll \nwant to see the DDL and SQL queries, etc. but you can't really try it \nout unless you do some massively parallel magic. So I figured I just ask.\n\nregards,\n-Gunther\n\n\n\n", "msg_date": "Tue, 30 Jun 2020 12:10:17 -0400", "msg_from": "Gunther Schadow <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a known bug with SKIP LOCKED and \"tuple to be locked was\n already moved to another partition due to concurrent update\"?" }, { "msg_contents": "Hi Gunther & List,\n\nI think I have an extremely similar issue and things point in the same \ndirection of a potential issue for skip locked on partitioned tables.\n\nBackground is I had a queue table on v9.6 with fairly high volume (>50M \nitems, growth in the 1+M/daily).\n\nProcessing the queue with FOR UPDATE SKIP LOCKED was reliable but \ntraffic volumes on v9.6 and the fact v12 is current let to migrating to \nv12 and using a partitioned table.\n\nQueue has distinct categories of items, so the table is partitioned by \nlist on each category.  Processing in 1 category results in it being \nupdated to the next logical category (i.e. it will migrate partition \nonce it is processed).\n\nWithin each category, there can be 10'sM rows, so the list partition is \nhash partitioned as well.  I don't think this is the issue but is \nmentioned for completeness.\n\nNow, when processing the queue, there are regular transaction aborts \nwith \"tuple to be locked was already moved to another partition due to \nconcurrent update\".\n\n From everything I can trace, it really does look like this is caused by \nrows which should be locked/skipped as they are processed by a different \nthread.\n\nI tried switching 'for update' to 'for key share' and that created a \ncascade of deadlock aborts, so was worse for my situation.\n\nFor now, I roll back and repeat the select for update skip locked until \nit succeeds - which it eventually does.\n\nHowever, it really feels like these should just have been skipped by \nPostgreSQL without the rollback/retry until success.\n\nSo, am I missing something/doing it wrong?  
Or could there be a \npotential issue that needs raised?\n\nThanks\n\nJim\n\n\n\nOn 30-Jun.-2020 12:10, Gunther Schadow wrote:\n> Hi all,\n>\n> long time ago I devised with your help a task queuing system which \n> uses SELECT ... FOR UPDATE SKIP LOCKED for many parallel workers to \n> find tasks in the queue, and it used a partitioned table where the hot \n> part of the queue is short and so the query for a job is quick and the \n> skip locked locking makes sure that one job is only assigned to one \n> worker. And this works pretty well for me, except that when we run \n> many workers we find a lot of these failures occurring:\n>\n> \"tuple to be locked was already moved to another partition due to \n> concurrent update\"\n>\n> This would not exactly look like a bug, because the message says \"to \n> be locked\", so at least it's not allowing two workers to lock the same \n> tuple. But it seems that the skip-locked mode should not make an error \n> out of this, but treat it as the tuple was already locked. Why would \n> it want to lock the tuple (representing the job) if another worker has \n> already finished his UPDATE of the job to mark it as \"done\" (which is \n> what makes the tuple move to the \"completed\" partition.)\n>\n> Either the SELECT for jobs to do returned a wrong tuple, which was \n> already update, or there is some lapse in the locking.\n>\n> Either way it would seem to be a waste of time throwing all these \n> errors when the tuple should not even have been selected for update \n> and locking.\n>\n> I wonder if anybody knows anything about that issue? Of course you'll \n> want to see the DDL and SQL queries, etc. but you can't really try it \n> out unless you do some massively parallel magic. So I figured I just ask.\n>\n> regards,\n> -Gunther\n>\n>\n>\n>\n>\n\n\n\n\n\n\nHi Gunther & List,\nI think I have an extremely similar issue and things point in the\n same direction of a potential issue for skip locked on partitioned\n tables.\nBackground is I had a queue table on v9.6 with fairly high volume\n (>50M items, growth in the 1+M/daily).\nProcessing the queue with FOR UPDATE SKIP LOCKED was reliable but\n traffic volumes on v9.6 and the fact v12 is current let to\n migrating to v12 and using a partitioned table.\nQueue has distinct categories of items, so the table is\n partitioned by list on each category.  Processing in 1 category\n results in it being updated to the next logical category (i.e. it\n will migrate partition once it is processed).\n\nWithin each category, there can be 10'sM rows, so the list\n partition is hash partitioned as well.  I don't think this is the\n issue but is mentioned for completeness.\n\nNow, when processing the queue, there are regular transaction\n aborts with \"tuple to be locked was already moved to another\n partition due to concurrent update\".\nFrom everything I can trace, it really does look like this is\n caused by rows which should be locked/skipped as they are\n processed by a different thread.\nI tried switching 'for update' to 'for key share' and that\n created a cascade of deadlock aborts, so was worse for my\n situation.\n\nFor now, I roll back and repeat the select for update skip locked\n until it succeeds - which it eventually does.\nHowever, it really feels like these should just have been skipped\n by PostgreSQL without the rollback/retry until success.\n\nSo, am I missing something/doing it wrong?  
Or could there be a\n potential issue that needs raised?\nThanks\nJim\n\n\n\n\nOn 30-Jun.-2020 12:10, Gunther Schadow\n wrote:\n\nHi\n all,\n \n\n long time ago I devised with your help a task queuing system which\n uses SELECT ... FOR UPDATE SKIP LOCKED for many parallel workers\n to find tasks in the queue, and it used a partitioned table where\n the hot part of the queue is short and so the query for a job is\n quick and the skip locked locking makes sure that one job is only\n assigned to one worker. And this works pretty well for me, except\n that when we run many workers we find a lot of these failures\n occurring:\n \n\n \"tuple to be locked was already moved to another partition due to\n concurrent update\"\n \n\n This would not exactly look like a bug, because the message says\n \"to be locked\", so at least it's not allowing two workers to lock\n the same tuple. But it seems that the skip-locked mode should not\n make an error out of this, but treat it as the tuple was already\n locked. Why would it want to lock the tuple (representing the job)\n if another worker has already finished his UPDATE of the job to\n mark it as \"done\" (which is what makes the tuple move to the\n \"completed\" partition.)\n \n\n Either the SELECT for jobs to do returned a wrong tuple, which was\n already update, or there is some lapse in the locking.\n \n\n Either way it would seem to be a waste of time throwing all these\n errors when the tuple should not even have been selected for\n update and locking.\n \n\n I wonder if anybody knows anything about that issue? Of course\n you'll want to see the DDL and SQL queries, etc. but you can't\n really try it out unless you do some massively parallel magic. So\n I figured I just ask.\n \n\n regards,\n \n -Gunther", "msg_date": "Tue, 11 Aug 2020 18:58:29 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a known bug with SKIP LOCKED and \"tuple to be locked was\n already moved to another partition due to concurrent update\"?" }, { "msg_contents": "Hi everyone,\n\nIs this a well know bug? I just hit the same issue. 
Below are steps to\nreproduce.\n\n\nCreate data table and partition it by state\n\nCREATE TABLE data (\n id bigserial not null,\n state smallint not null DEFAULT 1,\n updated_at timestamp without time zone default now()\n) partition by list(state);create table data_pending partition of data\nfor values in (1);create table data_processing partition of data for\nvalues in (2);create table data_done partition of data for values in\n(3);\n\nGenerate test data\n\nINSERT INTO transactionsSELECT generate_series(1,1000) AS id, 1 AS state, NOW();\n\nMove data from pending state to processing in batches\n\nUPDATE transactionsSET transactions.state = 2, updated_at = NOW()WHERE id = (\n SELECT id\n FROM transactions\n WHERE state = 1\n LIMIT 10 FOR UPDATE SKIP LOCKED\n)\nRETURNING id;\n\nYou can now process them in application and move to done state when\nfinished.\n\nHowever, this doesn't work as FOR UPDATE SKIP LOCKED fails for partitioned\ntables with following error\n\nERROR: tuple to be locked was already moved to another partition due to\nconcurrent update\n\nHere is full script to test this\n\ncat > test-skip-locked-with-partitions << 'SCRIPT'PSQL_CMD=\"psql -q -U\npostgres\"eval $PSQL_CMD > /dev/null << EOFCREATE TABLE IF NOT EXISTS\ndata ( id bigserial not null, state smallint not null DEFAULT 1,\nupdated_at timestamp without time zone default now()) partition by\nlist(state);CREATE TABLE IF NOT EXISTS data_pending partition of data\nfor values in (1);CREATE TABLE IF NOT EXISTS data_processing partition\nof data for values in (2);CREATE TABLE IF NOT EXISTS data_done\npartition of data for values in (3);INSERT INTO dataSELECT\ngenerate_series(1,1000) AS id, 1 AS state, NOW();EOFfunction run {eval\n$PSQL_CMD > /dev/null << EOF UPDATE data SET state = 2,\nupdated_at = NOW() WHERE id IN ( SELECT id FROM data\n WHERE state = 1 LIMIT 10 FOR UPDATE SKIP LOCKED )\nRETURNING id;EOF}for i in {1..100}; do (run &) doneSCRIPT\nbash test-skip-locked-with-partitions\n\nWhich results in\n\nroot@e92e3fa7fecf:/# bash test-skip-locked-with-partitions\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\nERROR: tuple to be locked was already moved to another partition due\nto concurrent update\n\nThis works well if table is not partitioned. However, query with WHERE\nstate=1 has hard time when there are millions of records. 
Adding index\ndoesn't help as Query Planner tends to ignore it and do Seq Scan anyway.\nWorkaround for now is to create separate queue table, which is like custom\npartitioning.\n\nINSERT SAD CAT PICTURE\n\nhttps://gist.github.com/arvenil/b46e927c943fa7495780ea2ae5492e78\n\n\nBest, Kamil Dziedzic\n\nOn Thu, Mar 11, 2021 at 10:08 PM Jim Jarvie <[email protected]> wrote:\n\n> Hi Gunther & List,\n>\n> I think I have an extremely similar issue and things point in the same\n> direction of a potential issue for skip locked on partitioned tables.\n>\n> Background is I had a queue table on v9.6 with fairly high volume (>50M\n> items, growth in the 1+M/daily).\n>\n> Processing the queue with FOR UPDATE SKIP LOCKED was reliable but traffic\n> volumes on v9.6 and the fact v12 is current let to migrating to v12 and\n> using a partitioned table.\n>\n> Queue has distinct categories of items, so the table is partitioned by\n> list on each category. Processing in 1 category results in it being\n> updated to the next logical category (i.e. it will migrate partition once\n> it is processed).\n>\n> Within each category, there can be 10'sM rows, so the list partition is\n> hash partitioned as well. I don't think this is the issue but is mentioned\n> for completeness.\n>\n> Now, when processing the queue, there are regular transaction aborts with\n> \"tuple to be locked was already moved to another partition due to\n> concurrent update\".\n>\n> From everything I can trace, it really does look like this is caused by\n> rows which should be locked/skipped as they are processed by a different\n> thread.\n>\n> I tried switching 'for update' to 'for key share' and that created a\n> cascade of deadlock aborts, so was worse for my situation.\n>\n> For now, I roll back and repeat the select for update skip locked until it\n> succeeds - which it eventually does.\n>\n> However, it really feels like these should just have been skipped by\n> PostgreSQL without the rollback/retry until success.\n>\n> So, am I missing something/doing it wrong? Or could there be a potential\n> issue that needs raised?\n>\n> Thanks\n>\n> Jim\n>\n>\n>\n> On 30-Jun.-2020 12:10, Gunther Schadow wrote:\n>\n> Hi all,\n>\n> long time ago I devised with your help a task queuing system which uses\n> SELECT ... FOR UPDATE SKIP LOCKED for many parallel workers to find tasks\n> in the queue, and it used a partitioned table where the hot part of the\n> queue is short and so the query for a job is quick and the skip locked\n> locking makes sure that one job is only assigned to one worker. And this\n> works pretty well for me, except that when we run many workers we find a\n> lot of these failures occurring:\n>\n> \"tuple to be locked was already moved to another partition due to\n> concurrent update\"\n>\n> This would not exactly look like a bug, because the message says \"to be\n> locked\", so at least it's not allowing two workers to lock the same tuple.\n> But it seems that the skip-locked mode should not make an error out of\n> this, but treat it as the tuple was already locked. 
Why would it want to\n> lock the tuple (representing the job) if another worker has already\n> finished his UPDATE of the job to mark it as \"done\" (which is what makes\n> the tuple move to the \"completed\" partition.)\n>\n> Either the SELECT for jobs to do returned a wrong tuple, which was already\n> update, or there is some lapse in the locking.\n>\n> Either way it would seem to be a waste of time throwing all these errors\n> when the tuple should not even have been selected for update and locking.\n>\n> I wonder if anybody knows anything about that issue? Of course you'll want\n> to see the DDL and SQL queries, etc. but you can't really try it out unless\n> you do some massively parallel magic. So I figured I just ask.\n>\n> regards,\n> -Gunther\n>\n>\n>\n>\n>\n>\n\n\nHi everyone,Is this a well know bug? I just hit the same issue. Below are steps to reproduce.Create data table and partition it by state\nCREATE TABLE data (\n id bigserial not null,\n state smallint not null DEFAULT 1,\n updated_at timestamp without time zone default now()\n) partition by list(state);\ncreate table data_pending partition of data for values in (1);\ncreate table data_processing partition of data for values in (2);\ncreate table data_done partition of data for values in (3);\nGenerate test data\nINSERT INTO transactions\nSELECT generate_series(1,1000) AS id, 1 AS state, NOW();\nMove data from pending state to processing in batches\nUPDATE transactions\nSET transactions.state = 2, updated_at = NOW()\nWHERE id = (\n SELECT id\n FROM transactions\n WHERE state = 1\n LIMIT 10 FOR UPDATE SKIP LOCKED\n)\nRETURNING id;\nYou can now process them in application and move to done state when finished.\nHowever, this doesn't work as FOR UPDATE SKIP LOCKED fails for partitioned tables with following error\n\nERROR: tuple to be locked was already moved to another partition due to concurrent update\n\nHere is full script to test this\ncat > test-skip-locked-with-partitions << 'SCRIPT'\nPSQL_CMD=\"psql -q -U postgres\"\n\neval $PSQL_CMD > /dev/null << EOF\nCREATE TABLE IF NOT EXISTS data (\n id bigserial not null,\n state smallint not null DEFAULT 1,\n updated_at timestamp without time zone default now()\n) partition by list(state);\nCREATE TABLE IF NOT EXISTS data_pending partition of data for values in (1);\nCREATE TABLE IF NOT EXISTS data_processing partition of data for values in (2);\nCREATE TABLE IF NOT EXISTS data_done partition of data for values in (3);\n\nINSERT INTO data\nSELECT generate_series(1,1000) AS id, 1 AS state, NOW();\nEOF\n\nfunction run {\neval $PSQL_CMD > /dev/null << EOF\n UPDATE data\n SET state = 2, updated_at = NOW()\n WHERE id IN (\n SELECT id\n FROM data\n WHERE state = 1\n LIMIT 10 FOR UPDATE SKIP LOCKED\n )\n RETURNING id;\nEOF\n}\n\nfor i in {1..100}; do (run &) done\nSCRIPT\nbash test-skip-locked-with-partitions\nWhich results in\nroot@e92e3fa7fecf:/# bash test-skip-locked-with-partitions\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition 
due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nERROR: tuple to be locked was already moved to another partition due to concurrent update\nThis works well if table is not partitioned.\nHowever, query with WHERE state=1 has hard time when there are millions of records.\nAdding index doesn't help as Query Planner tends to ignore it and do Seq Scan anyway.\nWorkaround for now is to create separate queue table, which is like custom partitioning.\nINSERT SAD CAT PICTUREhttps://gist.github.com/arvenil/b46e927c943fa7495780ea2ae5492e78Best, Kamil Dziedzic\nOn Thu, Mar 11, 2021 at 10:08 PM Jim Jarvie <[email protected]> wrote:\n\nHi Gunther & List,\nI think I have an extremely similar issue and things point in the\n same direction of a potential issue for skip locked on partitioned\n tables.\nBackground is I had a queue table on v9.6 with fairly high volume\n (>50M items, growth in the 1+M/daily).\nProcessing the queue with FOR UPDATE SKIP LOCKED was reliable but\n traffic volumes on v9.6 and the fact v12 is current let to\n migrating to v12 and using a partitioned table.\nQueue has distinct categories of items, so the table is\n partitioned by list on each category.  Processing in 1 category\n results in it being updated to the next logical category (i.e. it\n will migrate partition once it is processed).\n\nWithin each category, there can be 10'sM rows, so the list\n partition is hash partitioned as well.  I don't think this is the\n issue but is mentioned for completeness.\n\nNow, when processing the queue, there are regular transaction\n aborts with \"tuple to be locked was already moved to another\n partition due to concurrent update\".\nFrom everything I can trace, it really does look like this is\n caused by rows which should be locked/skipped as they are\n processed by a different thread.\nI tried switching 'for update' to 'for key share' and that\n created a cascade of deadlock aborts, so was worse for my\n situation.\n\nFor now, I roll back and repeat the select for update skip locked\n until it succeeds - which it eventually does.\nHowever, it really feels like these should just have been skipped\n by PostgreSQL without the rollback/retry until success.\n\nSo, am I missing something/doing it wrong?  Or could there be a\n potential issue that needs raised?\nThanks\nJim\n\n\n\n\nOn 30-Jun.-2020 12:10, Gunther Schadow\n wrote:\n\nHi\n all,\n \n\n long time ago I devised with your help a task queuing system which\n uses SELECT ... FOR UPDATE SKIP LOCKED for many parallel workers\n to find tasks in the queue, and it used a partitioned table where\n the hot part of the queue is short and so the query for a job is\n quick and the skip locked locking makes sure that one job is only\n assigned to one worker. And this works pretty well for me, except\n that when we run many workers we find a lot of these failures\n occurring:\n \n\n \"tuple to be locked was already moved to another partition due to\n concurrent update\"\n \n\n This would not exactly look like a bug, because the message says\n \"to be locked\", so at least it's not allowing two workers to lock\n the same tuple. But it seems that the skip-locked mode should not\n make an error out of this, but treat it as the tuple was already\n locked. 
Why would it want to lock the tuple (representing the job)\n if another worker has already finished his UPDATE of the job to\n mark it as \"done\" (which is what makes the tuple move to the\n \"completed\" partition.)\n \n\n Either the SELECT for jobs to do returned a wrong tuple, which was\n already update, or there is some lapse in the locking.\n \n\n Either way it would seem to be a waste of time throwing all these\n errors when the tuple should not even have been selected for\n update and locking.\n \n\n I wonder if anybody knows anything about that issue? Of course\n you'll want to see the DDL and SQL queries, etc. but you can't\n really try it out unless you do some massively parallel magic. So\n I figured I just ask.\n \n\n regards,\n \n -Gunther", "msg_date": "Thu, 11 Mar 2021 22:12:37 +0100", "msg_from": "Kamil Dziedzic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a known bug with SKIP LOCKED and \"tuple to be locked was\n already moved to another partition due to concurrent update\"?" }, { "msg_contents": "https://www.postgresql-archive.org/CPU-hogged-by-concurrent-SELECT-FOR-UPDATE-SKIP-LOCKED-td6150480.html\n\nDavid Rowley on 20 Aug 2020-\n\"When updates occur in a non-partitioned table we can follow item\npointer chains to find the live row and check if the WHERE clause\nstill matches to determine if the row should be updated, or in this\ncase just locked since it's a SELECT FOR UPDATE. However, with\npartitioned table, a concurrent UPDATE may have caused the row to have\nbeen moved off to another partition, in which case the tuple's item\npointer cannot point to it since we don't have enough address space,\nwe only have 6 bytes for a TID. To get around the fact that we can't\nfollow these update chains, we just throw the serialization error,\nwhich is what you're getting. Ideally, we'd figure out where the live\nversion of the tuple is and check if it matches the WHERE clause and\nlock it if it does, but we've no means to do that with the current\ndesign.\"\n\nMoving data between partitions is supported, but maybe another partitioning\ndesign is better suited for high concurrency use cases.\n\nhttps://www.postgresql-archive.org/CPU-hogged-by-concurrent-SELECT-FOR-UPDATE-SKIP-LOCKED-td6150480.htmlDavid Rowley on 20 Aug 2020-\"When updates occur in a non-partitioned table we can follow itempointer chains to find the live row and check if the WHERE clausestill matches to determine if the row should be updated, or in thiscase just locked since it's a SELECT FOR UPDATE. However, withpartitioned table, a concurrent UPDATE may have caused the row to havebeen moved off to another partition, in which case the tuple's itempointer cannot point to it since we don't have enough address space,we only have 6 bytes for a TID. To get around the fact that we can'tfollow these update chains, we just throw the serialization error,which is what you're getting. Ideally, we'd figure out where the liveversion of the tuple is and check if it matches the WHERE clause andlock it if it does, but we've no means to do that with the currentdesign.\"Moving data between partitions is supported, but maybe another partitioning design is better suited for high concurrency use cases.", "msg_date": "Fri, 12 Mar 2021 09:22:31 -0700", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a known bug with SKIP LOCKED and \"tuple to be locked was\n already moved to another partition due to concurrent update\"?" 
}, { "msg_contents": "To me it seems like bug because it clearly states it tries to lock on non\nexisting row:\n\"tuple to be locked was already moved to another partition due to\nconcurrent update\"\n\nBut there is SKIP LOCKED clause so why throw an error that it can't lock if\nwe explicitly ask to not bother if you can't lock and just skip it.\n\n\nBest, Saludos, Kamil Dziedzic\n\nOn Fri, Mar 12, 2021, 17:22 Michael Lewis <[email protected]> wrote:\n\n>\n> https://www.postgresql-archive.org/CPU-hogged-by-concurrent-SELECT-FOR-UPDATE-SKIP-LOCKED-td6150480.html\n>\n> David Rowley on 20 Aug 2020-\n> \"When updates occur in a non-partitioned table we can follow item\n> pointer chains to find the live row and check if the WHERE clause\n> still matches to determine if the row should be updated, or in this\n> case just locked since it's a SELECT FOR UPDATE. However, with\n> partitioned table, a concurrent UPDATE may have caused the row to have\n> been moved off to another partition, in which case the tuple's item\n> pointer cannot point to it since we don't have enough address space,\n> we only have 6 bytes for a TID. To get around the fact that we can't\n> follow these update chains, we just throw the serialization error,\n> which is what you're getting. Ideally, we'd figure out where the live\n> version of the tuple is and check if it matches the WHERE clause and\n> lock it if it does, but we've no means to do that with the current\n> design.\"\n>\n> Moving data between partitions is supported, but maybe another\n> partitioning design is better suited for high concurrency use cases.\n>\n\nTo me it seems like bug because it clearly states it tries to lock on non existing row:\"tuple to be locked was already moved to another partition due to concurrent update\"But there is SKIP LOCKED clause so why throw an error that it can't lock if we explicitly ask to not bother if you can't lock and just skip it.Best, Saludos, Kamil DziedzicOn Fri, Mar 12, 2021, 17:22 Michael Lewis <[email protected]> wrote:https://www.postgresql-archive.org/CPU-hogged-by-concurrent-SELECT-FOR-UPDATE-SKIP-LOCKED-td6150480.htmlDavid Rowley on 20 Aug 2020-\"When updates occur in a non-partitioned table we can follow itempointer chains to find the live row and check if the WHERE clausestill matches to determine if the row should be updated, or in thiscase just locked since it's a SELECT FOR UPDATE. However, withpartitioned table, a concurrent UPDATE may have caused the row to havebeen moved off to another partition, in which case the tuple's itempointer cannot point to it since we don't have enough address space,we only have 6 bytes for a TID. To get around the fact that we can'tfollow these update chains, we just throw the serialization error,which is what you're getting. Ideally, we'd figure out where the liveversion of the tuple is and check if it matches the WHERE clause andlock it if it does, but we've no means to do that with the currentdesign.\"Moving data between partitions is supported, but maybe another partitioning design is better suited for high concurrency use cases.", "msg_date": "Fri, 12 Mar 2021 17:30:23 +0100", "msg_from": "Kamil Dziedzic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a known bug with SKIP LOCKED and \"tuple to be locked was\n already moved to another partition due to concurrent update\"?" } ]
[ { "msg_contents": "Hello list,\n\nI am having issues with performance inserting data in Postgres and would\nlike\nto ask for help figuring out the problem as I ran out of ideas.\n\nI have a process that generates a CSV file with 1 million records in it\nevery\n5 minutes and each file is about 240MB. I need this data to be inserted\ninto a\ntable on Postgres. A peculiarity about it is that the data on these files\nmight be duplicate. I might have a row on the first file that is also\npresent\non the second or the third and so on. I don't care about the duplicates, so\nI\nhave a unique constraint on my table to discard those.\n\nThe data in the CSV is pretty simple:\n\n```\nuser_id,name,url\n```\n\nThe table is defined like this:\n\n```\ncreate unlogged table users_no_dups (\n created_ts timestamp without time zone,\n user_id bigint not null,\n name text,\n url text,\n unique(user_id)\n);\n```\n\nTable is created as `unlogged` as a way to improve performance. I am aware\nof the consequences of this possibly causing data loss.\n\nMy process for inserting data into the table is as follows:\n\n* Create an intermediary table `users` as follows:\n\n```\ncreate unlogged table users (\n created_ts timestamp without time zone default current_timestamp,\n user_id bigint,\n name text,\n url text\n) with (autovacuum_enabled = false, toast.autovacuum_enabled = false)\n```\n\n* Use `COPY` to copy the data from the CSV file into an intermediary table\n\n```\ncopy users(user_id, name, url) from\n'myfile.csv' with(format csv, header true, delimiter ',', quote '\"', escape\n'\\\\')\n```\n\n* Insert the data from the `users` table into the `users_no_dups` table\n\n```\ninsert into users_no_dups (\n created_ts,\n user_id,\n name,\n url\n) (\n select\n created_ts,\n user_id,\n name,\n url\n from\n users\n) on conflict do nothing\n```\n\n* Drop the `users` table\n\n* Repeat the whole thing for the next file.\n\n\nRunning the above loop worked fine for about 12 hours. Each file was taking\nabout 30 seconds to be processed. About 4 seconds to create the `users`\ntable\nand have the CSV data loaded into it and anything between 20 and 30 seconds\nto\ninsert the data from `users` into `users_no_dups`.\n\nAll of a sudden inserting from `users` into `users_no_dups` started taking\n20+\nminutes.\n\nI recreated the table with a `fillfactor` of `30` and tried again and things\nwere running well again with that same 30 seconds for processing. Again\nafter\nabout 12 hours, things got really slow.\n\nRecreating the table now isn't really providing any improvements. I tried\nrecreating it with a `fillfactor` of `10`, but it was taking too long and\ntoo\nmuch space (the table had 300GB with the fillfactor set to 30; with it set\nto\n10 it went up to almost 1TB).\n\nWatching on iotop, the `INSERT` statement `WRITE` speed is always between 20\nand 100 K/s now. When I first started inserting the `WRITE` speed is always\nabove 100M/s.\n\nIf I try to copy the `users_no_dups` table to another table (say\nusers_no_dups_2 with the same structure), the `WRITE` speed also goes to\n100M/s or more until it gets to the last 2 GB of data being copied. 
Then\nspeed\ngoes down to the 20 to 100K/s again and stays there (I know this from\nwatching\n`iotop`).\n\nI have the following custom configuration on my postgres installation that\nI've done in order to try to improve the performance:\n\n```\nssl = off\nshared_buffers = 8GB\nwork_mem = 12GB\nmaintenance_work_mem = 12GB\nmax_stack_depth = 4MB\nsynchronous_commit = off\nwal_writer_flush_after = 128MB\nmax_wal_size = 32GB\nmin_wal_size = 80MB\neffective_cache_size = 96GB\n```\n\nInformation about the machine:\n\n```\nProcessor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12\nthreads)\nRAM: 256GB\n\n\nDisk1: 2TB SSD SATA-3 Samsung Evo 860\nDisk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\nDisk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\n\nDisk1 and Disk2 are configured as a single logical volume.\n\nTable `users_no_dups` is in a tablespace on `Disk3`. The defaul tablespace\nis\nin the logical volume composed by `Disk1` and `Disk2`.\n\nOS: Ubuntu Linux 19.10\nPostgres version: PostgreSQL 11.7 (Ubuntu 11.7-0ubuntu0.19.10.1) on\nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1\n20191008, 64-bit\n```\n\nAny ideas why I am seeing this decrease in performance with the insert or\nany\nsuggestions on how I can try to figure this out?\n\nSorry for the wall of text. Just trying to give as much info as I have.\n\nHenrique\n\nHello list,I am having issues with performance inserting data in Postgres and would liketo ask for help figuring out the problem as I ran out of ideas.I have a process that generates a CSV file with 1 million records in it every5 minutes and each file is about 240MB. I need this data to be inserted into atable on Postgres. A peculiarity about it is that the data on these filesmight be duplicate. I might have a row on the first file that is also presenton the second or the third and so on. I don't care about the duplicates, so Ihave a unique constraint on my table to discard those.The data in the CSV is pretty simple:```user_id,name,url```The table is defined like this:```create unlogged table users_no_dups (    created_ts timestamp without time zone,    user_id bigint not null,    name text,    url text,    unique(user_id));```Table is created as `unlogged` as a way to improve performance. I am aware of the consequences of this possibly causing data loss.My process for inserting data into the table is as follows:* Create an intermediary table `users` as follows:```create unlogged table users (    created_ts timestamp without time zone default current_timestamp,    user_id bigint,    name text,    url text) with (autovacuum_enabled = false, toast.autovacuum_enabled = false)```* Use `COPY` to copy the data from the CSV file into an intermediary table```copy users(user_id, name, url) from'myfile.csv' with(format csv, header true, delimiter ',', quote '\"', escape '\\\\')```* Insert the data from the `users` table into the `users_no_dups` table```insert into users_no_dups (    created_ts,     user_id,     name,     url) (    select         created_ts,        user_id,         name,         url    from         users) on conflict do nothing```* Drop the `users` table* Repeat the whole thing for the next file.Running the above loop worked fine for about 12 hours. Each file was takingabout 30 seconds to be processed. 
About 4 seconds to create the `users` tableand have the CSV data loaded into it and anything between 20 and 30 seconds toinsert the data from `users` into `users_no_dups`.All of a sudden inserting from `users` into `users_no_dups` started taking 20+minutes.I recreated the table with a `fillfactor` of `30` and tried again and thingswere running well again with that same 30 seconds for processing. Again afterabout 12 hours, things got really slow.Recreating the table now isn't really providing any improvements. I triedrecreating it with a `fillfactor` of `10`, but it was taking too long and toomuch space (the table had 300GB with the fillfactor set to 30; with it set to10 it went up to almost 1TB).Watching on iotop, the `INSERT` statement `WRITE` speed is always between 20and 100 K/s now. When I first started inserting  the `WRITE` speed is alwaysabove 100M/s.If I try to copy the `users_no_dups` table to another table (sayusers_no_dups_2 with the same structure), the `WRITE` speed also goes to100M/s or more until it gets to the last 2 GB of data being copied. Then speedgoes down to the 20 to 100K/s again and stays there (I know this from watching`iotop`).I have the following custom configuration on my postgres installation thatI've done in order to try to improve the performance:```ssl = offshared_buffers = 8GBwork_mem = 12GBmaintenance_work_mem = 12GBmax_stack_depth = 4MBsynchronous_commit = offwal_writer_flush_after = 128MBmax_wal_size = 32GBmin_wal_size = 80MBeffective_cache_size = 96GB```Information about the machine:```Processor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12 threads)RAM: 256GBDisk1: 2TB SSD SATA-3 Samsung Evo 860Disk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPMDisk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPMDisk1 and Disk2 are configured as a single logical volume.Table `users_no_dups` is in a tablespace on `Disk3`. The defaul tablespace isin the logical volume composed by `Disk1` and `Disk2`.OS: Ubuntu Linux 19.10Postgres version: PostgreSQL 11.7 (Ubuntu 11.7-0ubuntu0.19.10.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit```Any ideas why I am seeing this decrease in performance with the insert or anysuggestions on how I can try to figure this out?Sorry for the wall of text. Just trying to give as much info as I have.Henrique", "msg_date": "Mon, 13 Jul 2020 10:23:16 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Sudden insert performance degradation" }, { "msg_contents": "Hi Henrique,\r\n\r\nOn 13. Jul 2020, at 16:23, Henrique Montenegro <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n[...]\r\n\r\n* Insert the data from the `users` table into the `users_no_dups` table\r\n\r\n```\r\ninsert into users_no_dups (\r\n created_ts,\r\n user_id,\r\n name,\r\n url\r\n) (\r\n select\r\n created_ts,\r\n user_id,\r\n name,\r\n url\r\n from\r\n users\r\n) on conflict do nothing\r\n```\r\n\r\nHow do you check contraints here? Is this enforced with UK/PK?\r\n\r\nRunning the above loop worked fine for about 12 hours. Each file was taking\r\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\r\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\r\ninsert the data from `users` into `users_no_dups`.\r\n\r\nDo you see anything suspicious in the logs, i.e. something in the realms of running out of transaction IDs?\r\n\r\n[...]\r\n\r\nRecreating the table now isn't really providing any improvements. 
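A quick way to follow up on the transaction-ID question above is to check how far each database's oldest unfrozen XID has aged; values well below the ~200 million default freeze threshold would make wraparound pressure an unlikely explanation. A sketch:

```
select datname, age(datfrozenxid) as xid_age
from pg_database
order by xid_age desc;
```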
I tried\r\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\r\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\r\n10 it went up to almost 1TB).\r\n\r\nTo me it sounds like the UK/PK is getting too much to write. A possible solution could be to start partitioning the table.\r\n\r\n[...]\r\n```\r\nssl = off\r\nshared_buffers = 8GB\r\nwork_mem = 12GB\r\nmaintenance_work_mem = 12GB\r\nmax_stack_depth = 4MB\r\nsynchronous_commit = off\r\nwal_writer_flush_after = 128MB\r\nmax_wal_size = 32GB\r\nmin_wal_size = 80MB\r\neffective_cache_size = 96GB\r\n```\r\n\r\nAnother suggestion would be to increase the min_wal_size here, but since you use UNLOGGED tables it does not matter much.\r\n\r\n\r\nInformation about the machine:\r\n\r\n```\r\nProcessor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12 threads)\r\nRAM: 256GB\r\n\r\n\r\nDisk1: 2TB SSD SATA-3 Samsung Evo 860\r\nDisk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\r\nDisk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\r\n\r\nDisk1 and Disk2 are configured as a single logical volume.\r\n\r\nJust curious: does that mean you mix up SSD + HDD?\r\n\r\nCheers,\r\nSebastian\r\n\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n\n\n\n\n\n\r\nHi Henrique,\r\n\n\n\nOn 13. Jul 2020, at 16:23, Henrique Montenegro <[email protected]> wrote:\n\n\n[...]\n\r\n* Insert the data from the `users` table into the `users_no_dups` table\n\r\n```\r\ninsert into users_no_dups (\r\n    created_ts, \r\n    user_id, \r\n    name, \r\n    url\r\n) (\r\n    select \r\n        created_ts,\r\n        user_id, \r\n        name, \r\n        url\r\n    from \r\n        users\r\n) on conflict do nothing\r\n```\n\n\n\n\n\nHow do you check contraints here? Is this enforced with UK/PK?\n\n\n\nRunning the above loop worked fine for about 12 hours. Each file was taking\r\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\r\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\r\ninsert the data from `users` into `users_no_dups`.\n\n\n\n\n\nDo you see anything suspicious in the logs, i.e. something in the realms of running out of transaction IDs?\n\n\n[...]\n\n\n\n\n\n\nRecreating the table now isn't really providing any improvements. I tried\r\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\r\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\r\n10 it went up to almost 1TB).\n\n\n\n\n\nTo me it sounds like the UK/PK is getting too much to write. 
A possible solution could be to start partitioning the table.\n\n\n\n\n[...]\r\n```\r\nssl = off\r\nshared_buffers = 8GB\r\nwork_mem = 12GB\r\nmaintenance_work_mem = 12GB\r\nmax_stack_depth = 4MB\r\nsynchronous_commit = off\r\nwal_writer_flush_after = 128MB\r\nmax_wal_size = 32GB\r\nmin_wal_size = 80MB\r\neffective_cache_size = 96GB\r\n```\n\n\n\n\n\nAnother suggestion would be to increase the min_wal_size here, but since you use UNLOGGED tables it does not matter much.\n\n\n\n\r\nInformation about the machine:\n\r\n```\r\nProcessor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12 threads)\r\nRAM: 256GB\n\n\r\nDisk1: 2TB SSD SATA-3 Samsung Evo 860\r\nDisk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\r\nDisk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\n\r\nDisk1 and Disk2 are configured as a single logical volume.\n\n\n\n\n\nJust curious: does that mean you mix up SSD + HDD?\n\n\nCheers,\nSebastian\n\n\n\n\n\n\r\n--\n\r\nSebastian Dressler, Solution Architect \r\n+49 30 994 0496 72 | [email protected] \n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B", "msg_date": "Mon, 13 Jul 2020 15:20:27 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "Is this an insert only table and perhaps not being picked up by autovacuum?\nIf so, try a manual \"vacuum analyze\" before/after each batch run perhaps.\nYou don't mention updates, but also have been adjusting fillfactor so I am\nnot not sure.\n\nIs this an insert only table and perhaps not being picked up by autovacuum? If so, try a manual \"vacuum analyze\" before/after each batch run perhaps. You don't mention updates, but also have been adjusting fillfactor so I am not not sure.", "msg_date": "Mon, 13 Jul 2020 10:27:39 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]>\nwrote:\n\n> Hi Henrique,\n>\n> On 13. Jul 2020, at 16:23, Henrique Montenegro <[email protected]> wrote:\n>\n> [...]\n>\n> * Insert the data from the `users` table into the `users_no_dups` table\n>\n> ```\n> insert into users_no_dups (\n> created_ts,\n> user_id,\n> name,\n> url\n> ) (\n> select\n> created_ts,\n> user_id,\n> name,\n> url\n> from\n> users\n> ) on conflict do nothing\n> ```\n>\n>\n> How do you check contraints here? Is this enforced with UK/PK?\n>\n\nThe Unique Key is supposed to to the constraint enforcing here. The `users`\ntable will have data that is duplicate and the maximum number of records on\nit is 1 million. Then I just try to insert it into the `users_no_dups`\ntable with the `on conflict do nothing` to ignore the duplicates and\ndiscard them.\n\n\n\n> Running the above loop worked fine for about 12 hours. Each file was taking\n> about 30 seconds to be processed. About 4 seconds to create the `users`\n> table\n> and have the CSV data loaded into it and anything between 20 and 30\n> seconds to\n> insert the data from `users` into `users_no_dups`.\n>\n>\n> Do you see anything suspicious in the logs, i.e. 
something in the realms\n> of running out of transaction IDs?\n>\n\nI set the log to debug1. I haven't seen anything that called my attention,\nbut I am not really sure what to look for, so perhaps I missed it. Any\nsuggestions on what to look for or any specific log configuration to do?\n\n\n>\n> [...]\n>\n>\n> Recreating the table now isn't really providing any improvements. I tried\n> recreating it with a `fillfactor` of `10`, but it was taking too long and\n> too\n> much space (the table had 300GB with the fillfactor set to 30; with it set\n> to\n> 10 it went up to almost 1TB).\n>\n>\n> To me it sounds like the UK/PK is getting too much to write. A possible\n> solution could be to start partitioning the table.\n>\n\nI thought about partitioning it, but I can't figure out on what. The\n`user_id` column is a number that is somewhat random so I don't know what\nkinds of range I would use for it. I will try to look at the values again\nand see if there is something that I could perhaps use as a range. Any\nother suggestions?\n\n\n>\n> [...]\n> ```\n> ssl = off\n> shared_buffers = 8GB\n> work_mem = 12GB\n> maintenance_work_mem = 12GB\n> max_stack_depth = 4MB\n> synchronous_commit = off\n> wal_writer_flush_after = 128MB\n> max_wal_size = 32GB\n> min_wal_size = 80MB\n> effective_cache_size = 96GB\n> ```\n>\n>\n> Another suggestion would be to increase the min_wal_size here, but since\n> you use UNLOGGED tables it does not matter much.\n>\n>\n> Information about the machine:\n>\n> ```\n> Processor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12\n> threads)\n> RAM: 256GB\n>\n>\n> Disk1: 2TB SSD SATA-3 Samsung Evo 860\n> Disk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\n> Disk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\n>\n> Disk1 and Disk2 are configured as a single logical volume.\n>\n>\n> Just curious: does that mean you mix up SSD + HDD?\n>\n\nYeah, I did that. Probably not very smart of me. I plan on undoing it soon.\nI assumed that is not what is causing my issue since the tablespace where\nthe table is stored is on `Disk3` which is not part of the Logical Volume.\n\n\n>\n> Cheers,\n> Sebastian\n>\n>\n> --\n>\n> Sebastian Dressler, Solution Architect\n> +49 30 994 0496 72 | [email protected]\n>\n> Swarm64 AS\n> Parkveien 41 B | 0258 Oslo | Norway\n> Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\n> CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender\n> (Styrets Leder): Dr. Sverre Munck\n>\n> Swarm64 AS Zweigstelle Hive\n> Ullsteinstr. 120 | 12109 Berlin | Germany\n> Registered at Amtsgericht Charlottenburg - HRB 154382 B\n>\n>\nThanks!\n\nHenrique\n\nOn Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]> wrote:\n\nHi Henrique,\n\n\n\nOn 13. Jul 2020, at 16:23, Henrique Montenegro <[email protected]> wrote:\n\n\n[...]\n\n* Insert the data from the `users` table into the `users_no_dups` table\n\n```\ninsert into users_no_dups (\n    created_ts, \n    user_id, \n    name, \n    url\n) (\n    select \n        created_ts,\n        user_id, \n        name, \n        url\n    from \n        users\n) on conflict do nothing\n```\n\n\n\n\n\nHow do you check contraints here? Is this enforced with UK/PK? The Unique Key is supposed to to the constraint enforcing here. The `users` table will have data that is duplicate and the maximum number of records on it is 1 million. Then I just try to insert it into the `users_no_dups` table with the `on conflict do nothing` to ignore the duplicates and discard them. 
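To make the UK/PK point concrete, the catalog shows which index actually enforces the unique(user_id) constraint that ON CONFLICT arbitrates against. A minimal check, assuming the table name from the original post:

```
-- contype 'u' = unique constraint, 'p' = primary key; conindid is the enforcing index.
select conname, contype, conindid::regclass as backing_index
from pg_constraint
where conrelid = 'users_no_dups'::regclass;
```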
\n\n\n\nRunning the above loop worked fine for about 12 hours. Each file was taking\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\ninsert the data from `users` into `users_no_dups`.\n\n\n\n\n\nDo you see anything suspicious in the logs, i.e. something in the realms of running out of transaction IDs?I set the log to debug1. I haven't seen anything that called my attention, but I am not really sure what to look for, so perhaps I missed it. Any suggestions on what to look for or any specific log configuration to do? \n\n\n[...]\n\n\n\n\n\n\nRecreating the table now isn't really providing any improvements. I tried\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\n10 it went up to almost 1TB).\n\n\n\n\n\nTo me it sounds like the UK/PK is getting too much to write. A possible solution could be to start partitioning the table.I thought about partitioning it, but I can't figure out on what. The `user_id` column is a number that is somewhat random so I don't know what kinds of range I would use for it. I will try to look at the values again and see if there is something that I could perhaps use as a range. Any other suggestions? \n\n\n\n\n[...]\n```\nssl = off\nshared_buffers = 8GB\nwork_mem = 12GB\nmaintenance_work_mem = 12GB\nmax_stack_depth = 4MB\nsynchronous_commit = off\nwal_writer_flush_after = 128MB\nmax_wal_size = 32GB\nmin_wal_size = 80MB\neffective_cache_size = 96GB\n```\n\n\n\n\n\nAnother suggestion would be to increase the min_wal_size here, but since you use UNLOGGED tables it does not matter much.\n\n\n\n\nInformation about the machine:\n\n```\nProcessor: 2x Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (6 cores each, 12 threads)\nRAM: 256GB\n\n\nDisk1: 2TB SSD SATA-3 Samsung Evo 860\nDisk2: 6TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\nDisk3: 8TB HDD SATA-3 Seagate Exos Enterprise 7200RPM\n\nDisk1 and Disk2 are configured as a single logical volume.\n\n\n\n\n\nJust curious: does that mean you mix up SSD + HDD?Yeah, I did that. Probably not very smart of me. I plan on undoing it soon. I assumed that is not what is causing my issue since the tablespace where the table is stored is on `Disk3` which is not part of the Logical Volume. \n\n\nCheers,\nSebastian\n\n\n\n\n\n\n--\n\nSebastian Dressler, Solution Architect \n+49 30 994 0496 72 | [email protected] \n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 120 | 12109 Berlin | Germany\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\n\nThanks!Henrique", "msg_date": "Mon, 13 Jul 2020 12:42:37 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:28 PM Michael Lewis <[email protected]> wrote:\n\n> Is this an insert only table and perhaps not being picked up by\n> autovacuum? If so, try a manual \"vacuum analyze\" before/after each batch\n> run perhaps. You don't mention updates, but also have been adjusting\n> fillfactor so I am not not sure.\n>\n\nIt is mostly an insert table. 
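Spelled out, the autovacuum question above can be checked, and the manual suggestion applied, roughly like this (a sketch only; the staging table has autovacuum disabled, so the target table is what matters here):

```
-- Has autovacuum/autoanalyze ever touched the target table?
select relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
from pg_stat_user_tables
where relname = 'users_no_dups';

-- Manual equivalent of the suggestion, run between batch loads.
vacuum (analyze) users_no_dups;
```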
Only queries I need to run on it are to\naggegate the count of IDs inserted per hour.\n\nI did the vacuuming of the table; Didn't help. I tried both vacuum(analyze)\nand vacuum(full) ... took a looooong time and no improvements.\n\nI adjusted the `fillfactor` because the documentation didn't make it too\nclear if by `updates to the table` it meant updating the value of existing\nrows, or updating the table itself (which in my understanding would mean\nthat adding new data into it would cause the table to be updated). I just\nstarted messing with the `fillfactor` to see if that would give me any\nimprovements. It seems to me it did since the first time I created the\ntable, I didn't change the fillfactor and stumbled upon the performance\nissue after 12 hours; I then recreated the table with a fillfactor of 30\nand was good again for about 12 hours more. Could be a coincidence though.\nI tried to recreate the table using fillfactor 10, but it was taking too\nlong to add the data to it (12+ hours running and it wasn't done yet and\nthe WRITE speed on iotop was around 20K/s .... I ended up just canceling\nit).\n\nAs of now, the table has about 280 million records in it.\n\nHenrique\n\nOn Mon, Jul 13, 2020 at 12:28 PM Michael Lewis <[email protected]> wrote:Is this an insert only table and perhaps not being picked up by autovacuum? If so, try a manual \"vacuum analyze\" before/after each batch run perhaps. You don't mention updates, but also have been adjusting fillfactor so I am not not sure.It is mostly an insert table. Only queries I need to run on it are to aggegate the count of IDs inserted per hour.I did the vacuuming of the table; Didn't help. I tried both vacuum(analyze) and vacuum(full) ... took a looooong time and no improvements. I adjusted the `fillfactor` because the documentation didn't make it too clear if by `updates to the table` it meant updating the value of existing rows, or updating the table itself (which in my understanding would mean that adding new data into it would cause the table to be updated). I just started messing with the `fillfactor` to see if that would give me any improvements. It seems to me it did since the first time I created the table, I didn't change the fillfactor and stumbled upon the performance issue after 12 hours; I then recreated the table with a fillfactor of 30 and was good again for about 12 hours more. Could be a coincidence though. I tried to recreate the table using fillfactor 10, but it was taking too long to add the data to it (12+ hours running and it wasn't done yet and the WRITE speed on iotop was around 20K/s .... I ended up just canceling it).As of now, the table has about 280 million records in it. Henrique", "msg_date": "Mon, 13 Jul 2020 12:48:53 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "Hi Henrique,\r\n\r\nOn 13. Jul 2020, at 18:42, Henrique Montenegro <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nOn Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nRunning the above loop worked fine for about 12 hours. Each file was taking\r\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\r\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\r\ninsert the data from `users` into `users_no_dups`.\r\n\r\nDo you see anything suspicious in the logs, i.e. 
something in the realms of running out of transaction IDs?\r\n\r\nI set the log to debug1. I haven't seen anything that called my attention, but I am not really sure what to look for, so perhaps I missed it. Any suggestions on what to look for or any specific log configuration to do?\r\n\r\nNot necessarily, if you'd run out of tx IDs you would notice that cleary, I guess. I also think that this is not the issue.\r\n\r\n\r\n\r\n[...]\r\n\r\nRecreating the table now isn't really providing any improvements. I tried\r\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\r\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\r\n10 it went up to almost 1TB).\r\n\r\nTo me it sounds like the UK/PK is getting too much to write. A possible solution could be to start partitioning the table.\r\n\r\nI thought about partitioning it, but I can't figure out on what. The `user_id` column is a number that is somewhat random so I don't know what kinds of range I would use for it. I will try to look at the values again and see if there is something that I could perhaps use as a range. Any other suggestions?\r\n\r\nDepending on granularity, maybe partition on `created_ts`?\r\n\r\nCheers,\r\nSebastian\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n\n\n\n\n\n\r\nHi Henrique,\n\n\nOn 13. Jul 2020, at 18:42, Henrique Montenegro <[email protected]> wrote:\n\n\nOn Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]> wrote:\n\n\n\n\n\n\n\n\nRunning the above loop worked fine for about 12 hours. Each file was taking\r\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\r\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\r\ninsert the data from `users` into `users_no_dups`.\n\n\n\n\n\nDo you see anything suspicious in the logs, i.e. something in the realms of running out of transaction IDs?\n\n\n\n\n\n\nI set the log to debug1. I haven't seen anything that called my attention, but I am not really sure what to look for, so perhaps I missed it. Any suggestions on what to look for or any specific log configuration to do?\n\n\n\n\n\n\nNot necessarily, if you'd run out of tx IDs you would notice that cleary, I guess. I also think that this is not the issue.\n\n\n\n\n \n\n\n\n\n\n[...]\n\n\n\n\n\n\nRecreating the table now isn't really providing any improvements. I tried\r\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\r\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\r\n10 it went up to almost 1TB).\n\n\n\n\n\nTo me it sounds like the UK/PK is getting too much to write. A possible solution could be to start partitioning the table.\n\n\n\n\n\n\nI thought about partitioning it, but I can't figure out on what. The `user_id` column is a number that is somewhat random so I don't know what kinds of range I would use for it. I will try to look at the values again and see if there is something\r\n that I could perhaps use as a range. 
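Since user_id is described as essentially random, hash partitioning avoids having to invent meaningful ranges at all; PostgreSQL 11 supports it, and the unique constraint is allowed because it includes the partition key. A rough sketch, with the partition count of 64 chosen purely for illustration (persistence and storage options omitted):

```
create table users_no_dups (
    created_ts timestamp without time zone,
    user_id    bigint not null,
    name       text,
    url        text,
    unique (user_id)            -- must include the partition key column
) partition by hash (user_id);

create table users_no_dups_p00 partition of users_no_dups
    for values with (modulus 64, remainder 0);
create table users_no_dups_p01 partition of users_no_dups
    for values with (modulus 64, remainder 1);
-- ...and so on, one partition per remainder up to 63
```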
Any other suggestions?\n\n\n\n\n\n\nDepending on granularity, maybe partition on `created_ts`? \n\n\nCheers,\nSebastian\n\n\n\r\n--\n\r\nSebastian Dressler, Solution Architect \r\n+49 30 994 0496 72 | [email protected] \n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B", "msg_date": "Mon, 13 Jul 2020 16:50:55 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Mon, Jul 13, 2020 at 12:50 PM Sebastian Dressler <[email protected]>\nwrote:\n\n> Hi Henrique,\n>\n> On 13. Jul 2020, at 18:42, Henrique Montenegro <[email protected]> wrote:\n>\n> On Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]>\n> wrote:\n>\n>\n>> Running the above loop worked fine for about 12 hours. Each file was\n>> taking\n>> about 30 seconds to be processed. About 4 seconds to create the `users`\n>> table\n>> and have the CSV data loaded into it and anything between 20 and 30\n>> seconds to\n>> insert the data from `users` into `users_no_dups`.\n>>\n>>\n>> Do you see anything suspicious in the logs, i.e. something in the realms\n>> of running out of transaction IDs?\n>>\n>\n> I set the log to debug1. I haven't seen anything that called my attention,\n> but I am not really sure what to look for, so perhaps I missed it. Any\n> suggestions on what to look for or any specific log configuration to do?\n>\n>\n> Not necessarily, if you'd run out of tx IDs you would notice that cleary,\n> I guess. I also think that this is not the issue.\n>\n>\n>\n>>\n>> [...]\n>>\n>>\n>> Recreating the table now isn't really providing any improvements. I tried\n>> recreating it with a `fillfactor` of `10`, but it was taking too long and\n>> too\n>> much space (the table had 300GB with the fillfactor set to 30; with it\n>> set to\n>> 10 it went up to almost 1TB).\n>>\n>>\n>> To me it sounds like the UK/PK is getting too much to write. A possible\n>> solution could be to start partitioning the table.\n>>\n>\n> I thought about partitioning it, but I can't figure out on what. The\n> `user_id` column is a number that is somewhat random so I don't know what\n> kinds of range I would use for it. I will try to look at the values again\n> and see if there is something that I could perhaps use as a range. Any\n> other suggestions?\n>\n>\n> Depending on granularity, maybe partition on `created_ts`?\n>\n\nI could give it a try. The reason I didn't try that yet was that I thought\nthat since the UK is on the `user_id` column it wouldn't give me any\nbenefit, but I can't really justify why I was thinking that. I would assume\nthat the constraint would be validated against the index and not the whole\ntable, so this might work. I will give it a try.\n\nThanks!\n\nHenrique\n\n\n>\n> Cheers,\n> Sebastian\n>\n> --\n>\n> Sebastian Dressler, Solution Architect\n> +49 30 994 0496 72 | [email protected]\n>\n> Swarm64 AS\n> Parkveien 41 B | 0258 Oslo | Norway\n> Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\n> CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender\n> (Styrets Leder): Dr. Sverre Munck\n>\n> Swarm64 AS Zweigstelle Hive\n> Ullsteinstr. 
120 | 12109 Berlin | Germany\n> Registered at Amtsgericht Charlottenburg - HRB 154382 B\n>\n>\n\nOn Mon, Jul 13, 2020 at 12:50 PM Sebastian Dressler <[email protected]> wrote:\n\nHi Henrique,\n\n\nOn 13. Jul 2020, at 18:42, Henrique Montenegro <[email protected]> wrote:\n\n\nOn Mon, Jul 13, 2020 at 11:20 AM Sebastian Dressler <[email protected]> wrote:\n\n\n\n\n\n\n\n\nRunning the above loop worked fine for about 12 hours. Each file was taking\nabout 30 seconds to be processed. About 4 seconds to create the `users` table\nand have the CSV data loaded into it and anything between 20 and 30 seconds to\ninsert the data from `users` into `users_no_dups`.\n\n\n\n\n\nDo you see anything suspicious in the logs, i.e. something in the realms of running out of transaction IDs?\n\n\n\n\n\n\nI set the log to debug1. I haven't seen anything that called my attention, but I am not really sure what to look for, so perhaps I missed it. Any suggestions on what to look for or any specific log configuration to do?\n\n\n\n\n\n\nNot necessarily, if you'd run out of tx IDs you would notice that cleary, I guess. I also think that this is not the issue.\n\n\n\n\n \n\n\n\n\n\n[...]\n\n\n\n\n\n\nRecreating the table now isn't really providing any improvements. I tried\nrecreating it with a `fillfactor` of `10`, but it was taking too long and too\nmuch space (the table had 300GB with the fillfactor set to 30; with it set to\n10 it went up to almost 1TB).\n\n\n\n\n\nTo me it sounds like the UK/PK is getting too much to write. A possible solution could be to start partitioning the table.\n\n\n\n\n\n\nI thought about partitioning it, but I can't figure out on what. The `user_id` column is a number that is somewhat random so I don't know what kinds of range I would use for it. I will try to look at the values again and see if there is something\n that I could perhaps use as a range. Any other suggestions?\n\n\n\n\n\n\nDepending on granularity, maybe partition on `created_ts`? I could give it a try. The reason I didn't try that yet was that I thought that since the UK is on the `user_id` column it wouldn't give me any benefit, but I can't really justify why I was thinking that. I would assume that the constraint would be validated against the index and not the whole table, so this might work. I will give it a try.Thanks!Henrique \n\n\nCheers,\nSebastian\n\n\n\n--\n\nSebastian Dressler, Solution Architect \n+49 30 994 0496 72 | [email protected] \n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 
120 | 12109 Berlin | Germany\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B", "msg_date": "Mon, 13 Jul 2020 13:02:09 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]>\nwrote:\n\ninsert into users_no_dups (\n> created_ts,\n> user_id,\n> name,\n> url\n> ) (\n> select\n> created_ts,\n> user_id,\n> name,\n> url\n> from\n> users\n> ) on conflict do nothing\n>\n\nOnce the size of the only index exceeds shared_buffers by a bit (the amount\nof \"a bit\" depends on your RAM, kernel version, settings\nfor dirty_background_ratio, dirty_expire_centisecs, and probably other\nthings, and is not easy to predict) the performance falls off a cliff when\ninserting values in a random order. Every insert dirties a random index\nleaf page, which quickly gets evicted from shared_buffers to make room for\nother random leaf pages to be read in, and then turns into flush calls when\nthe kernel freaks out about the amount and age of dirty pages held in\nmemory.\n\nWhat happens if you add an \"ORDER BY user_id\" to your above select?\n\n\n> shared_buffers = 8GB\n> RAM: 256GB\n>\n\nOr, crank up shared_buffers by a lot. Like, beyond the size of the growing\nindex, or up to 240GB if the index ever becomes larger than that. And make\nthe time between checkpoints longer. If the dirty buffers are retained in\nshared_buffers longer, chances of them getting dirtied repeatedly\nbetween writes is much higher than if you just toss them to the kernel and\nhope for the best.\n\nCheers,\n\nJeff\n\nOn Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]> wrote:insert into users_no_dups (    created_ts,     user_id,     name,     url) (    select         created_ts,        user_id,         name,         url    from         users) on conflict do nothingOnce the size of the only index exceeds shared_buffers by a bit (the amount of \"a bit\" depends on your RAM, kernel version, settings for dirty_background_ratio, dirty_expire_centisecs, and probably other things, and is not easy to predict) the performance falls off a cliff when inserting values in a random order.  Every insert dirties a random index leaf page, which quickly gets evicted from shared_buffers to make room for other random leaf pages to be read in, and then turns into flush calls when the kernel freaks out about the amount and age of dirty pages held in memory.What happens if you add an \"ORDER BY user_id\" to your above select? shared_buffers = 8GBRAM: 256GBOr, crank up shared_buffers by a lot.  Like, beyond the size of the growing index, or up to 240GB if the index ever becomes larger than that.  And make the time between checkpoints longer.  
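Translated into settings, the advice above might look roughly like the sketch below; the values are only illustrative, and shared_buffers takes effect only after a restart:

```
alter system set shared_buffers = '160GB';      -- requires a server restart
alter system set checkpoint_timeout = '30min';  -- stretch the time between checkpoints
alter system set max_wal_size = '64GB';         -- avoid forcing early checkpoints
select pg_reload_conf();                        -- applies the reloadable settings
```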
If the dirty buffers are retained in shared_buffers longer, chances of them getting dirtied repeatedly between writes is much higher than if you just toss them to the kernel and hope for the best.Cheers,Jeff", "msg_date": "Mon, 13 Jul 2020 20:05:07 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Mon, Jul 13, 2020 at 8:05 PM Jeff Janes <[email protected]> wrote:\n\n> On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]>\n> wrote:\n>\n> insert into users_no_dups (\n>> created_ts,\n>> user_id,\n>> name,\n>> url\n>> ) (\n>> select\n>> created_ts,\n>> user_id,\n>> name,\n>> url\n>> from\n>> users\n>> ) on conflict do nothing\n>>\n>\n> Once the size of the only index exceeds shared_buffers by a bit (the\n> amount of \"a bit\" depends on your RAM, kernel version, settings\n> for dirty_background_ratio, dirty_expire_centisecs, and probably other\n> things, and is not easy to predict) the performance falls off a cliff when\n> inserting values in a random order. Every insert dirties a random index\n> leaf page, which quickly gets evicted from shared_buffers to make room for\n> other random leaf pages to be read in, and then turns into flush calls when\n> the kernel freaks out about the amount and age of dirty pages held in\n> memory.\n>\n\nThat is interesting to know. I will do some research on those things.\n\n\n> What happens if you add an \"ORDER BY user_id\" to your above select?\n>\n\nI don't know. I will give it a try right now.\n\n>\n>\n>> shared_buffers = 8GB\n>> RAM: 256GB\n>>\n>\n> Or, crank up shared_buffers by a lot. Like, beyond the size of the\n> growing index, or up to 240GB if the index ever becomes larger than that.\n> And make the time between checkpoints longer. If the dirty buffers are\n> retained in shared_buffers longer, chances of them getting dirtied\n> repeatedly between writes is much higher than if you just toss them to the\n> kernel and hope for the best.\n>\n>\nI cranked it up to 160GB to see how it goes.\n\nCheers,\n>\n> Jeff\n>\n\nI created the partitions as well as mentioned before. I was able to\npartition the table based on the user_id (found some logic to it). I was\ntransferring the data from the original table (about 280 million records;\n320GB) to the new partitioned table and things were going well with write\nspeeds between 30MB/s and 50MB/s. After reading 270GB of the 320GB (in 4\nand a half hours) and writing it to the new partitioned table, write speed\nwent down to 7KB/s. 
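As I read the ORDER BY suggestion, the load statement would become something like the following, so the unique index is touched in key order instead of randomly (a sketch of the reworded insert, not a tested change):

```
insert into users_no_dups (created_ts, user_id, name, url)
select created_ts, user_id, name, url
from users
order by user_id
on conflict do nothing;
```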
It is so frustrating.\n\nI will keep the partitions and try your suggestions to see how it goes.\n\nI apologize for the long time between replies, it is just that testing this\nstuff takes 4+ hours each run.\n\nIf there are any other suggestions of things for me to look meanwhile as\nwell, please keep them coming.\n\nThanks!\n\nHenrique\n\nOn Mon, Jul 13, 2020 at 8:05 PM Jeff Janes <[email protected]> wrote:On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]> wrote:insert into users_no_dups (    created_ts,     user_id,     name,     url) (    select         created_ts,        user_id,         name,         url    from         users) on conflict do nothingOnce the size of the only index exceeds shared_buffers by a bit (the amount of \"a bit\" depends on your RAM, kernel version, settings for dirty_background_ratio, dirty_expire_centisecs, and probably other things, and is not easy to predict) the performance falls off a cliff when inserting values in a random order.  Every insert dirties a random index leaf page, which quickly gets evicted from shared_buffers to make room for other random leaf pages to be read in, and then turns into flush calls when the kernel freaks out about the amount and age of dirty pages held in memory. That is interesting to  know. I will do some research on those things.What happens if you add an \"ORDER BY user_id\" to your above select?I don't know. I will give it a try right now.  shared_buffers = 8GBRAM: 256GBOr, crank up shared_buffers by a lot.  Like, beyond the size of the growing index, or up to 240GB if the index ever becomes larger than that.  And make the time between checkpoints longer.  If the dirty buffers are retained in shared_buffers longer, chances of them getting dirtied repeatedly between writes is much higher than if you just toss them to the kernel and hope for the best.I cranked it up to 160GB to see how it goes. Cheers,JeffI created the partitions as well as mentioned before. I was able to partition the table based on the user_id (found some logic to it). I was transferring the data from the original table (about 280 million records; 320GB) to the new partitioned table and things were going well with write speeds between 30MB/s and 50MB/s. After reading 270GB of the 320GB (in 4 and a half hours) and writing it to the new partitioned table, write speed went down to 7KB/s. It is so frustrating.I will keep the partitions and try your suggestions to see how it goes.I apologize for the long time between replies, it is just that testing this stuff takes 4+ hours each run.If there are any other suggestions of things for me to look meanwhile as well, please keep them coming.Thanks!Henrique", "msg_date": "Mon, 13 Jul 2020 21:02:14 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "Alright, so it seems like partitioning and changing the shared_buffers as\nwell as adding the order by helped to a certain extent, but the writes are\nstill slow. Inserting a 1 million records file is taking almost 3 minutes\n(way better than the 20+ minutes, but still pretty slow compared to the 20\nseconds it used to take).\n\nThe interesting thing for me right now is: If I try to insert the data from\na file that has already been inserted (meaning all the data will end up\nbeing rejected due to the unique constraint), it only takes between 1 and 4\nseconds for the insertion to finish executing. 
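The contrast between duplicate-only files (a few seconds) and files that contain new keys (minutes) fits the idea that the relevant index pages are cached in the first case and read from disk in the second. One way to see what is actually sitting in shared_buffers, assuming the pg_buffercache extension can be installed and the default 8 kB block size:

```
create extension if not exists pg_buffercache;

select c.relname,
       count(*) * 8 / 1024 as buffered_mb
from pg_buffercache b
join pg_class c on c.relfilenode = b.relfilenode
where b.reldatabase = (select oid from pg_database
                       where datname = current_database())
group by c.relname
order by buffered_mb desc
limit 20;
```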
For regular files (which\nusually have 30% new unique records (meaning about 300k new records)), it\nis taking those 3 minutes.\n\n**UPDATE**\n\nI started writing this email and then it occurred to me something I should\ntry. Leaving the information above for historical reasons.\n\nBasically I went ahead and ran a `reindex` on all the partitions now to see\nif it would improve the performance and seems like that did it! I used the\nfollowing script to reindex all of the partitions (the name of my\npartitions all start with ubp_):\n\n```\nDO $$DECLARE r record;\nBEGIN\n FOR r IN select indexname from pg_indexes where tablename like 'ubp_%'\n LOOP\n EXECUTE 'reindex index ' || r.indexname;\n END LOOP;\nEND$$;\n```\n\nAfter doing this, processing of each file is taking anything between 8 and\n20 seconds (most of them seem to be taking 8 seconds though). So, this is\ngreat!\n\nIn summary, what I ended up having to do was:\n\n* Raise shared_buffers to 160GB\n* Add an `order by` to the `select` subquery in the `insert` statement\n* Partition the table\n* Tune postgres configurations as shown below:\n\n~~~\nssl = off\nshared_buffers = 160GB\nwork_mem = 12GB\nmaintenance_work_mem = 12GB\nmax_stack_depth = 4MB\nsynchronous_commit = off\nwal_writer_flush_after = 128MB\nmax_wal_size = 32GB\nmin_wal_size = 80MB\neffective_cache_size = 96GB\n~~~\n\nI can't tell if the raising of the `shared_buffers` was the reason for the\nperformance gains or the adding of the `order by` was the responsible.\nDoesn't hurt to do both anyways. I know for a fact that the `reindex` of\neach partition made a huge difference in the end as explained above\n(bringing insert time down from 3 minutes to 8 seconds).\n\nI have about 1800 files in my backlog to be processed now (18 billion\nrecords). I have started processing them and will report back in case\nperformance degrades once again.\n\nThanks everybody for the help so far! I really appreciate it.\n\nHenrique\n\nPS: I checked the `dirty` ratios for the OS:\n\n$ sysctl vm.dirty_ratio\nvm.dirty_ratio = 20\n\n$ sysctl vm.dirty_background_ratio\nvm.dirty_background_ratio = 10\n\n$ sysctl vm.dirty_expire_centisecs\nvm.dirty_expire_centisecs = 3000\n\nThese are default values; if what I understood from them is right, it seems\nto me that these values should be fine.\n\nOn Mon, Jul 13, 2020 at 9:02 PM Henrique Montenegro <[email protected]>\nwrote:\n\n>\n>\n> On Mon, Jul 13, 2020 at 8:05 PM Jeff Janes <[email protected]> wrote:\n>\n>> On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]>\n>> wrote:\n>>\n>> insert into users_no_dups (\n>>> created_ts,\n>>> user_id,\n>>> name,\n>>> url\n>>> ) (\n>>> select\n>>> created_ts,\n>>> user_id,\n>>> name,\n>>> url\n>>> from\n>>> users\n>>> ) on conflict do nothing\n>>>\n>>\n>> Once the size of the only index exceeds shared_buffers by a bit (the\n>> amount of \"a bit\" depends on your RAM, kernel version, settings\n>> for dirty_background_ratio, dirty_expire_centisecs, and probably other\n>> things, and is not easy to predict) the performance falls off a cliff when\n>> inserting values in a random order. Every insert dirties a random index\n>> leaf page, which quickly gets evicted from shared_buffers to make room for\n>> other random leaf pages to be read in, and then turns into flush calls when\n>> the kernel freaks out about the amount and age of dirty pages held in\n>> memory.\n>>\n>\n> That is interesting to know. 
I will do some research on those things.\n>\n>\n>> What happens if you add an \"ORDER BY user_id\" to your above select?\n>>\n>\n> I don't know. I will give it a try right now.\n>\n>>\n>>\n>>> shared_buffers = 8GB\n>>> RAM: 256GB\n>>>\n>>\n>> Or, crank up shared_buffers by a lot. Like, beyond the size of the\n>> growing index, or up to 240GB if the index ever becomes larger than that.\n>> And make the time between checkpoints longer. If the dirty buffers are\n>> retained in shared_buffers longer, chances of them getting dirtied\n>> repeatedly between writes is much higher than if you just toss them to the\n>> kernel and hope for the best.\n>>\n>>\n> I cranked it up to 160GB to see how it goes.\n>\n> Cheers,\n>>\n>> Jeff\n>>\n>\n> I created the partitions as well as mentioned before. I was able to\n> partition the table based on the user_id (found some logic to it). I was\n> transferring the data from the original table (about 280 million records;\n> 320GB) to the new partitioned table and things were going well with write\n> speeds between 30MB/s and 50MB/s. After reading 270GB of the 320GB (in 4\n> and a half hours) and writing it to the new partitioned table, write speed\n> went down to 7KB/s. It is so frustrating.\n>\n> I will keep the partitions and try your suggestions to see how it goes.\n>\n> I apologize for the long time between replies, it is just that testing\n> this stuff takes 4+ hours each run.\n>\n> If there are any other suggestions of things for me to look meanwhile as\n> well, please keep them coming.\n>\n> Thanks!\n>\n> Henrique\n>\n\nAlright, so it seems like partitioning and changing the shared_buffers as well as adding the order by helped to a certain extent, but the writes are still slow. Inserting a 1 million records file is taking almost 3 minutes (way better than the 20+ minutes, but still pretty slow compared to the 20 seconds it used to take).The interesting thing for me right now is: If I try to insert the data from a file that has already been inserted (meaning all the data will end up being rejected due to the unique constraint), it only takes between 1 and 4 seconds for the insertion to finish executing. For regular files (which usually have 30% new unique records (meaning about 300k new records)), it is taking those 3 minutes.**UPDATE**I started writing this email and then it occurred to me something I should try. Leaving the information above for historical reasons.Basically I went ahead and ran a `reindex` on all the partitions now to see if it would improve the performance and seems like that did it! I used the following script to reindex all of the partitions (the name of my partitions all start with ubp_):```DO $$DECLARE r record;BEGIN    FOR r IN select indexname from pg_indexes where tablename like 'ubp_%'    LOOP        EXECUTE 'reindex index ' || r.indexname;    END LOOP;END$$;```After doing this, processing of each file is taking anything between 8 and 20 seconds (most of them seem to be taking 8 seconds though). 
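To quantify how much the REINDEX loop above actually reclaimed, the per-partition index sizes can be compared before and after; a sketch using the same naming pattern as the loop:

```
select indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where relname like 'ubp_%'
order by pg_relation_size(indexrelid) desc
limit 10;
```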
So, this is great!In summary, what I ended up having to do was:* Raise shared_buffers to 160GB* Add an `order by` to the `select` subquery in the `insert` statement* Partition the table* Tune postgres configurations as shown below:~~~ssl = offshared_buffers = 160GBwork_mem = 12GBmaintenance_work_mem = 12GBmax_stack_depth = 4MBsynchronous_commit = offwal_writer_flush_after = 128MBmax_wal_size = 32GBmin_wal_size = 80MBeffective_cache_size = 96GB~~~I can't tell if the raising of the `shared_buffers` was the reason for the performance gains or the adding of the `order by` was the responsible. Doesn't hurt to do both anyways. I know for a fact that the `reindex` of each partition made a huge difference in the end as explained above (bringing insert time down from 3 minutes to 8 seconds). I have about 1800 files in my backlog to be processed now (18 billion records). I have started processing them and will report back in case performance degrades once again.Thanks everybody for the help so far! I really appreciate it.HenriquePS: I checked the `dirty` ratios for the OS:$ sysctl vm.dirty_ratiovm.dirty_ratio = 20$ sysctl vm.dirty_background_ratiovm.dirty_background_ratio = 10$ sysctl vm.dirty_expire_centisecsvm.dirty_expire_centisecs = 3000These are default values; if what I understood from them is right, it seems to me that these values should be fine.On Mon, Jul 13, 2020 at 9:02 PM Henrique Montenegro <[email protected]> wrote:On Mon, Jul 13, 2020 at 8:05 PM Jeff Janes <[email protected]> wrote:On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]> wrote:insert into users_no_dups (    created_ts,     user_id,     name,     url) (    select         created_ts,        user_id,         name,         url    from         users) on conflict do nothingOnce the size of the only index exceeds shared_buffers by a bit (the amount of \"a bit\" depends on your RAM, kernel version, settings for dirty_background_ratio, dirty_expire_centisecs, and probably other things, and is not easy to predict) the performance falls off a cliff when inserting values in a random order.  Every insert dirties a random index leaf page, which quickly gets evicted from shared_buffers to make room for other random leaf pages to be read in, and then turns into flush calls when the kernel freaks out about the amount and age of dirty pages held in memory. That is interesting to  know. I will do some research on those things.What happens if you add an \"ORDER BY user_id\" to your above select?I don't know. I will give it a try right now.  shared_buffers = 8GBRAM: 256GBOr, crank up shared_buffers by a lot.  Like, beyond the size of the growing index, or up to 240GB if the index ever becomes larger than that.  And make the time between checkpoints longer.  If the dirty buffers are retained in shared_buffers longer, chances of them getting dirtied repeatedly between writes is much higher than if you just toss them to the kernel and hope for the best.I cranked it up to 160GB to see how it goes. Cheers,JeffI created the partitions as well as mentioned before. I was able to partition the table based on the user_id (found some logic to it). I was transferring the data from the original table (about 280 million records; 320GB) to the new partitioned table and things were going well with write speeds between 30MB/s and 50MB/s. After reading 270GB of the 320GB (in 4 and a half hours) and writing it to the new partitioned table, write speed went down to 7KB/s. 
It is so frustrating.I will keep the partitions and try your suggestions to see how it goes.I apologize for the long time between replies, it is just that testing this stuff takes 4+ hours each run.If there are any other suggestions of things for me to look meanwhile as well, please keep them coming.Thanks!Henrique", "msg_date": "Tue, 14 Jul 2020 09:05:19 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Tue, Jul 14, 2020 at 9:05 AM Henrique Montenegro <[email protected]>\nwrote:\n\n> Alright, so it seems like partitioning and changing the shared_buffers as\n> well as adding the order by helped to a certain extent, but the writes are\n> still slow. Inserting a 1 million records file is taking almost 3 minutes\n> (way better than the 20+ minutes, but still pretty slow compared to the 20\n> seconds it used to take).\n>\n> The interesting thing for me right now is: If I try to insert the data\n> from a file that has already been inserted (meaning all the data will end\n> up being rejected due to the unique constraint), it only takes between 1\n> and 4 seconds for the insertion to finish executing. For regular files\n> (which usually have 30% new unique records (meaning about 300k new\n> records)), it is taking those 3 minutes.\n>\n> **UPDATE**\n>\n> I started writing this email and then it occurred to me something I should\n> try. Leaving the information above for historical reasons.\n>\n> Basically I went ahead and ran a `reindex` on all the partitions now to\n> see if it would improve the performance and seems like that did it! I used\n> the following script to reindex all of the partitions (the name of my\n> partitions all start with ubp_):\n>\n> ```\n> DO $$DECLARE r record;\n> BEGIN\n> FOR r IN select indexname from pg_indexes where tablename like 'ubp_%'\n> LOOP\n> EXECUTE 'reindex index ' || r.indexname;\n> END LOOP;\n> END$$;\n> ```\n>\n> After doing this, processing of each file is taking anything between 8 and\n> 20 seconds (most of them seem to be taking 8 seconds though). So, this is\n> great!\n>\n> In summary, what I ended up having to do was:\n>\n> * Raise shared_buffers to 160GB\n> * Add an `order by` to the `select` subquery in the `insert` statement\n> * Partition the table\n> * Tune postgres configurations as shown below:\n>\n> ~~~\n> ssl = off\n> shared_buffers = 160GB\n> work_mem = 12GB\n> maintenance_work_mem = 12GB\n> max_stack_depth = 4MB\n> synchronous_commit = off\n> wal_writer_flush_after = 128MB\n> max_wal_size = 32GB\n> min_wal_size = 80MB\n> effective_cache_size = 96GB\n> ~~~\n>\n> I can't tell if the raising of the `shared_buffers` was the reason for the\n> performance gains or the adding of the `order by` was the responsible.\n> Doesn't hurt to do both anyways. I know for a fact that the `reindex` of\n> each partition made a huge difference in the end as explained above\n> (bringing insert time down from 3 minutes to 8 seconds).\n>\n> I have about 1800 files in my backlog to be processed now (18 billion\n> records). I have started processing them and will report back in case\n> performance degrades once again.\n>\n> Thanks everybody for the help so far! 
I really appreciate it.\n>\n> Henrique\n>\n> PS: I checked the `dirty` ratios for the OS:\n>\n> $ sysctl vm.dirty_ratio\n> vm.dirty_ratio = 20\n>\n> $ sysctl vm.dirty_background_ratio\n> vm.dirty_background_ratio = 10\n>\n> $ sysctl vm.dirty_expire_centisecs\n> vm.dirty_expire_centisecs = 3000\n>\n> These are default values; if what I understood from them is right, it\n> seems to me that these values should be fine.\n>\n> On Mon, Jul 13, 2020 at 9:02 PM Henrique Montenegro <[email protected]>\n> wrote:\n>\n>>\n>>\n>> On Mon, Jul 13, 2020 at 8:05 PM Jeff Janes <[email protected]> wrote:\n>>\n>>> On Mon, Jul 13, 2020 at 10:23 AM Henrique Montenegro <[email protected]>\n>>> wrote:\n>>>\n>>> insert into users_no_dups (\n>>>> created_ts,\n>>>> user_id,\n>>>> name,\n>>>> url\n>>>> ) (\n>>>> select\n>>>> created_ts,\n>>>> user_id,\n>>>> name,\n>>>> url\n>>>> from\n>>>> users\n>>>> ) on conflict do nothing\n>>>>\n>>>\n>>> Once the size of the only index exceeds shared_buffers by a bit (the\n>>> amount of \"a bit\" depends on your RAM, kernel version, settings\n>>> for dirty_background_ratio, dirty_expire_centisecs, and probably other\n>>> things, and is not easy to predict) the performance falls off a cliff when\n>>> inserting values in a random order. Every insert dirties a random index\n>>> leaf page, which quickly gets evicted from shared_buffers to make room for\n>>> other random leaf pages to be read in, and then turns into flush calls when\n>>> the kernel freaks out about the amount and age of dirty pages held in\n>>> memory.\n>>>\n>>\n>> That is interesting to know. I will do some research on those things.\n>>\n>>\n>>> What happens if you add an \"ORDER BY user_id\" to your above select?\n>>>\n>>\n>> I don't know. I will give it a try right now.\n>>\n>>>\n>>>\n>>>> shared_buffers = 8GB\n>>>> RAM: 256GB\n>>>>\n>>>\n>>> Or, crank up shared_buffers by a lot. Like, beyond the size of the\n>>> growing index, or up to 240GB if the index ever becomes larger than that.\n>>> And make the time between checkpoints longer. If the dirty buffers are\n>>> retained in shared_buffers longer, chances of them getting dirtied\n>>> repeatedly between writes is much higher than if you just toss them to the\n>>> kernel and hope for the best.\n>>>\n>>>\n>> I cranked it up to 160GB to see how it goes.\n>>\n>> Cheers,\n>>>\n>>> Jeff\n>>>\n>>\n>> I created the partitions as well as mentioned before. I was able to\n>> partition the table based on the user_id (found some logic to it). I was\n>> transferring the data from the original table (about 280 million records;\n>> 320GB) to the new partitioned table and things were going well with write\n>> speeds between 30MB/s and 50MB/s. After reading 270GB of the 320GB (in 4\n>> and a half hours) and writing it to the new partitioned table, write speed\n>> went down to 7KB/s. It is so frustrating.\n>>\n>> I will keep the partitions and try your suggestions to see how it goes.\n>>\n>> I apologize for the long time between replies, it is just that testing\n>> this stuff takes 4+ hours each run.\n>>\n>> If there are any other suggestions of things for me to look meanwhile as\n>> well, please keep them coming.\n>>\n>> Thanks!\n>>\n>> Henrique\n>>\n>\nHello again list,\n\nTurns out that the good performance didn't last long. 
After processing about\n300 CSV files with 1 million records each (inserting between 200k and 300k\nnew\nrecords per file into the DB), performance went downhill again :(\n\n\n\nTable `users_basic_profile_no_dups_partitioned` stats:\n- 1530 partitions (based on user_id)\n- 473,316,776 rows\n- Unlogged\n- Stored in an 8TB 7200 RPM HDD\n\nTable `users_basic_profile` stats:\n- Unlogged\n- 1 million rows\n- Stored in memory (using tmpfs)\n\nConfiguration file has the following custom configurations for the tests\nexecuted below:\n\n```\nssl = off\nshared_buffers = 160GB # min 128kB\nwork_mem = 96GB # min 64kB\nmaintenance_work_mem = 12GB # min 1MB\nmax_stack_depth = 4MB # min 100kB\ndynamic_shared_memory_type = posix # the default is the first option\nsynchronous_commit = off # synchronization level;\ncommit_delay = 100000 # range 0-100000, in microseconds\nmax_wal_size = 3GB\nmin_wal_size = 1GB\nmin_parallel_index_scan_size = 64kB\neffective_cache_size = 96GB\nlog_min_messages = debug1 # values in order of decreasing detail:\nlog_checkpoints = on\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_line_prefix = '%m [%p] %q%u@%d ' # special values:\nlog_lock_waits = on # log lock waits >= deadlock_timeout\nlog_timezone = 'America/New_York'\nlog_executor_stats = on\ndatestyle = 'iso, mdy'\n```\n\n(max_wal_size was 80GB before and min_wal_size was 80MB; I changed the max\nbecause the first restart I did to the service took a long time since it had\nto sync 80+GB of data to the disk)\n\nI restarted the postgres service and ran this query:\n\n```\nselect user_id from users_basic_profile_no_dups_partitioned\nwhere\n user_id in (\n select user_id from users_basic_profile order by user_id\n );\n```\n\nThe above query took 659 seconds to run and read 73.64 GB of data from the\ndisk. From observing the `top` output I assume that all this data was loaded\ninto RAM and kept there.\n\nI then ran the same query again and it ran in 195 seconds. This second time,\nno data was read from the disk and CPU usage stayed at 100% the whole time.\nI am not sure why it took so long since it seems the whole data was in\nmemory.\n\nI then ran the following query 6 times while increasing the limit as shown\nin\nthe table below:\n\n```\nselect user_id from users_basic_profile_no_dups_partitioned\nwhere\n user_id in (\n select user_id from users_basic_profile order by user_id\n limit 10\n );\n```\n\n Limit | Time (seconds)\n---------|------------------\n10 | 0.6\n100 | 0.6\n1000 | 1.3\n10000 | 116.9\n100000 | 134.8\n1000000 | 193.2\n\nNotice the jump in time execution from a 1k limit to a 10k limit. Amount of\ndata raised 10x and execution time raised 100x.\n\nIt seems to me that inserting the data in this case is slow because the time\nit takes to identify the duplicate records (which I assume would be done in\na\nfashion similiar to the queries above) is taking a long time.\n\nI have attached the `explain analyze` output for the 1k and 10k queries to\nthis email (they are 4k+ lines each, didn't want to make this messager\nbigger\nthan it already is).\n\n* exp1k.txt\n* exp10k.txt\n\nOne thing to keep in mind is: all the data in the `users_basic_profile`\ntable\nalready exists in the `users_basic_profile_no_dups_partitioned` table. 
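For jumps like the one from the 1k to the 10k limit above, it would help to know how many buffers each run hits in cache versus reads from disk; a sketch of the same probe with buffer accounting (the track_io_timing line is optional and needs sufficient privileges):

```
set track_io_timing = on;   -- optional, adds I/O timings to the plan

explain (analyze, buffers)
select user_id
from users_basic_profile_no_dups_partitioned
where user_id in (
    select user_id from users_basic_profile order by user_id limit 10000
);
```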
So\nif I\ntry to insert the data now again, it goes SUPER fast:\n\n```\n\ninsert into users_basic_profile_no_dups_partitioned(\n created_ts,\n user_id,\n name,\n profile_picture\n ) (\n select\n created_ts,\n user_id,\n name,\n profile_picture\n from\n users_basic_profile\n order by\n user_id limit 10000\n ) on conflict do nothing;\nINSERT 0 0\nTime: 276.905 ms\n```\n\nI droped the `users_basic_profile` table, recreated it and then and loaded a\nnew file into it that has not been previously loaded:\n\n```\ndrop table users_basic_profile;\n\ncreate unlogged table users_basic_profile (\n created_ts timestamp without time zone default current_timestamp,\n user_id bigint,\n name text,\n profile_picture text\n)\nwith (autovacuum_enabled = false, toast.autovacuum_enabled = false)\ntablespace ramdisk;\n\ncopy users_basic_profile(user_id, name, profile_picture)\nfrom '/tmp/myfile.csv' with (\n format csv,\n header true,\n delimiter ',',\n quote '\"',\n escape '\\'\n);\n```\n\nThe `COPY` command took 3 seconds.\n\nI then ran the `SELECT` queries above again:\n\n\n Limit | Time (seconds)\n---------|------------------\n10 | 0.7\n100 | 0.6\n1000 | 1\n10000 | 5.3\n100000 | 68.8\n1000000 | Did not complete\n\nThe 1 million query ran for 54 minutes when I finally decided to cancel it.\nDisk reads at this point were at 1.4MB per second by the process performing\nthe `SELECT`. No other process was using the disk.\n\nThis execution was not fair, since the new data was probably not cached in\nRAM\nyet. So I re-ran all the queries again:\n\n\n Limit | Time (seconds)\n---------|------------------\n10 | 0.7\n100 | 0.7\n1000 | 0.8\n10000 | 1.9\n100000 | 11.2\n1000000 | Did not complete\n\nThe 1 million query didn't complete again. The disk read speed was again at\n1.4MB/s and if it didn't complete in 10 minutes it wasn't gonna complete any\ntime soon.\n\nWhile these numbers look better, I find the 5x increase from the 10k to\n100k a bit suspicious.\n\nThe `explain analyze` plans for the 1k, 10k and 100k queries are attached:\n\n* exp1k-secondtime.txt\n* exp10k-secondtime.txt\n* exp100k-secondtime.txt\n\nThe `explain` for the 1million query is also attached:\n* exp1million.txt\n\nI then tried to insert the data into the table with this query:\n\n```\nbegin;\nexplain analyze insert into\nusers_basic_profile_no_dups_partitioned(created_ts,\n user_id,\n name,\n profile_picture\n) (\nselect\n created_ts,\n user_id,\n name,\n profile_picture\nfrom\n users_basic_profile\norder by\n user_id\n) on conflict do nothing;\n```\n\nDisk read speed during this query was around 9MB/s with writes around\n500KB/s.\n\nThe result of the explain analyze is as follows:\n\n```\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on users_basic_profile_no_dups_partitioned\n (cost=386110.19..423009.25 rows=2951925 width=80) (actual\ntime=156773.296..156773.296 rows=0 loops=1)\n Conflict Resolution: NOTHING\n Tuples Inserted: 293182\n Conflicting Tuples: 706818\n -> Sort (cost=386110.19..393490.00 rows=2951925 width=80) (actual\ntime=777.295..1423.577 rows=1000000 loops=1)\n Sort Key: users_basic_profile.user_id\n Sort Method: quicksort Memory: 540206kB\n -> Seq Scan on users_basic_profile (cost=0.00..68878.25\nrows=2951925 width=80) (actual time=0.019..173.278 rows=1000000 loops=1)\n Planning Time: 0.139 ms\n Execution Time: 156820.603 ms\n(10 rows)\n```\n\nThis query took 156 seconds to complete. 
156 seconds is not too bad, but I\nwas\ngetting between 8 seconds and 20 seconds this morning as I mentioned before.\nSo still something seems off. I was able to process 300 files this morning,\neach one containing 1 million records inserting anything between 200k and\n300k\nnew records into the table per file. This means that while runing these\ntests,\nI have about 70 million more rows in the table than I did this morning.\n\nAfter completing the `INSERT` I executed a `COMMIT` that took 0.03 seconds.\n\nI decided to run the `SELECT` queries one last time:\n\n Limit | Time (seconds)\n---------|------------------\n10 | 0.6\n100 | 0.6\n1000 | 0.7\n10000 | 1.6\n100000 | 10.7\n1000000 | 110.7\n\nThis time the 1 million query completed. Most likely due to some caching\nmechanism I'd guess. Still 110 seconds seems somewhat slow.\n\nSo, does anyone have any suggestions on what could be wrong? The questions\nthat come to mind are:\n\n* Why are these execution times so crazy?\n* Why is the read speed from the disk so low?\n* What is causing the sudden drop in performance?\n* Any idea how to fix any of this?\n* Any suggestions on what I should do/test/look for?\n\n= Extra Information =\n\nBefore starting all these tests, I had executed the following\n`REINDEX` command on all partitions of\n`users_basic_profile_no_dups_partitioned`:\n\n\n```\nDO $$DECLARE r record;\nBEGIN\n FOR r IN select indexname from pg_indexes where tablename like 'ubp_%'\n LOOP\n raise notice 'Processing index [%]', r.indexname;\n EXECUTE 'alter index ' || r.indexname || ' set (fillfactor=50)';\n EXECUTE 'reindex index ' || r.indexname;\n END LOOP;\nEND$$;\n```\n\nBefore setting the `fillfactor` to 50, I tried just a regular `REINDEX`\nkeeping the original `fillfactor` but the results were still the same.\n\nStructure of table `users_basic_profile_no_dups_partitioned`:\n\n```\n\n# \\d users_basic_profile_no_dups_partitioned\n\n Unlogged table \"public.users_basic_profile_no_dups_partitioned\"\n Column | Type | Collation | Nullable |\nDefault\n-----------------+-----------------------------+-----------+----------+---------\n created_ts | timestamp without time zone | | not null |\n user_id | bigint | | not null |\n name | text | | |\n profile_picture | text | | |\nPartition key: RANGE (user_id)\nIndexes:\n \"users_basic_profile_no_dups_partitioned_pkey\" PRIMARY KEY, btree\n(user_id)\nNumber of partitions: 1530 (Use \\d+ to list them.)\n```\n\nThe `profile_picture` column stores a `URL` to the picture, not a blob of\nthe\npicture.", "msg_date": "Tue, 14 Jul 2020 21:13:36 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "Hi Henrique,\r\n\r\nOn 15. 
Jul 2020, at 03:13, Henrique Montenegro <[email protected]<mailto:[email protected]>> wrote:\r\n[...]\r\n\r\n```\r\nssl = off\r\nshared_buffers = 160GB # min 128kB\r\nwork_mem = 96GB # min 64kB\r\nmaintenance_work_mem = 12GB # min 1MB\r\nmax_stack_depth = 4MB # min 100kB\r\ndynamic_shared_memory_type = posix # the default is the first option\r\nsynchronous_commit = off # synchronization level;\r\ncommit_delay = 100000 # range 0-100000, in microseconds\r\nmax_wal_size = 3GB\r\nmin_wal_size = 1GB\r\nmin_parallel_index_scan_size = 64kB\r\neffective_cache_size = 96GB\r\nlog_min_messages = debug1 # values in order of decreasing detail:\r\nlog_checkpoints = on\r\nlog_error_verbosity = verbose # terse, default, or verbose messages\r\nlog_line_prefix = '%m [%p] %q%u@%d ' # special values:\r\nlog_lock_waits = on # log lock waits >= deadlock_timeout\r\nlog_timezone = 'America/New_York'\r\nlog_executor_stats = on\r\ndatestyle = 'iso, mdy'\r\n```\r\n\r\n[...]\r\n\r\n Limit | Time (seconds)\r\n---------|------------------\r\n10 | 0.6\r\n100 | 0.6\r\n1000 | 1.3\r\n10000 | 116.9\r\n100000 | 134.8\r\n1000000 | 193.2\r\n\r\nNotice the jump in time execution from a 1k limit to a 10k limit. Amount of\r\ndata raised 10x and execution time raised 100x.\r\n\r\nIt seems to me that inserting the data in this case is slow because the time\r\nit takes to identify the duplicate records (which I assume would be done in a\r\nfashion similiar to the queries above) is taking a long time.\r\n\r\nI have attached the `explain analyze` output for the 1k and 10k queries to\r\nthis email (they are 4k+ lines each, didn't want to make this messager bigger\r\nthan it already is).\r\n\r\n* exp1k.txt\r\n* exp10k.txt\r\n\r\n[...]\r\n\r\nI quickly glanced at the exp10k plan and there are some things I noticed (sorry for not going over all the mail, have to re-read it again):\r\n\r\n- There are a lot of partitions now, you maybe want consider reducing the amount. To me it seems that you overload the system. Scan times are low but the overhead to start a scan is likely quite high.\r\n- work_mem = 96GB seems very high to me, I guess you'd be better with e.g. 4GB as a start but many more parallel workers. For instance, depending on your machine, try adjusting the max_worker_processes, max_parallel_workers and max_parallel_workers_per_gather. Values depend a bit on your system, make sure, that max_parallel_workers_per_gather are much lower than max_parallel_workers and that must be lower than max_worker_processes. You can try large values, for instance 128, 120, 12.\r\n- You may want to test with min_parallel_table_scan_size = 0\r\n- Did you enable partition pruning, partitionwise join and aggregate?\r\n\r\nThanks,\r\nSebastian\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n\n\n\n\n\n\r\nHi Henrique,\r\n\n\nOn 15. 
Jul 2020, at 03:13, Henrique Montenegro <[email protected]> wrote:\n\n\n[...]\n\n\r\n```\r\nssl = off\r\nshared_buffers = 160GB                  # min 128kB\r\nwork_mem = 96GB                         # min 64kB\r\nmaintenance_work_mem = 12GB             # min 1MB\r\nmax_stack_depth = 4MB                   # min 100kB\r\ndynamic_shared_memory_type = posix      # the default is the first option\r\nsynchronous_commit = off                # synchronization level;\r\ncommit_delay = 100000                   # range 0-100000, in microseconds\r\nmax_wal_size = 3GB\r\nmin_wal_size = 1GB\r\nmin_parallel_index_scan_size = 64kB\r\neffective_cache_size = 96GB\r\nlog_min_messages = debug1 # values in order of decreasing detail:\r\nlog_checkpoints = on\r\nlog_error_verbosity = verbose # terse, default, or verbose messages\r\nlog_line_prefix = '%m [%p] %q%u@%d '            # special values:\r\nlog_lock_waits = on                     # log lock waits >= deadlock_timeout\r\nlog_timezone = 'America/New_York'\r\nlog_executor_stats = on\r\ndatestyle = 'iso, mdy'\r\n```\n\r\n[...]\n\r\n  Limit  |  Time (seconds)\r\n---------|------------------\r\n10       | 0.6\r\n100      | 0.6\r\n1000     | 1.3\r\n10000    | 116.9\r\n100000   | 134.8\r\n1000000  | 193.2\n\r\nNotice the jump in time execution from a 1k limit to a 10k limit. Amount of\r\ndata raised 10x and execution time raised 100x.\n\r\nIt seems to me that inserting the data in this case is slow because the time\r\nit takes to identify the duplicate records (which I assume would be done in a\r\nfashion similiar to the queries above) is taking a long time.\n\r\nI have attached the `explain analyze` output for the 1k and 10k queries to\r\nthis email (they are 4k+ lines each, didn't want to make this messager bigger\r\nthan it already is).\n\r\n* exp1k.txt\r\n* exp10k.txt\n\r\n[...]\n\n\n\n\n\n\n\nI quickly glanced at the exp10k plan and there are some things I noticed (sorry for not going over all the mail, have to re-read it again):\n\n\n- There are a lot of partitions now, you maybe want consider reducing the amount. To me it seems that you overload the system. Scan times are low but the overhead to start a scan is likely quite high.\n- work_mem = 96GB seems very high to me, I guess you'd be better with e.g. 4GB as a start but many more parallel workers. For instance, depending on your machine, try adjusting the max_worker_processes, max_parallel_workers and max_parallel_workers_per_gather.\r\n Values depend a bit on your system, make sure, that max_parallel_workers_per_gather are much lower than max_parallel_workers and that must be lower than max_worker_processes. You can try large values, for instance 128, 120, 12.\n- You may want to test with min_parallel_table_scan_size = 0\n- Did you enable partition pruning, partitionwise join and aggregate?\n\n\nThanks,\nSebastian\n\n\n\n\r\n--\n\r\nSebastian Dressler, Solution Architect \r\n+49 30 994 0496 72 | [email protected] \n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 
120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B", "msg_date": "Wed, 15 Jul 2020 08:03:46 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Wed, Jul 15, 2020 at 4:03 AM Sebastian Dressler <[email protected]>\nwrote:\n\n> Hi Henrique,\n>\n> On 15. Jul 2020, at 03:13, Henrique Montenegro <[email protected]> wrote:\n> [...]\n>\n> ```\n> ssl = off\n> shared_buffers = 160GB # min 128kB\n> work_mem = 96GB # min 64kB\n> maintenance_work_mem = 12GB # min 1MB\n> max_stack_depth = 4MB # min 100kB\n> dynamic_shared_memory_type = posix # the default is the first option\n> synchronous_commit = off # synchronization level;\n> commit_delay = 100000 # range 0-100000, in microseconds\n> max_wal_size = 3GB\n> min_wal_size = 1GB\n> min_parallel_index_scan_size = 64kB\n> effective_cache_size = 96GB\n> log_min_messages = debug1 # values in order of decreasing detail:\n> log_checkpoints = on\n> log_error_verbosity = verbose # terse, default, or verbose messages\n> log_line_prefix = '%m [%p] %q%u@%d ' # special values:\n> log_lock_waits = on # log lock waits >=\n> deadlock_timeout\n> log_timezone = 'America/New_York'\n> log_executor_stats = on\n> datestyle = 'iso, mdy'\n> ```\n>\n> [...]\n>\n> Limit | Time (seconds)\n> ---------|------------------\n> 10 | 0.6\n> 100 | 0.6\n> 1000 | 1.3\n> 10000 | 116.9\n> 100000 | 134.8\n> 1000000 | 193.2\n>\n> Notice the jump in time execution from a 1k limit to a 10k limit. Amount of\n> data raised 10x and execution time raised 100x.\n>\n> It seems to me that inserting the data in this case is slow because the\n> time\n> it takes to identify the duplicate records (which I assume would be done\n> in a\n> fashion similiar to the queries above) is taking a long time.\n>\n> I have attached the `explain analyze` output for the 1k and 10k queries to\n> this email (they are 4k+ lines each, didn't want to make this messager\n> bigger\n> than it already is).\n>\n> * exp1k.txt\n> * exp10k.txt\n>\n> [...]\n>\n>\n> I quickly glanced at the exp10k plan and there are some things I noticed\n> (sorry for not going over all the mail, have to re-read it again):\n>\n> - There are a lot of partitions now, you maybe want consider reducing the\n> amount. To me it seems that you overload the system. Scan times are low but\n> the overhead to start a scan is likely quite high.\n> - work_mem = 96GB seems very high to me, I guess you'd be better with e.g.\n> 4GB as a start but many more parallel workers. For instance, depending on\n> your machine, try adjusting the max_worker_processes, max_parallel_workers\n> and max_parallel_workers_per_gather. Values depend a bit on your system,\n> make sure, that max_parallel_workers_per_gather are much lower than\n> max_parallel_workers and that must be lower than max_worker_processes. You\n> can try large values, for instance 128, 120, 12.\n> - You may want to test with min_parallel_table_scan_size = 0\n> - Did you enable partition pruning, partitionwise join and aggregate?\n>\n> Thanks,\n> Sebastian\n>\n> --\n>\n> Sebastian Dressler, Solution Architect\n> +49 30 994 0496 72 | [email protected]\n>\n> Swarm64 AS\n> Parkveien 41 B | 0258 Oslo | Norway\n> Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\n> CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender\n> (Styrets Leder): Dr. Sverre Munck\n>\n> Swarm64 AS Zweigstelle Hive\n> Ullsteinstr. 
120 | 12109 Berlin | Germany\n> Registered at Amtsgericht Charlottenburg - HRB 154382 B\n>\n>\nHi Sebastian,\n\nThat is a good idea about the parallel workers. I have tried to update them\nand will post the results as soon as I have them.\nRegarding the partition pruning it is set to the default (which is on).\npartitionwise_join and partitionwise_aggregate are both set to off. I will\nturn them on as well and see how it goes.\n\nThanks for the suggestions! I will keep the list updated.\n\nHenrique\n\nOn Wed, Jul 15, 2020 at 4:03 AM Sebastian Dressler <[email protected]> wrote:\n\nHi Henrique,\n\n\nOn 15. Jul 2020, at 03:13, Henrique Montenegro <[email protected]> wrote:\n\n\n[...]\n\n\n```\nssl = off\nshared_buffers = 160GB                  # min 128kB\nwork_mem = 96GB                         # min 64kB\nmaintenance_work_mem = 12GB             # min 1MB\nmax_stack_depth = 4MB                   # min 100kB\ndynamic_shared_memory_type = posix      # the default is the first option\nsynchronous_commit = off                # synchronization level;\ncommit_delay = 100000                   # range 0-100000, in microseconds\nmax_wal_size = 3GB\nmin_wal_size = 1GB\nmin_parallel_index_scan_size = 64kB\neffective_cache_size = 96GB\nlog_min_messages = debug1 # values in order of decreasing detail:\nlog_checkpoints = on\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_line_prefix = '%m [%p] %q%u@%d '            # special values:\nlog_lock_waits = on                     # log lock waits >= deadlock_timeout\nlog_timezone = 'America/New_York'\nlog_executor_stats = on\ndatestyle = 'iso, mdy'\n```\n\n[...]\n\n  Limit  |  Time (seconds)\n---------|------------------\n10       | 0.6\n100      | 0.6\n1000     | 1.3\n10000    | 116.9\n100000   | 134.8\n1000000  | 193.2\n\nNotice the jump in time execution from a 1k limit to a 10k limit. Amount of\ndata raised 10x and execution time raised 100x.\n\nIt seems to me that inserting the data in this case is slow because the time\nit takes to identify the duplicate records (which I assume would be done in a\nfashion similiar to the queries above) is taking a long time.\n\nI have attached the `explain analyze` output for the 1k and 10k queries to\nthis email (they are 4k+ lines each, didn't want to make this messager bigger\nthan it already is).\n\n* exp1k.txt\n* exp10k.txt\n\n[...]\n\n\n\n\n\n\n\nI quickly glanced at the exp10k plan and there are some things I noticed (sorry for not going over all the mail, have to re-read it again):\n\n\n- There are a lot of partitions now, you maybe want consider reducing the amount. To me it seems that you overload the system. Scan times are low but the overhead to start a scan is likely quite high.\n- work_mem = 96GB seems very high to me, I guess you'd be better with e.g. 4GB as a start but many more parallel workers. For instance, depending on your machine, try adjusting the max_worker_processes, max_parallel_workers and max_parallel_workers_per_gather.\n Values depend a bit on your system, make sure, that max_parallel_workers_per_gather are much lower than max_parallel_workers and that must be lower than max_worker_processes. 
You can try large values, for instance 128, 120, 12.\n- You may want to test with min_parallel_table_scan_size = 0\n- Did you enable partition pruning, partitionwise join and aggregate?\n\n\nThanks,\nSebastian\n\n\n\n\n--\n\nSebastian Dressler, Solution Architect \n+49 30 994 0496 72 | [email protected] \n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 120 | 12109 Berlin | Germany\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\n\nHi Sebastian,That is a good idea about the parallel workers. I have tried to update them and will post the results as soon as I have them.Regarding the partition pruning it is set to the default (which is on). partitionwise_join and partitionwise_aggregate are both set to off. I will turn them on as well and see how it goes.Thanks for the suggestions! I will keep the list updated.Henrique", "msg_date": "Wed, 15 Jul 2020 08:24:25 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Wed, Jul 15, 2020 at 8:24 AM Henrique Montenegro <[email protected]>\nwrote:\n\n>\n>\n> On Wed, Jul 15, 2020 at 4:03 AM Sebastian Dressler <[email protected]>\n> wrote:\n>\n>> Hi Henrique,\n>>\n>> On 15. Jul 2020, at 03:13, Henrique Montenegro <[email protected]> wrote:\n>> [...]\n>>\n>> ```\n>> ssl = off\n>> shared_buffers = 160GB # min 128kB\n>> work_mem = 96GB # min 64kB\n>> maintenance_work_mem = 12GB # min 1MB\n>> max_stack_depth = 4MB # min 100kB\n>> dynamic_shared_memory_type = posix # the default is the first option\n>> synchronous_commit = off # synchronization level;\n>> commit_delay = 100000 # range 0-100000, in microseconds\n>> max_wal_size = 3GB\n>> min_wal_size = 1GB\n>> min_parallel_index_scan_size = 64kB\n>> effective_cache_size = 96GB\n>> log_min_messages = debug1 # values in order of decreasing detail:\n>> log_checkpoints = on\n>> log_error_verbosity = verbose # terse, default, or verbose messages\n>> log_line_prefix = '%m [%p] %q%u@%d ' # special values:\n>> log_lock_waits = on # log lock waits >=\n>> deadlock_timeout\n>> log_timezone = 'America/New_York'\n>> log_executor_stats = on\n>> datestyle = 'iso, mdy'\n>> ```\n>>\n>> [...]\n>>\n>> Limit | Time (seconds)\n>> ---------|------------------\n>> 10 | 0.6\n>> 100 | 0.6\n>> 1000 | 1.3\n>> 10000 | 116.9\n>> 100000 | 134.8\n>> 1000000 | 193.2\n>>\n>> Notice the jump in time execution from a 1k limit to a 10k limit. Amount\n>> of\n>> data raised 10x and execution time raised 100x.\n>>\n>> It seems to me that inserting the data in this case is slow because the\n>> time\n>> it takes to identify the duplicate records (which I assume would be done\n>> in a\n>> fashion similiar to the queries above) is taking a long time.\n>>\n>> I have attached the `explain analyze` output for the 1k and 10k queries to\n>> this email (they are 4k+ lines each, didn't want to make this messager\n>> bigger\n>> than it already is).\n>>\n>> * exp1k.txt\n>> * exp10k.txt\n>>\n>> [...]\n>>\n>>\n>> I quickly glanced at the exp10k plan and there are some things I noticed\n>> (sorry for not going over all the mail, have to re-read it again):\n>>\n>> - There are a lot of partitions now, you maybe want consider reducing the\n>> amount. To me it seems that you overload the system. 
Scan times are low but\n>> the overhead to start a scan is likely quite high.\n>> - work_mem = 96GB seems very high to me, I guess you'd be better with\n>> e.g. 4GB as a start but many more parallel workers. For instance, depending\n>> on your machine, try adjusting the max_worker_processes,\n>> max_parallel_workers and max_parallel_workers_per_gather. Values depend a\n>> bit on your system, make sure, that max_parallel_workers_per_gather are\n>> much lower than max_parallel_workers and that must be lower than\n>> max_worker_processes. You can try large values, for instance 128, 120, 12.\n>> - You may want to test with min_parallel_table_scan_size = 0\n>> - Did you enable partition pruning, partitionwise join and aggregate?\n>>\n>> Thanks,\n>> Sebastian\n>>\n>> --\n>>\n>> Sebastian Dressler, Solution Architect\n>> +49 30 994 0496 72 | [email protected]\n>>\n>> Swarm64 AS\n>> Parkveien 41 B | 0258 Oslo | Norway\n>> Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662\n>> 787\n>> CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender\n>> (Styrets Leder): Dr. Sverre Munck\n>>\n>> Swarm64 AS Zweigstelle Hive\n>> Ullsteinstr. 120 | 12109 Berlin | Germany\n>> Registered at Amtsgericht Charlottenburg - HRB 154382 B\n>>\n>>\n> Hi Sebastian,\n>\n> That is a good idea about the parallel workers. I have tried to update\n> them and will post the results as soon as I have them.\n> Regarding the partition pruning it is set to the default (which is on).\n> partitionwise_join and partitionwise_aggregate are both set to off. I will\n> turn them on as well and see how it goes.\n>\n> Thanks for the suggestions! I will keep the list updated.\n>\n> Henrique\n>\n>\nChanging those parameters had almost no effect in the performance. I just\nexecuted the following `SELECT` again:\n\n```\nexplain analyze\nselect user_id from users_basic_profile_no_dups_partitioned\nwhere\n user_id in (\n select user_id from users_basic_profile order by user_id\n );\n```\n\nI am looking at the plan and seeing things like this:\n\n```\n Index Only Scan using ubp_from_100036700000000_to_100036800000000_pkey on\nubp_from_100036700000000_to_100036800000000 (cost=0.42..1.99 rows=1\nwidth=8) (actual time=3.276..3.276 rows=1 loops=611)\n\n Index Cond: (user_id = users_basic_profile.user_id)\n Heap Fetches: 0\n Buffers: shared hit=1,688 read=146\n```\n\nAny idea why the actual time is in the 3ms range? If I query that partition\ndirectly, like this:\n\n```\nexplain analyze select user_id from\nubp_from_100036700000000_to_100036800000000 where user_id in (select\nuser_id from users_basic_profile order by user_id);\n```\n\nI get this:\n\n```\n -> Index Only Scan using\nubp_from_100036700000000_to_100036800000000_pkey on\nubp_from_100036700000000_to_100036800000000 (cost=0.42..4.12 rows=1\nwidth=8) (actual time=0.002..0.002 rows=0 loops=984904)\n Index Cond: (user_id = users_basic_profile.user_id)\n Heap Fetches: 0\n```\n\nAs you can see, the `actual_time` when querying the partition table\ndirectly goes to 0.002 which is almost 2000x faster.\n\nMy google fu is also coming short on figuring that out. Any suggestions?\n\nThanks!\n\nHenrique\n\nOn Wed, Jul 15, 2020 at 8:24 AM Henrique Montenegro <[email protected]> wrote:On Wed, Jul 15, 2020 at 4:03 AM Sebastian Dressler <[email protected]> wrote:\n\nHi Henrique,\n\n\nOn 15. 
Jul 2020, at 03:13, Henrique Montenegro <[email protected]> wrote:\n\n\n[...]\n\n\n```\nssl = off\nshared_buffers = 160GB                  # min 128kB\nwork_mem = 96GB                         # min 64kB\nmaintenance_work_mem = 12GB             # min 1MB\nmax_stack_depth = 4MB                   # min 100kB\ndynamic_shared_memory_type = posix      # the default is the first option\nsynchronous_commit = off                # synchronization level;\ncommit_delay = 100000                   # range 0-100000, in microseconds\nmax_wal_size = 3GB\nmin_wal_size = 1GB\nmin_parallel_index_scan_size = 64kB\neffective_cache_size = 96GB\nlog_min_messages = debug1 # values in order of decreasing detail:\nlog_checkpoints = on\nlog_error_verbosity = verbose # terse, default, or verbose messages\nlog_line_prefix = '%m [%p] %q%u@%d '            # special values:\nlog_lock_waits = on                     # log lock waits >= deadlock_timeout\nlog_timezone = 'America/New_York'\nlog_executor_stats = on\ndatestyle = 'iso, mdy'\n```\n\n[...]\n\n  Limit  |  Time (seconds)\n---------|------------------\n10       | 0.6\n100      | 0.6\n1000     | 1.3\n10000    | 116.9\n100000   | 134.8\n1000000  | 193.2\n\nNotice the jump in time execution from a 1k limit to a 10k limit. Amount of\ndata raised 10x and execution time raised 100x.\n\nIt seems to me that inserting the data in this case is slow because the time\nit takes to identify the duplicate records (which I assume would be done in a\nfashion similiar to the queries above) is taking a long time.\n\nI have attached the `explain analyze` output for the 1k and 10k queries to\nthis email (they are 4k+ lines each, didn't want to make this messager bigger\nthan it already is).\n\n* exp1k.txt\n* exp10k.txt\n\n[...]\n\n\n\n\n\n\n\nI quickly glanced at the exp10k plan and there are some things I noticed (sorry for not going over all the mail, have to re-read it again):\n\n\n- There are a lot of partitions now, you maybe want consider reducing the amount. To me it seems that you overload the system. Scan times are low but the overhead to start a scan is likely quite high.\n- work_mem = 96GB seems very high to me, I guess you'd be better with e.g. 4GB as a start but many more parallel workers. For instance, depending on your machine, try adjusting the max_worker_processes, max_parallel_workers and max_parallel_workers_per_gather.\n Values depend a bit on your system, make sure, that max_parallel_workers_per_gather are much lower than max_parallel_workers and that must be lower than max_worker_processes. You can try large values, for instance 128, 120, 12.\n- You may want to test with min_parallel_table_scan_size = 0\n- Did you enable partition pruning, partitionwise join and aggregate?\n\n\nThanks,\nSebastian\n\n\n\n\n--\n\nSebastian Dressler, Solution Architect \n+49 30 994 0496 72 | [email protected] \n\nSwarm64 AS\nParkveien 41 B | 0258 Oslo | Norway\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck \n\nSwarm64 AS Zweigstelle Hive\nUllsteinstr. 120 | 12109 Berlin | Germany\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\n\nHi Sebastian,That is a good idea about the parallel workers. I have tried to update them and will post the results as soon as I have them.Regarding the partition pruning it is set to the default (which is on). partitionwise_join and partitionwise_aggregate are both set to off. 
I will turn them on as well and see how it goes.Thanks for the suggestions! I will keep the list updated.HenriqueChanging those parameters had almost no effect in the performance. I just executed the following `SELECT` again:```explain analyze select user_id from users_basic_profile_no_dups_partitioned where     user_id in (        select user_id from users_basic_profile order by user_id    );```I am looking at the plan and seeing things like this:``` Index Only Scan using ubp_from_100036700000000_to_100036800000000_pkey on ubp_from_100036700000000_to_100036800000000 (cost=0.42..1.99 rows=1 width=8) (actual time=3.276..3.276 rows=1 loops=611)    Index Cond: (user_id = users_basic_profile.user_id)    Heap Fetches: 0    Buffers: shared hit=1,688 read=146```Any idea why the actual time is in the 3ms range? If I query that partition directly, like this:```explain analyze select user_id from ubp_from_100036700000000_to_100036800000000 where user_id in (select user_id from users_basic_profile order by user_id);```I get this:```        ->  Index Only Scan using ubp_from_100036700000000_to_100036800000000_pkey on ubp_from_100036700000000_to_100036800000000  (cost=0.42..4.12 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=984904)               Index Cond: (user_id = users_basic_profile.user_id)               Heap Fetches: 0```As you can see, the `actual_time` when querying the partition table directly goes to 0.002 which is almost 2000x faster.My google fu is also coming short on figuring that out. Any suggestions?Thanks!Henrique", "msg_date": "Wed, 15 Jul 2020 14:49:16 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sudden insert performance degradation" }, { "msg_contents": "On Wed, Jul 15, 2020 at 02:49:16PM -0400, Henrique Montenegro wrote:\n> Any idea why the actual time is in the 3ms range? If I query that partition\n> directly, like this:\n> \n> As you can see, the `actual_time` when querying the partition table\n> directly goes to 0.002 which is almost 2000x faster.\n\nBecause querying parents of 1000s of tables is slow.\nThat's improved in v12. You can read a previous discussion about it here:\nhttps://www.postgresql.org/message-id/[email protected]\n\nBut I think you need to know more about partitioning. It doesn't magically\nmake things faster for you, and if you just guess, then it's likely to perform\nworse for reading and/or writing.\n\nPartitioning only helps for INSERTs if nearly all the insertions happening at a\ngiven time go into a small number of partitions. Like inserting data\npartitioned by \"timestamp\", where all the new data goes into a partition for\nthe current date. Otherwise instead of one gigantic index which doesn't fit in\nshared_buffers or RAM, you have some hundreds of indexes which also don't\nsimultaneously fit into RAM. That doesn't help writes, and hurts planning\ntime.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Jul 2020 15:03:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden insert performance degradation" } ]
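A short sketch to make the closing point of this thread concrete -- that partitioning only helps these bulk inserts when the partition key follows insertion order. The `users_by_day` table and the two example partitions below are illustrative names, not taken from the thread:

```sql
-- Partition by the column that grows with insertion order (created_ts)
-- rather than by user_id: each daily load then dirties only the current
-- partition's index instead of random leaf pages across one huge index.
CREATE TABLE users_by_day (
    created_ts      timestamp without time zone NOT NULL DEFAULT current_timestamp,
    user_id         bigint NOT NULL,
    name            text,
    profile_picture text
) PARTITION BY RANGE (created_ts);

CREATE TABLE users_by_day_2020_07_15 PARTITION OF users_by_day
    FOR VALUES FROM ('2020-07-15') TO ('2020-07-16');
CREATE TABLE users_by_day_2020_07_16 PARTITION OF users_by_day
    FOR VALUES FROM ('2020-07-16') TO ('2020-07-17');

-- A primary key on a partitioned table must contain the partition key, so it
-- becomes (created_ts, user_id) here; it cannot enforce uniqueness of user_id
-- on its own, which is why this layout improves write locality but does not
-- by itself replace the ON CONFLICT deduplication discussed above.
ALTER TABLE users_by_day ADD PRIMARY KEY (created_ts, user_id);
```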
[ { "msg_contents": "Hi,\n\nI have two machines - one with 8GB RAM & 4core CPU and the other with 64GB\nRam & 24 core CPU. Both machines have the same DB (Postgres 12 + Postgis\n2.5.3). Same query is taking less time in low end machine whereas more\ntime in high end machine. Any thoughts on where to look? I have tuned the\ndb in both machines according to https://pgtune.leopard.in.ua/#/, the\nfunction will refer around 14 tables, since both the tables are have same\nindex and views. <https://pgtune.leopard.in.ua/#/>\n\n\n Please find the attachment for query explain & analyze and bonnie result\nof both the machines.\n\nLow End Machine\n\n-bash-4.2$ psql -p 5434\npsql (12.3)\nType \"help\" for help.\n\npostgres=# \\c IPDS_KSEB;\nYou are now connected to database \"IPDS_KSEB\" as user \"postgres\".\nIPDS_KSEB=# explain analyze select * from\nkseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on kseb_geometry_trace_with_barrier_partition\n (cost=0.25..10.25 rows=1000 width=169) (actual time=11626.548..11626.568\nrows=254 loops=1)\n Planning Time: 0.212 ms\n Execution Time: *11628.590 ms*\n\nHigh End Machine\n\n-bash-4.2$ psql -p 5422\npsql (12.3)\nType \"help\" for help.\n\npostgres=# \\c IPDS_KSEB;\nYou are now connected to database \"IPDS_KSEB\" as user \"postgres\".\nIPDS_KSEB=# explain analyze select * from\nkseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on kseb_geometry_trace_with_barrier_partition\n (cost=0.25..10.25 rows=1000 width=169) (actual time=22304.425..22304.448\nrows=254 loops=1)\n Planning Time: 0.219 ms\n Execution Time: *22352.219 ms*\n(3 rows)", "msg_date": "Thu, 16 Jul 2020 21:13:45 +0530", "msg_from": "Vishwa Kalyankar <[email protected]>", "msg_from_op": true, "msg_subject": "Same query taking less time in low configuration machine" }, { "msg_contents": "On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> Hi,\n> \n> I have two machines - one with 8GB RAM & 4core CPU and the other with 64GB\n> Ram & 24 core CPU. Both machines have the same DB (Postgres 12 + Postgis\n> 2.5.3). Same query is taking less time in low end machine whereas more\n> time in high end machine. Any thoughts on where to look? 
I have tuned the\n\nWhen you say \"the same DB\" what do you mean ?\nIs one a pg_dump and restore of the other ?\nOr a physical copy like rsync/tar of the data dir ?\n\n> Please find the attachment for query explain & analyze and bonnie result\n> of both the machines.\n\nAre the DB settings the ame or how do they differ ?\n\nMaybe you could send explain(analyze,buffers,timing,settings) ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 16 Jul 2020 11:04:23 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query taking less time in low configuration machine" }, { "msg_contents": "Hi Justin,\n\n I tried both the way, pg_dump and rsync of complete data_directory, but\nthe result is same.\n\nBoth the db's configurations are not same, I have tuned the db in both\nmachines according to https://pgtune.leopard.in.ua/#/\n\nBelow is the result of explain (analyze, buffer, settings) of both the db's.\n\nHigh End Machine\n\nIPDS_KSEB=# set track_io_timing TO on;\nSET\nIPDS_KSEB=# explain (analyze,buffers, settings) select * from\nkseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n;\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------\n Function Scan on kseb_geometry_trace_with_barrier_partition\n (cost=0.25..10.25 rows=1000 width=169) (actual time=24708.020..24708.048\nrows=254 loops=1)\n Buffers: shared hit=254235 read=1484\n I/O Timings: read=827.509\n Settings: effective_cache_size = '30GB', effective_io_concurrency = '2',\nmax_parallel_workers = '24', max_parallel_workers_per_gather = '4',\nsearch_path = '\n\"$user\", public, topology', work_mem = '10MB'\n Planning Time: 0.064 ms\n Execution Time: 24772.587 ms\n(6 rows)\n\nLow End Machine\nIPDS_KSEB=# explain (analyze,buffers, settings) select * from\nkseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on kseb_geometry_trace_with_barrier_partition\n (cost=0.25..10.25 rows=1000 width=169) (actual time=21870.311..21870.344\nrows=389 loops=1)\n Buffers: shared hit=774945\n Settings: search_path = '\"$user\", public, topology'\n Planning Time: 0.089 ms\n Execution Time: 21870.406 ms\n(5 rows)\n\n\n\n\n\nOn Thu, Jul 16, 2020 at 9:34 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> > Hi,\n> >\n> > I have two machines - one with 8GB RAM & 4core CPU and the other with\n> 64GB\n> > Ram & 24 core CPU. Both machines have the same DB (Postgres 12 + Postgis\n> > 2.5.3). Same query is taking less time in low end machine whereas more\n> > time in high end machine. Any thoughts on where to look? 
I have tuned\n> the\n>\n> When you say \"the same DB\" what do you mean ?\n> Is one a pg_dump and restore of the other ?\n> Or a physical copy like rsync/tar of the data dir ?\n>\n> > Please find the attachment for query explain & analyze and bonnie\n> result\n> > of both the machines.\n>\n> Are the DB settings the ame or how do they differ ?\n>\n> Maybe you could send explain(analyze,buffers,timing,settings) ?\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> --\n> Justin\n>\n\nHi Justin,   I tried both the way, pg_dump and rsync of complete data_directory, but the result is same.Both the db's configurations are not same, I have tuned the db in both machines according to https://pgtune.leopard.in.ua/#/\nBelow is the result of explain (analyze, buffer, settings) of both the db's.High End MachineIPDS_KSEB=# set track_io_timing TO on;SETIPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2) ;                                                                                                QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Function Scan on kseb_geometry_trace_with_barrier_partition  (cost=0.25..10.25 rows=1000 width=169) (actual time=24708.020..24708.048 rows=254 loops=1)   Buffers: shared hit=254235 read=1484   I/O Timings: read=827.509 Settings: effective_cache_size = '30GB', effective_io_concurrency = '2', max_parallel_workers = '24', max_parallel_workers_per_gather = '4', search_path = '\"$user\", public, topology', work_mem = '10MB' Planning Time: 0.064 ms Execution Time: 24772.587 ms(6 rows)Low End MachineIPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2) ;                                                                       QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------- Function Scan on kseb_geometry_trace_with_barrier_partition  (cost=0.25..10.25 rows=1000 width=169) (actual time=21870.311..21870.344 rows=389 loops=1)   Buffers: shared hit=774945 Settings: search_path = '\"$user\", public, topology' Planning Time: 0.089 ms Execution Time: 21870.406 ms(5 rows) On Thu, Jul 16, 2020 at 9:34 PM Justin Pryzby <[email protected]> wrote:On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> Hi,\n> \n> I have two machines - one with 8GB RAM & 4core CPU and the other with 64GB\n> Ram & 24 core CPU.  Both machines have the same DB (Postgres 12 + Postgis\n> 2.5.3).  Same query is taking less time in low end machine whereas more\n> time in high end machine.  Any thoughts on where to look?  
I have tuned the\n\nWhen  you say \"the same DB\" what do you mean ?\nIs one a pg_dump and restore of the other ?\nOr a physical copy like rsync/tar of the data dir ?\n\n>   Please find the attachment for query explain & analyze and bonnie result\n> of  both the machines.\n\nAre the DB settings the ame or how do they differ ?\n\nMaybe you could send explain(analyze,buffers,timing,settings) ?\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n-- \nJustin", "msg_date": "Thu, 16 Jul 2020 22:21:35 +0530", "msg_from": "Vishwa Kalyankar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Same query taking less time in low configuration machine" }, { "msg_contents": "On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> I have two machines - one with 8GB RAM & 4core CPU and the other with 64GB\n> Ram & 24 core CPU. Both machines have the same DB (Postgres 12 + Postgis\n\nIt looks like they're returning different number of rows, so definitely not the\nsame DB.\n\nAlso, they're performing about the same now...\n\nIt looks like you didn't set shared_buffers, even for a machine with 64GB RAM.\nI think it's unusual to keep the default.\n\nOn Thu, Jul 16, 2020 at 10:21:35PM +0530, Vishwa Kalyankar wrote:\n> Both the db's configurations are not same, I have tuned the db in both\n> machines according to https://pgtune.leopard.in.ua/#/\n\nIt looks like your low-end machine has no settings at all ?\nDid you forget to restart the server or use SET instead of ALTER SYSTEM SET ?\n\n> IPDS_KSEB=# set track_io_timing TO on;\n> IPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2);\n> \n> Function Scan on kseb_geometry_trace_with_barrier_partition\n> (cost=0.25..10.25 rows=1000 width=169) (actual time=24708.020..24708.048 rows=254 loops=1)\n> Buffers: shared hit=254235 read=1484\n> I/O Timings: read=827.509\n> Settings: effective_cache_size = '30GB', effective_io_concurrency = '2', max_parallel_workers = '24', max_parallel_workers_per_gather = '4', search_path = '\"$user\", public, topology', work_mem = '10MB'\n> Planning Time: 0.064 ms\n> Execution Time: 24772.587 ms\n> \n> Low End Machine\n> IPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2) ;\n> Function Scan on kseb_geometry_trace_with_barrier_partition (cost=0.25..10.25 rows=1000 width=169) (actual time=21870.311..21870.344 rows=389 loops=1)\n> Buffers: shared hit=774945\n> Settings: search_path = '\"$user\", public, topology'\n> Planning Time: 0.089 ms\n> Execution Time: 21870.406 ms\n\n\n", "msg_date": "Thu, 16 Jul 2020 12:03:38 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same query taking less time in low configuration machine" }, { "msg_contents": "Hi Justin,\n\n I am pasting once again the output of low end server , explain result\nand shared_buffer size of high end machine.\n\n-bash-4.2$ psql -p 5422\npsql (12.3)\nType \"help\" for help.\n\npostgres=# \\c IPDS_KSEB;\nYou are now connected to database \"IPDS_KSEB\" as user \"postgres\".\nIPDS_KSEB=# set track_io_timing TO on;\nSET\nIPDS_KSEB=# explain (analyze,buffers, settings) select * from\nkseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n;\n\n QUERY 
PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on kseb_geometry_trace_with_barrier_partition\n (cost=0.25..10.25 rows=1000 width=169) (actual time=22762.767..22762.800\nrows=389 loops=1)\n Buffers: shared hit=775445 read=2371\n I/O Timings: read=1061.060\n Settings: search_path = '\"$user\", public, topology'\n Planning Time: 0.091 ms\n Execution Time: 22781.896 ms\n(6 rows)\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\n*shared_buffers = 10GB * # min 128kB\n # (change requires restart)\n#huge_pages = try # on, off, or try\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\n#max_prepared_transactions = 0 # zero disables the feature\n # (change requires restart)\n# Caution: it is not advisable to set max_prepared_transactions nonzero\nunless\n# you actively intend to use prepared transactions.\nwork_mem = 10MB # min 64kB\nmaintenance_work_mem = 2GB # min 1MB\n#autovacuum_work_mem = -1 # min 1MB, or -1 to use\nmaintenance_work_mem\n#max_stack_depth = 2MB # min 100kB\n#shared_memory_type = mmap # the default is the first option\n # supported by the operating system:\n # mmap\n # sysv\n # windows\n # (change requires restart)\ndynamic_shared_memory_type = posix # the default is the first option\n # supported by the operating system:\n # posix\n\n\n\nOn Thu, Jul 16, 2020 at 10:33 PM Justin Pryzby <[email protected]> wrote:\n\n> On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> > I have two machines - one with 8GB RAM & 4core CPU and the other with\n> 64GB\n> > Ram & 24 core CPU. 
Both machines have the same DB (Postgres 12 + Postgis\n>\n> It looks like they're returning different number of rows, so definitely\n> not the\n> same DB.\n>\n> Also, they're performing about the same now...\n>\n> It looks like you didn't set shared_buffers, even for a machine with 64GB\n> RAM.\n> I think it's unusual to keep the default.\n>\n> On Thu, Jul 16, 2020 at 10:21:35PM +0530, Vishwa Kalyankar wrote:\n> > Both the db's configurations are not same, I have tuned the db in both\n> > machines according to https://pgtune.leopard.in.ua/#/\n>\n> It looks like your low-end machine has no settings at all ?\n> Did you forget to restart the server or use SET instead of ALTER SYSTEM\n> SET ?\n>\n> > IPDS_KSEB=# set track_io_timing TO on;\n> > IPDS_KSEB=# explain (analyze,buffers, settings) select * from\n> kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2);\n> >\n> > Function Scan on kseb_geometry_trace_with_barrier_partition\n> > (cost=0.25..10.25 rows=1000 width=169) (actual\n> time=24708.020..24708.048 rows=254 loops=1)\n> > Buffers: shared hit=254235 read=1484\n> > I/O Timings: read=827.509\n> > Settings: effective_cache_size = '30GB', effective_io_concurrency =\n> '2', max_parallel_workers = '24', max_parallel_workers_per_gather = '4',\n> search_path = '\"$user\", public, topology', work_mem = '10MB'\n> > Planning Time: 0.064 ms\n> > Execution Time: 24772.587 ms\n> >\n> > Low End Machine\n> > IPDS_KSEB=# explain (analyze,buffers, settings) select * from\n> kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2)\n> ;\n> > Function Scan on kseb_geometry_trace_with_barrier_partition\n> (cost=0.25..10.25 rows=1000 width=169) (actual time=21870.311..21870.344\n> rows=389 loops=1)\n> > Buffers: shared hit=774945\n> > Settings: search_path = '\"$user\", public, topology'\n> > Planning Time: 0.089 ms\n> > Execution Time: 21870.406 ms\n>\n\nHi Justin,    I am pasting once again the output of low end server , explain result and shared_buffer size of high end machine.-bash-4.2$ psql -p 5422psql (12.3)Type \"help\" for help.postgres=# \\c IPDS_KSEB;You are now connected to database \"IPDS_KSEB\" as user \"postgres\".IPDS_KSEB=# set track_io_timing TO on;SETIPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2) ;                                                                       QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------- Function Scan on kseb_geometry_trace_with_barrier_partition  (cost=0.25..10.25 rows=1000 width=169) (actual time=22762.767..22762.800 rows=389 loops=1)   Buffers: shared hit=775445 read=2371   I/O Timings: read=1061.060 Settings: search_path = '\"$user\", public, topology' Planning Time: 0.091 ms Execution Time: 22781.896 ms(6 rows)#------------------------------------------------------------------------------# RESOURCE USAGE (except WAL)#------------------------------------------------------------------------------# - Memory -shared_buffers = 10GB                   # min 128kB                                        # (change requires restart)#huge_pages = try                       # on, off, or try                                        # (change requires restart)#temp_buffers = 8MB                     # min 800kB#max_prepared_transactions = 0          # zero disables the feature                                     
   # (change requires restart)# Caution: it is not advisable to set max_prepared_transactions nonzero unless# you actively intend to use prepared transactions.work_mem = 10MB                         # min 64kBmaintenance_work_mem = 2GB              # min 1MB#autovacuum_work_mem = -1               # min 1MB, or -1 to use maintenance_work_mem#max_stack_depth = 2MB                  # min 100kB#shared_memory_type = mmap              # the default is the first option                                        # supported by the operating system:                                        #   mmap                                        #   sysv                                        #   windows                                        # (change requires restart)dynamic_shared_memory_type = posix      # the default is the first option                                        # supported by the operating system:                                        #   posixOn Thu, Jul 16, 2020 at 10:33 PM Justin Pryzby <[email protected]> wrote:On Thu, Jul 16, 2020 at 09:13:45PM +0530, Vishwa Kalyankar wrote:\n> I have two machines - one with 8GB RAM & 4core CPU and the other with 64GB\n> Ram & 24 core CPU.  Both machines have the same DB (Postgres 12 + Postgis\n\nIt looks like they're returning different number of rows, so definitely not the\nsame DB.\n\nAlso, they're performing about the same now...\n\nIt looks like you didn't set shared_buffers, even for a machine with 64GB RAM.\nI think it's unusual to keep the default.\n\nOn Thu, Jul 16, 2020 at 10:21:35PM +0530, Vishwa Kalyankar wrote:\n> Both the db's configurations are not same, I have tuned the db in both\n> machines according to https://pgtune.leopard.in.ua/#/\n\nIt looks like your low-end machine has no settings at all ?\nDid you forget to restart the server or use SET instead of ALTER SYSTEM SET ?\n\n> IPDS_KSEB=# set track_io_timing TO on;\n> IPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2);\n> \n>  Function Scan on kseb_geometry_trace_with_barrier_partition\n>  (cost=0.25..10.25 rows=1000 width=169) (actual time=24708.020..24708.048 rows=254 loops=1)\n>    Buffers: shared hit=254235 read=1484\n>    I/O Timings: read=827.509\n>  Settings: effective_cache_size = '30GB', effective_io_concurrency = '2', max_parallel_workers = '24', max_parallel_workers_per_gather = '4', search_path = '\"$user\", public, topology', work_mem = '10MB'\n>  Planning Time: 0.064 ms\n>  Execution Time: 24772.587 ms\n> \n> Low End Machine\n> IPDS_KSEB=# explain (analyze,buffers, settings) select * from kseb_geometry_trace_with_barrier_partition(5,'kottarakara_version',437,'htline',2) ;\n>  Function Scan on kseb_geometry_trace_with_barrier_partition  (cost=0.25..10.25 rows=1000 width=169) (actual time=21870.311..21870.344 rows=389 loops=1)\n>    Buffers: shared hit=774945\n>  Settings: search_path = '\"$user\", public, topology'\n>  Planning Time: 0.089 ms\n>  Execution Time: 21870.406 ms", "msg_date": "Fri, 17 Jul 2020 13:29:33 +0530", "msg_from": "Vishwa Kalyankar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Same query taking less time in low configuration machine" } ]
[ { "msg_contents": "Hello,\n\nA description of what you are trying to achieve and what results you expect:\nOur database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. Every aggregation is stored in a separate partition:\n\nDays: ..._yYYYYmMMd (base data)\nWeeks: ..._yYYYYmMMw (aggregated all weeks of the month)\nmonth: ..._yYYYYmMM (aggregated month)\netc.\n\n\nOur problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables are analyzed and pg_stats looks reasonable IMHO.\n\n\nPostgreSQL version number you are running:\npostgres=# SELECT version();\n version\n------------------------------------------------------------------------------------------------------------------\n PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit\n(1 Zeile)\n\npostgres=# SELECT name, current_setting(name), source\npostgres-# FROM pg_settings\npostgres-# WHERE source NOT IN ('default', 'override');\npostgres=# SELECT name, current_setting(name), source\n FROM pg_settings\n WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n--------------------------------+-----------------------------------------+----------------------\n application_name | psql | client\n checkpoint_completion_target | 0.9 | configuration file\n client_encoding | UTF8 | client\n cluster_name | 12/main | configuration file\n DateStyle | ISO, DMY | configuration file\n default_text_search_config | pg_catalog.german | configuration file\n dynamic_shared_memory_type | posix | configuration file\n effective_cache_size | 6GB | configuration file\n effective_io_concurrency | 200 | configuration file\n enable_partitionwise_aggregate | on | configuration file\n enable_partitionwise_join | on | configuration file\n external_pid_file | /var/run/postgresql/12-main.pid | configuration file\n lc_messages | de_DE.UTF-8 | configuration file\n lc_monetary | de_DE.UTF-8 | configuration file\n lc_numeric | de_DE.UTF-8 | configuration file\n lc_time | de_DE.UTF-8 | configuration file\n listen_addresses | * | configuration file\n log_line_prefix | %m [%p] %q%u@%d | configuration file\n log_timezone | Etc/UTC | configuration file\n maintenance_work_mem | 512MB | configuration file\n max_connections | 300 | configuration file\n max_parallel_workers | 2 | configuration file\n max_stack_depth | 2MB | environment variable\n max_wal_size | 2GB | configuration file\n max_worker_processes | 2 | configuration file\n min_wal_size | 256MB | configuration file\n port | 5432 | configuration file\n random_page_cost | 1.1 | configuration file\n shared_buffers | 2GB | configuration file\n ssl | on | configuration file\n ssl_cert_file | /etc/ssl/certs/ssl-cert-snakeoil.pem | configuration file\n ssl_key_file | /etc/ssl/private/ssl-cert-snakeoil.key | configuration file\n stats_temp_directory | /var/run/postgresql/12-main.pg_stat_tmp | configuration file\n temp_buffers | 256MB | configuration file\n TimeZone | Etc/UTC | configuration file\n unix_socket_directories | /var/run/postgresql | configuration file\n work_mem | 128MB | configuration file\n(37 Zeilen)\n\nOperating system and version:\nLinux dev 5.4.44-2-pve #1 SMP PVE 
5.4.44-2 (Wed, 01 Jul 2020 16:37:57 +0200) x86_64 GNU/Linux\n\nOn a quad core virtualized machine with SSD storage and 16GB RAM.\n\n\nWhat program you're using to connect to PostgreSQL:\npsql and IntelliJ\n\nI'm trying to gather as much information as possible and focus just on one of the two tables (the problem persists in both though):\n\n-------------------------------------------------------------------------------------------------------\n\nStucture:\n\n\nCREATE TABLE location_statistics\n(\n daterange daterange NOT NULL,\n spatial_feature_id INTEGER,\n visitor_profile_id INTEGER,\n activity_type_combination_id INTEGER,\n activity_chain_id INTEGER NOT NULL,\n visitors REAL,\n dwell_time INTEGER,\n travel_time INTEGER,\n n INTEGER NOT NULL DEFAULT 1,\n\n PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n activity_chain_id),\n FOREIGN KEY (daterange) REFERENCES dateranges (daterange) ON DELETE CASCADE ON UPDATE CASCADE,\n FOREIGN KEY (spatial_feature_id) REFERENCES spatial_features (id) ON DELETE CASCADE ON UPDATE CASCADE,\n FOREIGN KEY (visitor_profile_id) REFERENCES visitor_profiles (id) ON DELETE CASCADE ON UPDATE CASCADE,\n FOREIGN KEY (activity_type_combination_id) REFERENCES activity_type_combinations (id) ON DELETE RESTRICT ON UPDATE CASCADE,\n FOREIGN KEY (activity_chain_id) REFERENCES activity_chains (id) ON DELETE CASCADE ON UPDATE CASCADE\n) PARTITION BY LIST (daterange);\n\n\n\n\n-------------------------------------------------------------------------------------------------------\n\nCreating of partitions:\n\n\nCREATE OR REPLACE FUNCTION create_partition_tables(additional_dates TEXT[] = NULL)\n RETURNS VOID\n VOLATILE\n LANGUAGE plpgsql\nAS\n$$\nDECLARE\n new_partition RECORD;\nBEGIN\n\n FOR new_partition IN\n (\n SELECT for_values_str,\n master_table,\n partition_name\n FROM resolve_existing_partitions((additional_dates))\n WHERE NOT existing\n )\n LOOP\n\n EXECUTE ' CREATE TABLE '\n || new_partition.partition_name\n || ' PARTITION OF '\n || new_partition.master_table\n || ' FOR VALUES IN (' || new_partition.for_values_str || ')';\n\n RAISE NOTICE 'Partition % for % created',new_partition.partition_name, new_partition.master_table;\n END LOOP;\nEND\n$$;\n\n\n-------------------------------------------------------------------------------------------------------\n\nSize of table:\n\nSELECT schemaname,relname,n_live_tup\nFROM pg_stat_user_tables\nwhere relname like 'location_statistics_y2019m03%'\nORDER BY n_live_tup DESC;\n\nschemaname relname n_live_tup\nmobility_insights location_statistics_y2019m03d 23569853\nmobility_insights location_statistics_y2019m03w 19264373\nmobility_insights location_statistics_y2019m03 18105295\n\n\n\n-------------------------------------------------------------------------------------------------------\n\n\n\nselect * from pg_stats\nwhere tablename = 'location_statistics_y2019m03w';\n\nschemaname tablename attname inherited null_frac avg_width n_distinct most_common_vals most_common_freqs histogram_bounds correlation most_common_elems most_common_elem_freqs elem_count_histogram\nmobility_insights location_statistics_y2019m03w daterange false 0 14 -1\nmobility_insights location_statistics_y2019m03w spatial_feature_id false 0 4 600 
{12675,7869,7867,7892,7915,7963,12677,12683,12237,7909,7868,9478,7914,11309,7913,7911,12509,9510,7962,10547,9559,10471,11782,10590,9552,10554,9527,10488,12680,9546,11330,11409,9595,12293,10845,11469,10531,10467,9525,7927,11115,10541,10544,9509,9515,10637,10486,10859,9703,9591,11195,11657,7878,7938,7910,9560,9565,9532,11016,12435,12525,9578,7973,9558,10536,12650,9516,9547,7871,10537,10923,10812,12546,9574,12454,9511,10435,11840,7926,12540,8187,10469,7935,9504,9536,11203,7964,9484,10534,10538,12391,10888,8237,9501,9517,12516,10927,11102,7985,10527} {0.11813333630561829,0.06599999964237213,0.03723333403468132,0.031433332711458206,0.027033332735300064,0.023233333602547646,0.022333333268761635,0.0212333332747221,0.021166667342185974,0.02083333395421505,0.02033333294093609,0.0201666671782732,0.02006666734814644,0.019200000911951065,0.018833333626389503,0.01823333278298378,0.01510000042617321,0.014333332888782024,0.013633333146572113,0.013399999588727951,0.01146666705608368,0.011300000362098217,0.011233333498239517,0.011033332906663418,0.009666666388511658,0.009233333170413971,0.008433333598077297,0.007966666482388973,0.007966666482388973,0.007466666866093874,0.007300000172108412,0.007199999876320362,0.006566666532307863,0.006500000134110451,0.005799999926239252,0.00570000009611249,0.005166666582226753,0.004833333194255829,0.004766666796058416,0.004666666500270367,0.00423333328217268,0.0041333334520459175,0.004100000020116568,0.003966666758060455,0.0038333332631736994,0.0037666666321456432,0.003700000001117587,0.0035000001080334187,0.003433333244174719,0.0033666666131466627,0.0033333334140479565,0.003100000089034438,0.002933333395048976,0.00286666676402092,0.00283333333209157,0.00283333333209157,0.0026666666381061077,0.0024666667450219393,0.0024333333130925894,0.0024333333130925894,0.0024333333130925894,0.0023333332501351833,0.002266666619107127,0.002266666619107127,0.002266666619107127,0.002266666619107127,0.002233333420008421,0.002233333420008421,0.002199999988079071,0.002199999988079071,0.002199999988079071,0.002166666556149721,0.002166666556149721,0.002133333357051015,0.002099999925121665,0.0020666667260229588,0.0020666667260229588,0.0020666667260229588,0.002033333294093609,0.002033333294093609,0.0019333333475515246,0.0018666667165234685,0.0018333332845941186,0.0018333332845941186,0.0018333332845941186,0.0017999999690800905,0.0017666666535660625,0.0017666666535660625,0.0017666666535660625,0.0017666666535660625,0.0017666666535660625,0.0017333333380520344,0.0017000000225380063,0.0017000000225380063,0.0016666667070239782,0.0016333333915099502,0.0015999999595806003,0.0015999999595806003,0.001500000013038516,0.001500000013038516} {7870,7891,7906,7917,7954,7965,7966,7969,7974,7977,7979,7984,7986,8132,8171,8194,9479,9482,9488,9491,9493,9496,9498,9499,9503,9507,9512,9513,9520,9521,9524,9526,9530,9534,9537,9541,9544,9554,9562,9570,9573,9577,9581,9583,9586,9599,9675,9736,10436,10442,10450,10464,10482,10491,10495,10510,10513,10515,10516,10523,10529,10535,10539,10543,10553,10575,10602,10718,10756,10816,10882,10902,10928,11008,11025,11064,11158,11276,11316,11382,11486,11538,11602,11673,11731,11766,11775,11835,11906,12052,12088,12130,12277,12356,12383,12397,12408,12471,12545,12627,12678} 0.11252771\nmobility_insights location_statistics_y2019m03w visitor_profile_id false 0 4 9806 
{3081,3114,2739,3642,2445,103,1625,1874,4005,2282,1550,3792,5564,750,1526,4427,2993,4881,1498,2682,5345,5601,8210,1613,2407,5019,1944,2266,3690,4529,4354,1218,11605,4126,5453,11698,11988,4207,6935,559,9151,12020,12048,12006,12049,3695,4874,5596,5945,6740,1366,7186,101,2026,5694,9152,4446,5788,8892,9365,11619,12027,871,5943,7567,7936,7939,8653,437,3971,5733,5961,7872,2728,3358,4154,4605,6187,9057,1967,4625,4837,5784,8910,1482,2036,6268,7557,8835,9,576,933,1686,2145,2229,3000,3692,4645,4666,5386} {0.0024666667450219393,0.0023333332501351833,0.002300000051036477,0.002199999988079071,0.0020666667260229588,0.002033333294093609,0.002033333294093609,0.002033333294093609,0.0019666666630655527,0.0019333333475515246,0.0019000000320374966,0.0018666667165234685,0.0018666667165234685,0.0018333332845941186,0.0018333332845941186,0.0018333332845941186,0.0017999999690800905,0.0017333333380520344,0.0016333333915099502,0.0015999999595806003,0.0015666666440665722,0.0015666666440665722,0.0015666666440665722,0.0015333333285525441,0.001500000013038516,0.001466666697524488,0.00143333338201046,0.00143333338201046,0.00143333338201046,0.00143333338201046,0.00139999995008111,0.001366666634567082,0.001366666634567082,0.0013333333190530539,0.0013333333190530539,0.0013333333190530539,0.0013333333190530539,0.0012666666880249977,0.0012666666880249977,0.0012333333725109696,0.0012333333725109696,0.0012333333725109696,0.0012333333725109696,0.0012000000569969416,0.0012000000569969416,0.0011666666250675917,0.0011666666250675917,0.0011666666250675917,0.0011666666250675917,0.0011666666250675917,0.0011333333095535636,0.0011333333095535636,0.0010999999940395355,0.0010999999940395355,0.0010999999940395355,0.0010999999940395355,0.0010666666785255075,0.0010666666785255075,0.0010666666785255075,0.0010666666785255075,0.0010666666785255075,0.0010666666785255075,0.0010333333630114794,0.0010333333630114794,0.0010333333630114794,0.0010333333630114794,0.0010333333630114794,0.0010333333630114794,0.0010000000474974513,0.0010000000474974513,0.0010000000474974513,0.0010000000474974513,0.0010000000474974513,0.0009666666737757623,0.0009666666737757623,0.0009666666737757623,0.0009666666737757623,0.0009666666737757623,0.0009666666737757623,0.0009333333582617342,0.0009333333582617342,0.0009333333582617342,0.0009333333582617342,0.0009333333582617342,0.0008999999845400453,0.0008999999845400453,0.0008999999845400453,0.0008999999845400453,0.0008999999845400453,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172,0.0008666666690260172} {1,89,222,365,497,628,786,886,987,1108,1200,1320,1459,1584,1677,1812,1953,2080,2183,2306,2436,2581,2690,2798,2871,3018,3138,3294,3391,3525,3678,3783,3917,3992,4097,4253,4362,4442,4564,4693,4788,4897,5045,5157,5285,5414,5520,5630,5722,5843,5941,6041,6217,6444,6683,6892,7117,7330,7544,7730,7906,8076,8273,8471,8645,8789,8931,9063,9227,9378,9519,9610,9657,10667,10998,11483,11760,11960,12181,12262,12336,12440,12519,12629,13608,13782,13974,14116,14278,15670,16742,17892,18814,20657,23107,26119,31244,39466,59333,68728,83799} -0.03462254\nmobility_insights location_statistics_y2019m03w activity_type_combination_id false 0 4 145 {6,1,8,10,59,28,5,2,67,14,4,11,12,3,9,133,23,90,25,45,92,32,213,37,50,182,71,89,29,33,46,195,61,84,43,17,20,106,18,160,95,137,15,125,203,214,206,218,107,105,143,85,211,27,38,221,126,79,135,217,175,128,42,108,120,159,208,76,130} 
{0.15360000729560852,0.14463333785533905,0.11789999902248383,0.06403333693742752,0.056533332914114,0.04636666551232338,0.035466667264699936,0.033533334732055664,0.02669999934732914,0.026133334264159203,0.023900000378489494,0.0203000009059906,0.019866665825247765,0.019233332946896553,0.01876666583120823,0.011966666206717491,0.01126666646450758,0.010066666640341282,0.009533333592116833,0.009499999694526196,0.00860000029206276,0.008366666734218597,0.0077666668221354485,0.00706666661426425,0.006899999920278788,0.006866666488349438,0.006599999964237213,0.006466666702181101,0.00566666666418314,0.004999999888241291,0.004533333238214254,0.004533333238214254,0.004333333112299442,0.0041333334520459175,0.004000000189989805,0.0033333334140479565,0.0031999999191612005,0.0031333332881331444,0.0025333333760499954,0.002166666556149721,0.0020666667260229588,0.0016333333915099502,0.0015333333285525441,0.00143333338201046,0.0013000000035390258,0.0013000000035390258,0.0012666666880249977,0.0012666666880249977,0.0011666666250675917,0.0010333333630114794,0.0010333333630114794,0.0009333333582617342,0.0009333333582617342,0.0007666666642762721,0.000733333348762244,0.000733333348762244,0.000699999975040555,0.0006666666595265269,0.0006666666595265269,0.0006666666595265269,0.0006333333440124989,0.0006000000284984708,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.0004333333345130086,0.0004333333345130086,0.000366666674381122,0.000366666674381122} {22,26,36,54,54,54,64,64,70,77,87,88,96,97,98,98,101,112,114,114,118,119,127,127,131,138,145,148,148,151,151,153,153,155,155,155,163,164,164,165,166,169,169,170,170,173,176,180,184,187,187,187,194,194,201,201,201,219,227,227,228,231,231,232,233,233,251,256,272,274,286,303,303,315,324,490} 0.027344994\nmobility_insights location_statistics_y2019m03w activity_chain_id false 0 4 75638 {5161,5206,5162,5184,5195,5323,5397,5815,6530,5216,7603,6545,5153,6332,6981,7432,5818,5415,5596,7121,7531,5359,5618,5967,6393,7884,14611,21593,355,5325,5986,6407,23475,5213,6039,6385,6621,6849,9910,10026,11114,15860,164,165,200,5165,5262,5890,6043,6231,6659,6950,7251,7284,8228,8456,8923,9212,9851,9886,12203,12983,14685,16472,21550,43,271,307,992,5220,5243,5481,5482,5509,5516,5532,5603,5621,5757,5917,6026,6063,6139,6146,6210,6214,6464,6499,6671,6728,6758,6889,7010,7173,7643,8032,8081,8290,9676,10875} 
{0.002133333357051015,0.0017999999690800905,0.00143333338201046,0.0011333333095535636,0.0010333333630114794,0.000699999975040555,0.000699999975040555,0.000699999975040555,0.0006666666595265269,0.0006333333440124989,0.0006333333440124989,0.0005000000237487257,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.0004333333345130086,0.000366666674381122,0.000366666674381122,0.000366666674381122,0.000366666674381122,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503} {16,3832,5935,6980,8254,9534,11187,13024,15280,17910,20278,23752,27191,30933,35166,39736,44912,84588,87937,91731,96462,98710,99978,101481,102822,104232,105743,107178,108599,109896,111309,112882,114244,115636,117258,118951,120523,122033,123500,124882,126475,127916,129472,131137,132751,134476,135966,137506,139103,140651,142235,143923,145489,147256,148803,150223,151772,153331,155019,156745,158504,160131,161734,163321,164954,166505,168223,169899,171482,173009,174615,176117,177796,179595,181180,182924,184591,186335,188152,189909,191799,193278,194998,196949,198845,200761,202607,204272,206366,208030,209664,211457,213181,214854,216416,218122,219912,221852,223592,225495,227061} -0.13226064\nmobility_insights location_statistics_y2019m03w visitors false 0 4 141556 {2.231728,2.515927,1.690992,2.716124,1.666667,4.006526,4.547657,2.685691,2.042206,2.0369,2.907664,3.202489,3.321924,5,2.21855,0.357143,1.781995,2.773392,2.430318,3.585561,0.251593,0.294118,0.333333,0.416667,0.47619,1.997838,2.901269,3.665649,0.083864,0.166667,0.228721,0.278577,0.284229,0.3125,0.375056,0.833333,2.434593,2.616505,2.744186,2.95092,3.26703,3.7,3.959243} 
{0.0008999999845400453,0.0007999999797903001,0.0006333333440124989,0.0005666666547767818,0.0005333333392627537,0.0005333333392627537,0.0005333333392627537,0.0005000000237487257,0.0004333333345130086,0.00039999998989515007,0.00039999998989515007,0.000366666674381122,0.000366666674381122,0.000366666674381122,0.00033333332976326346,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.00026666666963137686,0.00026666666963137686,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503,0.00019999999494757503} {2e-06,0.00196,0.003629,0.00529,0.00717,0.00941,0.011622,0.013755,0.016387,0.019173,0.022388,0.02522,0.028369,0.031243,0.03431,0.037177,0.04011,0.043427,0.046591,0.04976,0.052685,0.05561,0.058774333,0.061956,0.065245,0.068608,0.072032,0.0754775,0.078632,0.081756,0.084959,0.088382,0.091822,0.095209,0.098459,0.102495,0.106105,0.109757,0.113244,0.116785,0.120467,0.124337,0.128564,0.132854,0.136804,0.140986,0.145268,0.149572,0.153727,0.157896,0.162,0.166096,0.170477,0.174326,0.178639,0.182968,0.187422,0.191749,0.19638,0.200433,0.205387,0.209918,0.214573,0.218993,0.224327,0.229155,0.234454,0.239658,0.244123,0.249223,0.254667,0.260309,0.265922,0.271871,0.277339,0.283247,0.289332,0.296549,0.303343,0.309744,0.317473,0.325838,0.335268,0.344108,0.352898,0.363003,0.3743145,0.387081,0.401563,0.420192,0.440096,0.461973,0.490929,0.528797,0.574014,0.652174,0.7746,1.056453,1.79342,2.771285,14.935622} 0.010959746\nmobility_insights location_statistics_y2019m03w dwell_time false 0 4 45441 {84600,82800,3600,4500,5400,8100,22500,24300,19800,85499,6300,7200,20700,23400,28800,3722,9000,15300,21600,10800,9900,10802,17100,79200,85500,11700,13500,14400,18900,25200,12600,16200,18000,83700,900,3672,3785,3885,5395,5803,5882,7227,27000,27900,43200,80100} {0.002199999988079071,0.0010000000474974513,0.0008333333535119891,0.0007999999797903001,0.0006666666595265269,0.0006000000284984708,0.0006000000284984708,0.0006000000284984708,0.0005666666547767818,0.0005666666547767818,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.0004666666791308671,0.00039999998989515007,0.00039999998989515007,0.00039999998989515007,0.00039999998989515007,0.000366666674381122,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.00033333332976326346,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.0003000000142492354,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00026666666963137686,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356} 
{901,2191,3633,3768,3915,4052,4205,4339,4491,4656,4827,5001,5185,5397,5634,5858,6082,6301,6551,6807,7083,7382,7726,8047,8396,8763,9194,9619,9983,10422,10807,11222,11641,12068,12558,13041,13493,13974,14398,14902,15401,15892,16457,16919,17431,17930,18442,18975,19508,20077,20672,21238,21709,22227,22779,23430,24002,24556,25239,26011,26758,27547,28312,29178,29973,30780,31617,32484,33460,34584,35745,36979,38294,39664,41203,42960,44652,46476,48492,50223,52200,54421,56359,58815,61658,64739,67538,70443,73060,75490,77594,79466,80991,82197,83188,83999,84836,85406,85738,86091,86400} -0.066642396\nmobility_insights location_statistics_y2019m03w travel_time false 0 4 11756 {0,5,2700,900,3600,1800,3599,10,425,810,1680,2245} {0.5346666574478149,0.0006666666595265269,0.0005000000237487257,0.0004666666791308671,0.00039999998989515007,0.000366666674381122,0.00026666666963137686,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356,0.00023333333956543356} {2,139,279,423,551,648,752,852,937,1024,1112,1195,1286,1375,1451,1540,1631,1760,1861,1958,2058,2162,2264,2367,2470,2575,2683,2805,2912,3013,3146,3270,3373,3513,3604,3709,3824,3951,4067,4205,4328,4437,4532,4681,4841,5002,5147,5291,5452,5602,5763,5924,6060,6223,6390,6554,6719,6917,7109,7294,7490,7698,7904,8095,8299,8537,8724,8982,9242,9536,9775,10066,10363,10632,10933,11273,11643,12014,12368,12776,13176,13580,14021,14450,14922,15462,15934,16468,17097,17693,18538,19456,20254,21245,22403,23780,25470,27648,31072,36178,62080} 0.31811374\nmobility_insights location_statistics_y2019m03w n false 0 4 7 {1,2,3,4,5,6,7} {0.9218999743461609,0.04879999905824661,0.014600000344216824,0.0075333332642912865,0.0038666666951030493,0.0026000000070780516,0.000699999975040555} 0.85469824\n\n\n-------------------------------------------------------------------------------------------------------\n\nQuery:\n\nEXPLAIN ( ANALYZE , BUFFERS )\n\nSELECT sum(visitors * n)\nFROM location_statistics st\nWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n AND spatial_feature_id = 12675\n\nQUERY PLAN\nAggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n Buffers: shared hit=67334\n -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n Buffers: shared hit=67334\nPlanning Time: 0.082 ms\nExecution Time: 143.095 ms\n\n\nFor completeness sake:\n\n\nEXPLAIN (ANALYZE , BUFFERS)\nSELECT sum(visitors * n)\nFROM location_statistics_y2019m03w st\nWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n AND spatial_feature_id = 12675\n\nQUERY PLAN\nAggregate (cost=2.79..2.80 rows=1 width=8) (actual time=156.304..156.305 rows=1 loops=1)\n Buffers: shared hit=66602 read=732\n -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.194..111.464 rows=516277 loops=1)\n Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n Buffers: shared hit=66602 read=732\nPlanning Time: 0.058 ms\nExecution Time: 156.326 ms\n\n\nAs can be seen, the planner predicts one row to be returned, although it should be around 3% (11% of the entries are of the given ID, which are distributed over 4 weeks = date ranges) of the table. 
Using the partition table directly, does not change this fact.\n\nHow can I solve this problem?\n\nThank you very much in advance.\n\nJulian P. Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 | 8010 Graz | www.invenium.io
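\n\nPS: purely as a sketch and not something I have verified yet (the target value 1000 is just an assumption, and I do not know whether a higher target changes anything for a range-typed column like daterange): the per-column statistics target on the two partition-key columns of the affected partition could be raised and the partition re-analyzed, e.g.\n\n-- sketch: raise the statistics target on the partition key columns of one weekly partition\nALTER TABLE location_statistics_y2019m03w\n    ALTER COLUMN daterange SET STATISTICS 1000,\n    ALTER COLUMN spatial_feature_id SET STATISTICS 1000;\n-- re-analyze so the new target is actually used when gathering statistics\nANALYZE location_statistics_y2019m03w;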
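\n\n(1000 is only an example value; PostgreSQL accepts targets up to 10000, and the new target only takes effect once ANALYZE has been run again.)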
", "msg_date": "Tue, 21 Jul 2020 13:09:22 +0000", "msg_from": "Julian Wolf <[email protected]>", "msg_from_op": true, "msg_subject": "Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "On Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables are analyzed and pg_stats looks reasonable IMHO.\n\n> daterange daterange NOT NULL,\n> spatial_feature_id INTEGER,\n\n> Aggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n> Buffers: shared hit=67334\n> -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n> Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n> Buffers: shared hit=67334\n> \n> As can be seen, the planner predicts one row to be returned, although it should be around 3% (11% of the entries are of the given ID, which are distributed over 4 weeks = date ranges) of the table. Using the partition table directly, does not change this fact.\n\nIs there a correlation between daterange and spacial_feature_id ?\n\nAre the estimates good if you query on *only* daterange? spacial_feature_id ?\n\nMaybe what you need is:\nhttps://www.postgresql.org/docs/devel/sql-createstatistics.html\nCREATE STATISTICS stats (dependencies) ON daterange, spacial_feature_id FROM location_statistics;\nANALYZE location_statistics;\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 21 Jul 2020 12:27:13 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "Hello Justin,\n\n\nthank you very much for your fast response.\n\n> Is there a correlation between daterange and spacial_feature_id ?\n\nI am not entirely sure, what you mean by that. Basically, no, they are not correlated - spatial features are places on a map, date ranges are time periods. But, as they are both part of a primary key in this particular table, they are correlated in some way as to be a part of uniquely identifying a row.\n\n\n> Are the estimates good if you query on *only* daterange? 
spacial_feature_id ?\nUnfortunately no, they are not:\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\nEXPLAIN (ANALYZE , BUFFERS)\nSELECT sum(visitors * n)\nFROM location_statistics st\nWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n\nQUERY PLAN\nAggregate (cost=2.79..2.80 rows=1 width=8) (actual time=1143.393..1143.393 rows=1 loops=1)\n Buffers: shared hit=304958\n -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.024..931.645 rows=4296639 loops=1)\n Index Cond: (daterange = '[2019-03-04,2019-03-11)'::daterange)\n Buffers: shared hit=304958\nPlanning Time: 0.080 ms\nExecution Time: 1143.421 ms\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\nEXPLAIN (ANALYZE , BUFFERS)\nSELECT sum(visitors * n)\nFROM location_statistics_y2019m03w st\nWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n\nQUERY PLAN\nAggregate (cost=2.79..2.80 rows=1 width=8) (actual time=1126.819..1126.820 rows=1 loops=1)\n Buffers: shared hit=304958\n -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.023..763.852 rows=4296639 loops=1)\n Index Cond: (daterange = '[2019-03-04,2019-03-11)'::daterange)\n Buffers: shared hit=304958\nPlanning Time: 0.046 ms\nExecution Time: 1126.845 ms\n\n------------------------------------------------------------------------------------------------------------------------------------------------\nChecking only on the spatial_feature is not the same query, as the table contains 4 different date ranges. Furthermore, there is no index for this operation. Because of that, I can only invoke this query on one partition, otherwise the query would take days.\n\nEXPLAIN (ANALYZE , BUFFERS)\nSELECT sum(visitors * n)\nFROM location_statistics_y2019m03w st\nWHERE spatial_feature_id = 12675\n\nQUERY PLAN\nFinalize Aggregate (cost=288490.25..288490.26 rows=1 width=8) (actual time=1131.593..1131.593 rows=1 loops=1)\n Buffers: shared hit=40156 read=139887\n -> Gather (cost=288490.03..288490.24 rows=2 width=8) (actual time=1131.499..1148.872 rows=2 loops=1)\n Workers Planned: 2\n Workers Launched: 1\n Buffers: shared hit=40156 read=139887\n -> Partial Aggregate (cost=287490.03..287490.04 rows=1 width=8) (actual time=1118.578..1118.579 rows=1 loops=2)\n Buffers: shared hit=40156 read=139887\n -> Parallel Seq Scan on location_statistics_y2019m03w st (cost=0.00..280378.27 rows=948235 width=8) (actual time=3.544..1032.899 rows=1134146 loops=2)\n Filter: (spatial_feature_id = 12675)\n Rows Removed by Filter: 8498136\n Buffers: shared hit=40156 read=139887\nPlanning Time: 0.218 ms\nJIT:\n Functions: 12\n Options: Inlining false, Optimization false, Expressions true, Deforming true\n Timing: Generation 0.929 ms, Inlining 0.000 ms, Optimization 0.426 ms, Emission 6.300 ms, Total 7.655 ms\nExecution Time: 1191.741 ms\n\nThe estimates seem to be good though.\n\nThanks in Advance\n\nJulian\n\n[http://www.invenium.io/images/invenium_triangle_64.png]\nJulian P. 
Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 | 8010 Graz | www.invenium.io\n\n________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: Tuesday, July 21, 2020 7:27 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables are analyzed and pg_stats looks reasonable IMHO.\n\n> daterange daterange NOT NULL,\n> spatial_feature_id INTEGER,\n\n> Aggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n> Buffers: shared hit=67334\n> -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n> Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n> Buffers: shared hit=67334\n>\n> As can be seen, the planner predicts one row to be returned, although it should be around 3% (11% of the entries are of the given ID, which are distributed over 4 weeks = date ranges) of the table. Using the partition table directly, does not change this fact.\n\nIs there a correlation between daterange and spacial_feature_id ?\n\nAre the estimates good if you query on *only* daterange? spacial_feature_id ?\n\nMaybe what you need is:\nhttps://www.postgresql.org/docs/devel/sql-createstatistics.html\nCREATE STATISTICS stats (dependencies) ON daterange, spacial_feature_id FROM location_statistics;\nANALYZE location_statistics;\n\n--\nJustin\n\n\n\n\n\n\n\n\nHello Justin,\n\n\n\n\n\n\n\nthank you very much for your fast response. \n\n\n\n\n\n> Is there a correlation between daterange and spacial_feature_id ?\n\n\n\n\n\nI am not entirely sure, what you mean by that. Basically, no, they are not correlated - spatial features are places on a map, date ranges are time periods. But, as they are both part of a primary key in this particular table, they are correlated in some way\n as to be a part of uniquely identifying a row. \n\n\n\n\n\n\n> Are the estimates good if you query on *only* daterange?  
spacial_feature_id ?\n\nUnfortunately no, they are not:\n\n\n\n\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n\nEXPLAIN (ANALYZE , BUFFERS)SELECT sum(visitors * n)FROM location_statistics stWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n\n\nQUERY PLAN\n\nAggregate  (cost=2.79..2.80 rows=1 width=8) (actual time=1143.393..1143.393 rows=1 loops=1)\n\n  Buffers: shared hit=304958\n\n  ->  Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st  (cost=0.56..2.78 rows=1 width=8) (actual time=0.024..931.645 rows=4296639 loops=1)\n\n        Index Cond: (daterange = '[2019-03-04,2019-03-11)'::daterange)\n\n        Buffers: shared hit=304958\n\nPlanning Time: 0.080 ms\n\nExecution Time: 1143.421 ms\n\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\n\nEXPLAIN (ANALYZE , BUFFERS)SELECT sum(visitors * n)FROM location_statistics_y2019m03w stWHERE st.daterange = '[2019-03-04,2019-03-11)'::DATERANGE\n\n\nQUERY PLAN\n\nAggregate  (cost=2.79..2.80 rows=1 width=8) (actual time=1126.819..1126.820 rows=1 loops=1)\n\n  Buffers: shared hit=304958\n\n  ->  Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st  (cost=0.56..2.78 rows=1 width=8) (actual time=0.023..763.852 rows=4296639 loops=1)\n\n        Index Cond: (daterange = '[2019-03-04,2019-03-11)'::daterange)\n\n        Buffers: shared hit=304958\n\nPlanning Time: 0.046 ms\n\nExecution Time: 1126.845 ms\n\n\n\n\n\n\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------\nChecking only on the spatial_feature is not the same query, as the table contains 4 different date ranges. Furthermore, there is no index for this operation. Because of that, I can only invoke this query on one partition, otherwise the query would take days.\n\n\n\nEXPLAIN (ANALYZE , BUFFERS)SELECT sum(visitors * n)FROM location_statistics_y2019m03w stWHERE spatial_feature_id = 12675\n\n\n\n\n\nQUERY PLAN\n\nFinalize Aggregate  (cost=288490.25..288490.26 rows=1 width=8) (actual time=1131.593..1131.593 rows=1 loops=1)\n\n  Buffers: shared hit=40156 read=139887\n\n  ->  Gather  (cost=288490.03..288490.24 rows=2 width=8) (actual time=1131.499..1148.872 rows=2 loops=1)\n\n        Workers Planned: 2\n\n        Workers Launched: 1\n\n        Buffers: shared hit=40156 read=139887\n\n        ->  Partial Aggregate  (cost=287490.03..287490.04 rows=1 width=8) (actual time=1118.578..1118.579 rows=1 loops=2)\n\n              Buffers: shared hit=40156 read=139887\n\n              ->  Parallel Seq Scan on location_statistics_y2019m03w st  (cost=0.00..280378.27 rows=948235 width=8) (actual time=3.544..1032.899 rows=1134146 loops=2)\n\n                    Filter: (spatial_feature_id = 12675)\n\n                    Rows Removed by Filter: 8498136\n\n                    Buffers: shared hit=40156 read=139887\n\nPlanning Time: 0.218 ms\n\nJIT:\n\n  Functions: 12\n\n  Options: Inlining false, Optimization false, Expressions true, Deforming true\n\n  Timing: Generation 0.929 ms, Inlining 0.000 ms, Optimization 0.426 ms, Emission 6.300 ms, Total 7.655 ms\n\nExecution Time: 1191.741 ms\n\n\n\n\n\n\nThe estimates seem to be good though.\n\n\n\n\nThanks in Advance\n\n\n\n\nJulian\n\n\n\n\n\n\n\n\n\n\n\n\n\nJulian P. 
Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 \n| 8010 Graz | www.invenium.io\n\n\n\n\n\n\n\n\n\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: Tuesday, July 21, 2020 7:27 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n \n\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables\n are analyzed and pg_stats looks reasonable IMHO.\n\n>     daterange                daterange NOT NULL,\n>     spatial_feature_id           INTEGER,\n\n> Aggregate  (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n>   Buffers: shared hit=67334\n>   ->  Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st  (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n>         Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n>         Buffers: shared hit=67334\n> \n> As can be seen, the planner predicts one row to be returned, although it should be around 3% (11% of the entries are of the given ID, which are distributed over 4 weeks = date ranges) of the table. Using the partition table directly, does not change this\n fact.\n\nIs there a correlation between daterange and spacial_feature_id ?\n\nAre the estimates good if you query on *only* daterange?  spacial_feature_id ?\n\nMaybe what you need is:\nhttps://www.postgresql.org/docs/devel/sql-createstatistics.html\nCREATE STATISTICS stats (dependencies) ON daterange, spacial_feature_id FROM location_statistics;\nANALYZE location_statistics;\n\n-- \nJustin", "msg_date": "Wed, 22 Jul 2020 06:33:17 +0000", "msg_from": "Julian Wolf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "On Wed, Jul 22, 2020 at 06:33:17AM +0000, Julian Wolf wrote:\n> Hello Justin,\n> \n> \n> thank you very much for your fast response.\n> \n> > Is there a correlation between daterange and spacial_feature_id ?\n> \n> I am not entirely sure, what you mean by that. Basically, no, they are not correlated - spatial features are places on a map, date ranges are time periods. But, as they are both part of a primary key in this particular table, they are correlated in some way as to be a part of uniquely identifying a row.\n> \n> \n> > Are the estimates good if you query on *only* daterange? 
spacial_feature_id ?\n> Unfortunately no, they are not:\n\nI checked and found that range types don't have \"normal\" statistics, and in\nparticular seem to use a poor ndistinct estimate..\n\n /* Estimate that non-null values are unique */\n stats->stadistinct = -1.0 * (1.0 - stats->stanullfrac);\n\nYou could try to cheat and hardcode a different ndistinct that's \"less wrong\"\nby doing something like this:\n\nALTER TABLE t ALTER a SET (N_DISTINCT=-0.001); ANALYZE t;\n\nMaybe a better way is to create an index ON: lower(range),upper(range)\nAnd then query: WHERE (lower(a),upper(a)) = (1,112);\n\nSince you'd be storing the values separately in the index anyway, maybe this\nmeans that range types won't work well for you for primary, searchable columns.\n\nBut if you're stuck with the schema, another kludge, if you want to do\nsomething extra weird, is to remove statistics entirely by disabling\nautoanalyze on the table and then manually run ANALYZE(columns) where columns\ndoesn't include the range column. You'd have to remove the stats:\n\nbegin; DELETE FROM pg_statistic s USING pg_attribute a WHERE s.staattnum=a.attnum AND s.starelid=a.attrelid AND starelid='t'::regclass AND a.attname='a';\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 22 Jul 2020 07:28:47 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "On Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Hello,\n> \n> A description of what you are trying to achieve and what results you expect:\n> Our database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. Every aggregation is stored in a separate partition:\n> \n...\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables are analyzed and pg_stats looks reasonable IMHO.\n...\n> PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n> activity_chain_id),\n...\n> ) PARTITION BY LIST (daterange);\n\n> schemaname relname n_live_tup\n> mobility_insights location_statistics_y2019m03d 23569853\n> mobility_insights location_statistics_y2019m03w 19264373\n> mobility_insights location_statistics_y2019m03 18105295\n\n> Aggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n> Buffers: shared hit=67334\n> -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n> Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n> Buffers: shared hit=67334\n\nI guess this isn't actually the problem query, since it takes 143ms and not\ndozens of seconds. 
I don't know what is the problem query, but maybe it might\nhelp to create an new index on spatial_feature_id, which could be scanned\nrather than scanning the unique index.\n\nAlso, if daterange *and* spatial_feature_id are always *both* included, then\nthis might work:\n\npostgres=# CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 22 Jul 2020 09:40:16 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "Hi,\n\nThank you very much for your answers and sorry for the delayed response.\n\n\n\n> I checked and found that range types don't have \"normal\" statistics, and in\n> particular seem to use a poor ndistinct estimate..\n\n> /* Estimate that non-null values are unique */\n> stats->stadistinct = -1.0 * (1.0 - stats->stanullfrac);\n\nI investigated this idea and played around with the n_distinct value and you are absolutely right, the statistics do behave strangely with range types. Even creating statistics\n\n(CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;)\n\ndoesn't change the fact.\n\nI do get that range types were created with GIST and range comparison in mind, but as they are a really neat way to describe not only a date but also granularity dependency (i.e. \"this data represent this exact week\"), it would be really nice, if these data types would work with primary keys and thus b-tree too.\n\nIn my case, I switched the daterange type with a BIGINT, which holds the exact same information on byte level. This value can then be immutably converted back to daterange and vice versa. This solved the problem for me.\n\nThank you very much for your time and help.\n\nBest Regards\n\n\n[http://www.invenium.io/images/invenium_triangle_64.png]\nJulian P. Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 | 8010 Graz | www.invenium.io\n\n________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, July 22, 2020 4:40 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Hello,\n>\n> A description of what you are trying to achieve and what results you expect:\n> Our database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. Every aggregation is stored in a separate partition:\n>\n...\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. 
All tables are analyzed and pg_stats looks reasonable IMHO.\n...\n> PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n> activity_chain_id),\n...\n> ) PARTITION BY LIST (daterange);\n\n> schemaname relname n_live_tup\n> mobility_insights location_statistics_y2019m03d 23569853\n> mobility_insights location_statistics_y2019m03w 19264373\n> mobility_insights location_statistics_y2019m03 18105295\n\n> Aggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n> Buffers: shared hit=67334\n> -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n> Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n> Buffers: shared hit=67334\n\nI guess this isn't actually the problem query, since it takes 143ms and not\ndozens of seconds. I don't know what is the problem query, but maybe it might\nhelp to create an new index on spatial_feature_id, which could be scanned\nrather than scanning the unique index.\n\nAlso, if daterange *and* spatial_feature_id are always *both* included, then\nthis might work:\n\npostgres=# CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;\n\n--\nJustin\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nThank you very much for your answers and sorry for the delayed response.\n\n\n\n\n\n\n\n\n> I checked and found that range types don't have \"normal\" statistics, and in\n> particular seem to use a poor ndistinct estimate..\n\n>               /* Estimate that non-null values are unique */\n>                stats->stadistinct = -1.0 * (1.0 - stats->stanullfrac);\n\n\n\n\n\nI investigated this idea and played around with the n_distinct value and you are absolutely right, the statistics do behave strangely with range types. Even creating statistics\n\n\n\n\n\n\n(CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;)\n\n\n\n\n\n\ndoesn't change the fact. \n\n\n\n\n\nI do get that range types were created with GIST and range comparison in mind, but as they are a really neat way to describe not only a date but also granularity dependency (i.e. \"this data represent this exact week\"), it would be really nice, if these data\n types would work with primary keys and thus b-tree too.\n\n\n\n\nIn my case, I switched the daterange type with a BIGINT, which holds the exact same information on byte level. This value can then be immutably converted back to daterange and vice versa. This solved the problem for me.\n\n\n\n\nThank you very much for your time and help.\n\n\n\n\nBest Regards\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJulian P. Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 \n| 8010 Graz | www.invenium.io\n\n\n\n\n\n\n\n\n\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, July 22, 2020 4:40 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n \n\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Hello,\n> \n> A description of what you are trying to achieve and what results you expect:\n> Our database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. 
Every aggregation is stored\n in a separate partition:\n> \n...\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables\n are analyzed and pg_stats looks reasonable IMHO.\n...\n>     PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n>                  activity_chain_id),\n...\n> ) PARTITION BY LIST (daterange);\n\n> schemaname relname n_live_tup\n> mobility_insights location_statistics_y2019m03d 23569853\n> mobility_insights location_statistics_y2019m03w 19264373\n> mobility_insights location_statistics_y2019m03 18105295\n\n> Aggregate  (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n>   Buffers: shared hit=67334\n>   ->  Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st  (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n>         Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n>         Buffers: shared hit=67334\n\nI guess this isn't actually the problem query, since it takes 143ms and not\ndozens of seconds.  I don't know what is the problem query, but maybe it might\nhelp to create an new index on spatial_feature_id, which could be scanned\nrather than scanning the unique index.\n\nAlso, if daterange *and* spatial_feature_id are always *both* included, then\nthis might work:\n\npostgres=# CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;\n\n-- \nJustin", "msg_date": "Wed, 29 Jul 2020 06:17:06 +0000", "msg_from": "Julian Wolf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "Hi Justin,\n\nthank you very much for your help and sorry for the late answer.\n\nAfter testing around with your suggestions, it actually was the daterange type which caused all the problems. Messing around with the statistics value improved performance drastically but did not solve the problem. We decided to replace the daterange type with a BIGINT and calculate the \"id\" of the daterange by just using the BIGINT (2x 4 bytes) representation of the daterange. Thus, it can be transformed in both directions immutably.\n\n\nCREATE OR REPLACE FUNCTION to_daterange_id(daterange DATERANGE)\n RETURNS BIGINT\n IMMUTABLE\n LANGUAGE plpgsql\nAS\n$$\nBEGIN\n return (extract(EPOCH FROM lower(daterange))::BIGINT << 32) |\n extract(EPOCH FROM upper(daterange))::BIGINT;\nend;\n\n--------------------------------------------------------------------------------------------------------------\nCREATE OR REPLACE FUNCTION to_daterange(daterange_id BIGINT)\n RETURNS DATERANGE\n IMMUTABLE\n LANGUAGE plpgsql\nAS\n$$\nBEGIN\n RETURN daterange(to_timestamp(daterange_id >> 32)::DATE, to_timestamp(daterange_id & x'FFFFFFFF'::BIGINT)::DATE);\nEND;\n$$;\n\nSo there is no daterange object messing up the primary key index. Your other suggestions sadly didn't work, as the daterange was the partition key of the table too, this field was inevitably the first criterion in all queries and thus overruled every other index.\n\nWith that said and done, it would be nice, if daterange objects could be used in unique indexes too. They are a great way to identify data which represents a week, month, etc. 
worth of data (similar to a two-column-date representation).\n\nThank you very much again for your time and help\n\nJulian\n\n[http://www.invenium.io/images/invenium_triangle_64.png]\nJulian P. Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 | 8010 Graz | www.invenium.io\n\n________________________________\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, July 22, 2020 4:40 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Hello,\n>\n> A description of what you are trying to achieve and what results you expect:\n> Our database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. Every aggregation is stored in a separate partition:\n>\n...\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables are analyzed and pg_stats looks reasonable IMHO.\n...\n> PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n> activity_chain_id),\n...\n> ) PARTITION BY LIST (daterange);\n\n> schemaname relname n_live_tup\n> mobility_insights location_statistics_y2019m03d 23569853\n> mobility_insights location_statistics_y2019m03w 19264373\n> mobility_insights location_statistics_y2019m03 18105295\n\n> Aggregate (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n> Buffers: shared hit=67334\n> -> Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n> Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n> Buffers: shared hit=67334\n\nI guess this isn't actually the problem query, since it takes 143ms and not\ndozens of seconds. I don't know what is the problem query, but maybe it might\nhelp to create an new index on spatial_feature_id, which could be scanned\nrather than scanning the unique index.\n\nAlso, if daterange *and* spatial_feature_id are always *both* included, then\nthis might work:\n\npostgres=# CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;\n\n--\nJustin\n\n\n\n\n\n\n\n\nHi Justin,\n\n\n\n\nthank you very much for your help and sorry for the late answer. \n\n\n\n\n\nAfter testing around with your suggestions, it actually was the daterange type which caused all the problems. Messing around with the statistics value improved performance drastically but did not solve the problem. We decided to replace the daterange type with\n a BIGINT and calculate the \"id\" of the daterange by just using the BIGINT (2x 4 bytes) representation of the daterange. 
Thus, it can be transformed in both directions immutably.\n\n\n\n\nCREATE OR REPLACE FUNCTION to_daterange_id(daterange DATERANGE) RETURNS BIGINT IMMUTABLE LANGUAGE plpgsqlAS$$BEGIN return (extract(EPOCH FROM lower(daterange))::BIGINT << 32) | extract(EPOCH FROM upper(daterange))::BIGINT;end;\n\n\n\n--------------------------------------------------------------------------------------------------------------CREATE OR REPLACE FUNCTION to_daterange(daterange_id BIGINT) RETURNS DATERANGE IMMUTABLE LANGUAGE plpgsqlAS$$BEGIN RETURN daterange(to_timestamp(daterange_id >> 32)::DATE, to_timestamp(daterange_id & x'FFFFFFFF'::BIGINT)::DATE);END;$$;\n\n\n\n\nSo there is no daterange object messing up the primary key index. Your other suggestions sadly didn't work, as the daterange was the partition key of the table too, this field was inevitably the first criterion in all queries and thus overruled every other\n index. \n\n\n\n\n\nWith that said and done, it would be nice, if daterange objects could be used in unique indexes too. They are a great way to identify data which represents a week, month, etc. worth of data (similar to a two-column-date representation).\n\n\n\n\n\n\nThank you very much again for your time and help\n\n\n\n\nJulian\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nJulian P. Wolf | Invenium Data Insights GmbH\[email protected] | +43 664 88 199 013\nHerrengasse 28 \n| 8010 Graz | www.invenium.io\n\n\n\n\n\n\n\n\n\n\n\nFrom: Justin Pryzby <[email protected]>\nSent: Wednesday, July 22, 2020 4:40 PM\nTo: Julian Wolf <[email protected]>\nCc: pgsql-performance Postgres Mailing List <[email protected]>\nSubject: Re: Too few rows expected by Planner on partitioned tables\n \n\n\nOn Tue, Jul 21, 2020 at 01:09:22PM +0000, Julian Wolf wrote:\n> Hello,\n> \n> A description of what you are trying to achieve and what results you expect:\n> Our database is growing on a daily basis by about 2.5million rows per table (2 at the moment). Because of that, we decided to partition the data, especially, as we are pre-aggregating the data for weeks, months, quarters and years. Every aggregation is stored\n in a separate partition:\n> \n...\n> Our problem is, that the planner always predicts one row to be returned, although only a part of the primary key is queried. This problem exceeds feasibility of performance rapidly - a query only involving a few days already takes dozens of seconds. All tables\n are analyzed and pg_stats looks reasonable IMHO.\n...\n>     PRIMARY KEY ( daterange, spatial_feature_id, visitor_profile_id, activity_type_combination_id,\n>                  activity_chain_id),\n...\n> ) PARTITION BY LIST (daterange);\n\n> schemaname relname n_live_tup\n> mobility_insights location_statistics_y2019m03d 23569853\n> mobility_insights location_statistics_y2019m03w 19264373\n> mobility_insights location_statistics_y2019m03 18105295\n\n> Aggregate  (cost=2.79..2.80 rows=1 width=8) (actual time=143.073..143.073 rows=1 loops=1)\n>   Buffers: shared hit=67334\n>   ->  Index Scan using location_statistics_y2019m03w_pkey on location_statistics_y2019m03w st  (cost=0.56..2.78 rows=1 width=8) (actual time=0.026..117.284 rows=516277 loops=1)\n>         Index Cond: ((daterange = '[2019-03-04,2019-03-11)'::daterange) AND (spatial_feature_id = 12675))\n>         Buffers: shared hit=67334\n\nI guess this isn't actually the problem query, since it takes 143ms and not\ndozens of seconds.  
I don't know what is the problem query, but maybe it might\nhelp to create an new index on spatial_feature_id, which could be scanned\nrather than scanning the unique index.\n\nAlso, if daterange *and* spatial_feature_id are always *both* included, then\nthis might work:\n\npostgres=# CREATE STATISTICS t_stats (mcv) ON daterange,spatial_feature_id FROM t ; ANALYZE t;\n\n-- \nJustin", "msg_date": "Wed, 26 Aug 2020 06:54:39 +0000", "msg_from": "Julian Wolf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" }, { "msg_contents": "On Wed, Aug 26, 2020, 1:37 AM Julian Wolf <[email protected]> wrote:\n\n> Hi Justin,\n>\n> thank you very much for your help and sorry for the late answer.\n>\n> After testing around with your suggestions, it actually was the daterange\n> type which caused all the problems. Messing around with the statistics\n> value improved performance drastically but did not solve the problem. We\n> decided to replace the daterange type with a BIGINT and calculate the \"id\"\n> of the daterange by just using the BIGINT (2x 4 bytes) representation of\n> the daterange. Thus, it can be transformed in both directions immutably.\n>\n> CREATE OR REPLACE FUNCTION to_daterange_id(daterange DATERANGE)\n> RETURNS BIGINT\n> IMMUTABLE\n> LANGUAGE plpgsql\n> AS\n> $$\n> BEGIN\n> return (extract(EPOCH FROM lower(daterange))::BIGINT << 32) |\n> extract(EPOCH FROM upper(daterange))::BIGINT;\n> end;\n>\n> --------------------------------------------------------------------------------------------------------------\n> CREATE OR REPLACE FUNCTION to_daterange(daterange_id BIGINT)\n> RETURNS DATERANGE\n> IMMUTABLE\n> LANGUAGE plpgsql\n> AS\n> $$\n> BEGIN\n> RETURN daterange(to_timestamp(daterange_id >> 32)::DATE, to_timestamp(daterange_id & x'FFFFFFFF'::BIGINT)::DATE);\n> END;\n> $$;\n>\n>\nYou might want to consider changing that language declaration to SQL.\n\n>\n\nOn Wed, Aug 26, 2020, 1:37 AM Julian Wolf <[email protected]> wrote:\n\n\nHi Justin,\n\n\n\n\nthank you very much for your help and sorry for the late answer. \n\n\n\n\n\nAfter testing around with your suggestions, it actually was the daterange type which caused all the problems. Messing around with the statistics value improved performance drastically but did not solve the problem. We decided to replace the daterange type with\n a BIGINT and calculate the \"id\" of the daterange by just using the BIGINT (2x 4 bytes) representation of the daterange. Thus, it can be transformed in both directions immutably.\n\n\n\n\nCREATE OR REPLACE FUNCTION to_daterange_id(daterange DATERANGE) RETURNS BIGINT IMMUTABLE LANGUAGE plpgsqlAS$$BEGIN return (extract(EPOCH FROM lower(daterange))::BIGINT << 32) | extract(EPOCH FROM upper(daterange))::BIGINT;end;\n\n\n\n--------------------------------------------------------------------------------------------------------------CREATE OR REPLACE FUNCTION to_daterange(daterange_id BIGINT) RETURNS DATERANGE IMMUTABLE LANGUAGE plpgsqlAS$$BEGIN RETURN daterange(to_timestamp(daterange_id >> 32)::DATE, to_timestamp(daterange_id & x'FFFFFFFF'::BIGINT)::DATE);END;$$;You might want to consider changing that language declaration to SQL.", "msg_date": "Wed, 26 Aug 2020 07:32:41 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too few rows expected by Planner on partitioned tables" } ]
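A note on the fix that closed this thread: the daterange <-> BIGINT conversion can also be written as LANGUAGE sql functions, as the last reply suggests. The sketch below is illustrative only: it keeps the bit-packing scheme described above (epoch of the lower bound in the high 32 bits, epoch of the upper bound in the low 32 bits), renames the parameter to dr to avoid shadowing the daterange type name, and has not been tested against the original schema.

CREATE OR REPLACE FUNCTION to_daterange_id(dr DATERANGE)
    RETURNS BIGINT
    IMMUTABLE
    LANGUAGE sql
AS $$
    -- pack the two bound dates (as epoch seconds) into one 64-bit value
    SELECT (extract(EPOCH FROM lower(dr))::BIGINT << 32)
         | extract(EPOCH FROM upper(dr))::BIGINT;
$$;

CREATE OR REPLACE FUNCTION to_daterange(daterange_id BIGINT)
    RETURNS DATERANGE
    IMMUTABLE
    LANGUAGE sql
AS $$
    -- unpack the high/low 32 bits back into the two bound dates
    SELECT daterange(to_timestamp(daterange_id >> 32)::DATE,
                     to_timestamp(daterange_id & x'FFFFFFFF'::BIGINT)::DATE);
$$;

Usage stays the same as with the plpgsql originals, e.g. WHERE daterange_id = to_daterange_id('[2019-03-04,2019-03-11)'::daterange), which leaves the planner with an ordinary bigint equality to estimate.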
[ { "msg_contents": "Hi,\n\nI have problem with my Postgres production server.\nSometimes there a lot of Multiexact* LWlocks:\nMultiXactOffsetControlLock\nMultiXactMemberControlLock\nthat significantly impact on users queries performance.\nHow to find what is the reason of Multiexact* LWlocks?\n\nHardware:\n110 CPU\n620 GB RAM\nCentOS 7.4.1708\nkernel 3.10.0-693.5.2.el7.x86_64\nDatabase:\nsize 4,5 TB\nserver_version 9.6\ncheckpoint_timeout 1800 s\neffective_cache_size 449280 MB\nmaintenance_work_mem 4096 MB\nmax_connections 440\nshared_buffers 149760 MB\nwork_mem 190 MB\n\nAny ideas?\nBest regards\nHi,I have problem with my Postgres production server. Sometimes there a lot of Multiexact* LWlocks:MultiXactOffsetControlLockMultiXactMemberControlLockthat significantly impact on users queries performance.How to find what is the reason of Multiexact* LWlocks?Hardware:110 CPU620 GB RAMCentOS 7.4.1708kernel 3.10.0-693.5.2.el7.x86_64Database:size 4,5 TBserver_version 9.6checkpoint_timeout 1800 seffective_cache_size 449280 MBmaintenance_work_mem 4096 MBmax_connections\t440shared_buffers 149760 MBwork_mem 190 MBAny ideas?Best regards", "msg_date": "Wed, 29 Jul 2020 14:18:44 +0000", "msg_from": "Czarek <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with Multixacts LWLocks" } ]
[ { "msg_contents": "Hi all,\n\nI need help in full text search optimization for hstore type. I added my\nquery to explain.depesz, you can check the query and also i added explain\nanalyze result in this link: https://explain.depesz.com/s/QT1e\n\ntable_ord.ops column type is hstore. I couldn't find the effective index\nthat would reduce the run time of the query.\n\nWhen I tried to add an gin index to the hstore column, I got the following\nerror:\n\ncreate index on table_ord USING gin (ops);\nERROR: index row size 2728 exceeds maximum 2712 for index \" table_ord_\nops_idx\"\n\nHow can I fix this query ?\n\nBest Regards,\n\nBurhan Akbulut\nDBA - Cooksoft\n\nHi all,I need help in full text search optimization for hstore type. I added my query to explain.depesz, you can check the query and also i added explain analyze result in this link: https://explain.depesz.com/s/QT1etable_ord.ops column type is hstore. I couldn't find the effective index that would reduce the run time of the query.When I tried to add an gin index to the hstore column, I got the following error:create index on table_ord USING gin (ops);ERROR:  index row size 2728 exceeds maximum 2712 for index \"\n\ntable_ord_\n\nops_idx\"How can I fix this query ?Best Regards,Burhan AkbulutDBA - Cooksoft", "msg_date": "Tue, 11 Aug 2020 23:52:53 +0300", "msg_from": "Burhan Akbulut <[email protected]>", "msg_from_op": true, "msg_subject": "Hstore index for full text search" }, { "msg_contents": "Hash Cond: (o.courier_id = cc.id)\nFilter: (((o.tracker_code)::text ~~* '%1654323%'::text) OR\n((table_cus.name)::text\n~~* '%1654323%'::text) OR ((au.username)::text ~~ '%1654323%'::text) OR\n((o.source)::text ~~* '%1654323%'::text) OR ((o.ops -> 'shop'::text) ~~*\n'%1654323%'::text) OR ((o.ops -> 'camp_code'::text) ~~* '%1654323%'::text)\nOR ((city.name)::text ~~* '%1654323%'::text) OR ((co.name)::text ~~*\n'%1654323%'::text) OR ((o.tr_code)::text ~~* '%1654323%'::text) OR ((o.ops\n? 'shipping_company'::text) AND ((o.ops -> 'shipping_company'::text) ~~*\n'%1654323%'::text)) OR ((cc.name)::text ~~* '%1654323%'::text))\n\n\nAll those OR conditions on different tables and fields seems like it will\nbe unlikely that the planner will do anything with the index you are trying\nto create (for this query).\n\nOn the error, I came across discussions on dba.stackexchange.com\nreferencing a limit of about 1/3 of the page size (8192) for every\nkey because of it being a btree underneath. It could be one or more of your\nkeys in ops (like shop, camp_code, and shipping_company) is much longer\nthan those examples shown in the query.\n\nHash Cond: (o.courier_id = cc.id)Filter: (((o.tracker_code)::text ~~* '%1654323%'::text) OR ((table_cus.name)::text ~~* '%1654323%'::text) OR ((au.username)::text ~~ '%1654323%'::text) OR ((o.source)::text ~~* '%1654323%'::text) OR ((o.ops -> 'shop'::text) ~~* '%1654323%'::text) OR ((o.ops -> 'camp_code'::text) ~~* '%1654323%'::text) OR ((city.name)::text ~~* '%1654323%'::text) OR ((co.name)::text ~~* '%1654323%'::text) OR ((o.tr_code)::text ~~* '%1654323%'::text) OR ((o.ops ? 
'shipping_company'::text) AND ((o.ops -> 'shipping_company'::text) ~~* '%1654323%'::text)) OR ((cc.name)::text ~~* '%1654323%'::text))All those OR conditions on different tables and fields seems like it will be unlikely that the planner will do anything with the index you are trying to create (for this query).On the error, I came across discussions on dba.stackexchange.com referencing a limit of about 1/3 of the page size (8192) for every key because of it being a btree underneath. It could be one or more of your keys in ops (like shop, camp_code, and shipping_company) is much longer than those examples shown in the query.", "msg_date": "Tue, 11 Aug 2020 15:55:16 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hstore index for full text search" }, { "msg_contents": "Michael Lewis <[email protected]> writes:\n> Hash Cond: (o.courier_id = cc.id)\n> Filter: (((o.tracker_code)::text ~~* '%1654323%'::text) OR\n> ((table_cus.name)::text\n> ~~* '%1654323%'::text) OR ((au.username)::text ~~ '%1654323%'::text) OR\n> ((o.source)::text ~~* '%1654323%'::text) OR ((o.ops -> 'shop'::text) ~~*\n> '%1654323%'::text) OR ((o.ops -> 'camp_code'::text) ~~* '%1654323%'::text)\n> OR ((city.name)::text ~~* '%1654323%'::text) OR ((co.name)::text ~~*\n> '%1654323%'::text) OR ((o.tr_code)::text ~~* '%1654323%'::text) OR ((o.ops\n> ? 'shipping_company'::text) AND ((o.ops -> 'shipping_company'::text) ~~*\n> '%1654323%'::text)) OR ((cc.name)::text ~~* '%1654323%'::text))\n\n> All those OR conditions on different tables and fields seems like it will\n> be unlikely that the planner will do anything with the index you are trying\n> to create (for this query).\n\nA GIN index on an hstore column only provides the ability to search for\nexact matches to hstore key strings. There are a few bells and whistles,\nlike the ability to AND or OR such conditions. But basically it's just an\nexact-match engine, and it doesn't index the hstore's data values at all\n(which is why the implementors weren't too concerned about having a length\nlimit on the index entries). There is 0 chance of this index type being\nuseful for what the OP wants to do.\n\nGiven these examples, I'd think about setting up a collection of pg_trgm\nindexes on the specific hstore keys you care about, ie something like\n\nCREATE INDEX ON mytable USING GIST ((ops -> 'camp_code') gist_trgm_ops);\nCREATE INDEX ON mytable USING GIST ((ops -> 'shipping_company') gist_trgm_ops);\n...\n\nwhich'd allow indexing queries like\n\n... WHERE (ops -> 'camp_code') LIKE '%1654323%'\n OR (ops -> 'shipping_company') LIKE '%1654323%'\n\nI'm not sure how far this will get you, though; if there's a whole lot\nof different keys of interest, maintaining a separate index for each\none is probably too much overhead. Another point is that you will only\nget an indexscan if *every* OR'd clause matches some index. The example\nquery looks sufficiently unstructured that that might be hard to ensure.\n\nI kind of wonder whether this data design is actually a good idea.\nIt doesn't seem to match your querying style terribly well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Aug 2020 18:46:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hstore index for full text search" }, { "msg_contents": "On Tue, Aug 11, 2020 at 4:46 PM Tom Lane <[email protected]> wrote:\n\n> A GIN index on an hstore column only provides the ability to search for\n> exact matches to hstore key strings. 
There are a few bells and whistles,\n> like the ability to AND or OR such conditions. But basically it's just an\n> exact-match engine, and it doesn't index the hstore's data values at all\n> (which is why the implementors weren't too concerned about having a length\n> limit on the index entries). There is 0 chance of this index type being\n> useful for what the OP wants to do.\n>\n\nThanks for sharing. More like json path ops and not the full key and value.\nInteresting.\n\n\n> Another point is that you will only\n> get an indexscan if *every* OR'd clause matches some index. The example\n> query looks sufficiently unstructured that that might be hard to ensure.\n>\n\nDoes this still apply when the where clauses are on several tables and not\njust one?\n\nOn Tue, Aug 11, 2020 at 4:46 PM Tom Lane <[email protected]> wrote:A GIN index on an hstore column only provides the ability to search for\nexact matches to hstore key strings.  There are a few bells and whistles,\nlike the ability to AND or OR such conditions.  But basically it's just an\nexact-match engine, and it doesn't index the hstore's data values at all\n(which is why the implementors weren't too concerned about having a length\nlimit on the index entries).  There is 0 chance of this index type being\nuseful for what the OP wants to do.Thanks for sharing. More like json path ops and not the full key and value. Interesting. Another point is that you will only\nget an indexscan if *every* OR'd clause matches some index.  The example\nquery looks sufficiently unstructured that that might be hard to ensure.Does this still apply when the where clauses are on several tables and not just one?", "msg_date": "Tue, 11 Aug 2020 16:57:40 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hstore index for full text search" }, { "msg_contents": "Michael Lewis <[email protected]> writes:\n> On Tue, Aug 11, 2020 at 4:46 PM Tom Lane <[email protected]> wrote:\n>> Another point is that you will only\n>> get an indexscan if *every* OR'd clause matches some index. The example\n>> query looks sufficiently unstructured that that might be hard to ensure.\n\n> Does this still apply when the where clauses are on several tables and not\n> just one?\n\nYeah. In that case there's no hope of an indexscan at all, since for all\nthe planner knows, the query might match some table rows that don't meet\nany of the conditions mentioned for that table's columns. If you can\nwrite\n\nWHERE (condition-on-t1.a OR condition-on-t1.b OR ...)\n AND (condition-on-t2.x OR condition-on-t2.y OR ...)\n\nthen you've got a chance of making the OR'd conditions into index\nqualifications on t1 or t2 respectively. But if it's\n\nWHERE condition-on-t1.a OR condition-on-t1.b OR ...\n OR condition-on-t2.x OR condition-on-t2.y OR ...\n\nthen you're in for full-table scans. (This is another thing that\nwas bothering me about the data design, though I failed to think\nit through clearly enough to state before.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Aug 2020 19:30:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hstore index for full text search" } ]
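To make the pg_trgm suggestion in this thread concrete, here is a minimal sketch using the table and hstore keys mentioned above (table_ord with the shop, camp_code and shipping_company keys). It assumes the pg_trgm extension can be installed and that substring search is only needed on a few known keys; it is illustrative and not tested against the original schema.

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX ON table_ord USING gist ((ops -> 'shop') gist_trgm_ops);
CREATE INDEX ON table_ord USING gist ((ops -> 'camp_code') gist_trgm_ops);
CREATE INDEX ON table_ord USING gist ((ops -> 'shipping_company') gist_trgm_ops);

-- indexes like these can support per-key substring predicates such as
-- WHERE (ops -> 'camp_code') ILIKE '%1654323%'
--    OR (ops -> 'shipping_company') ILIKE '%1654323%'

As noted in the thread, an OR chain only becomes indexable when every branch has a matching index, so this helps most when the search is restricted to those known keys (gin_trgm_ops is the GIN equivalent of the operator class used here).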
[ { "msg_contents": "Hi. I've got a query that runs fine (~50ms). When I add a \"LIMIT 25\" to\nit though, it takes way longer. The query itself then takes about 4.5\nseconds. And when I do an explain, it takes 90+ seconds for the same query!\n\nExplains and detailed table/view info below. tbl_log has 1.2M records,\ntbl_reference has 550k. This is 9.6.19 on CentOS 6 with PDGG packages.\n\nI know the query itself could be re-written, but it's coming from an ORM,\nso I'm really focused on why the adding a limit is causing such performance\ndegradation, and what to do about it. Any help or insight would be\nappreciated. Also the discrepancy between the actual query and the\nexplain. Thanks!\n\nKen\n\n\n*The good query (no LIMIT):*\n\nagency=> EXPLAIN (ANALYZE,VERBOSE,BUFFERS,TIMING) SELECT * FROM Log WHERE\nlog_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN\nfrom_table='client' THEN to_id END FROM reference WHERE ((from_id_field =\n E'client_id'\n AND from_id = E'34918'\n AND from_table = E'client'\n AND to_table = E'log'\n )\n OR (to_id_field = E'client_id'\n AND to_id = E'34918'\n AND to_table = E'client'\n AND from_table = E'log'\n ))) ORDER BY added_at DESC;\n\n\n QUERY PLAN\n\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------\n Sort (cost=167065.81..168594.77 rows=611586 width=336) (actual\ntime=43.942..46.566 rows=1432 loops=1)\n Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject,\ntbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report,\ntbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by,\ntbl_log.changed_at, tbl_log.sys_log\n Sort Key: tbl_log.added_at DESC\n Sort Method: quicksort Memory: 999kB\n Buffers: shared hit=7026\n -> Nested Loop (cost=4313.36..14216.18 rows=611586 width=336) (actual\ntime=10.837..38.177 rows=1432 loops=1)\n Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject,\ntbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report,\ntbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by,\ntbl_log.changed_at, tbl_log.sys_log\n Buffers: shared hit=7026\n -> HashAggregate (cost=4312.93..4325.68 rows=1275 width=136)\n(actual time=10.802..13.800 rows=1433 loops=1)\n Output: tbl_reference.to_table, tbl_reference.from_id,\ntbl_reference.from_table, tbl_reference.to_id\n Group Key: CASE WHEN (tbl_reference.to_table =\n'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table =\n'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END\n Buffers: shared hit=1288\n -> Bitmap Heap Scan on public.tbl_reference\n (cost=46.69..4309.74 rows=1276 width=136) (actual time=0.747..6.822\nrows=1433 loops=1)\n Output: tbl_reference.to_table, tbl_reference.from_id,\ntbl_reference.from_table, tbl_reference.to_id, CASE WHEN\n(tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN\n(tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE\nNULL::integer END\n Recheck Cond: ((tbl_reference.from_id_field =\n'client_id'::name) OR ((tbl_reference.to_table = 'client'::name) AND\n(tbl_reference.to_id = 34918)))\n Filter: ((NOT tbl_reference.is_deleted) AND\n(((tbl_reference.from_id_field = 
'client_id'::name) AND\n(tbl_reference.from_id = 34918) AND (tbl_reference.from_table =\n'client'::name) AND (tbl_reference.to_table = 'log'::name)) OR\n((tbl_reference.to_id_field = 'client_id'::name) AND (tbl_reference.to_id =\n34918) AND (tbl_reference.to_table = 'client'::na\nme) AND (tbl_reference.from_table = 'log'::name))))\n Rows Removed by Filter: 15\n Heap Blocks: exact=1275\n Buffers: shared hit=1288\n -> BitmapOr (cost=46.69..46.69 rows=1319 width=0)\n(actual time=0.453..0.454 rows=0 loops=1)\n Buffers: shared hit=13\n -> Bitmap Index Scan on\nindex_tbl_reference_from_id_field (cost=0.00..4.43 rows=1 width=0) (actual\ntime=0.025..0.026 rows=0 loops=1)\n Index Cond: (tbl_reference.from_id_field =\n'client_id'::name)\n Buffers: shared hit=3\n -> Bitmap Index Scan on\nindex_tbl_reference_to_table_id (cost=0.00..41.61 rows=1319 width=0)\n(actual time=0.421..0.423 rows=1448 loops=1)\n Index Cond: ((tbl_reference.to_table =\n'client'::name) AND (tbl_reference.to_id = 34918))\n Buffers: shared hit=10\n -> Index Scan using tbl_log_pkey on public.tbl_log\n (cost=0.43..7.75 rows=1 width=336) (actual time=0.007..0.009 rows=1\nloops=1433)\n Output: tbl_log.log_id, tbl_log.log_type_code,\ntbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.written_by,\ntbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at,\ntbl_log.sys_log, tbl_log.shift_report\n Index Cond: (tbl_log.log_id = CASE WHEN\n(tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN\n(tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE\nNULL::integer END)\n Buffers: shared hit=5738\n Planning time: 0.866 ms\n Execution time: 48.915 ms\n(33 rows)\n\n*The bad query (LIMIT):*\n\nagency=> EXPLAIN (ANALYZE,VERBOSE,BUFFERS,TIMING) SELECT * FROM Log WHERE\nlog_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN\nfrom_table='client' THEN to_id END FROM reference WHERE ((from_id_field =\n E'client_id'\n AND from_id = E'34918'\n AND from_table = E'client'\n AND to_table = E'log'\n )\n OR (to_id_field = E'client_id'\n AND to_id = E'34918'\n AND to_table = E'client'\n AND from_table = E'log'\n ))) ORDER BY added_at DESC LIMIT 25;\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------\n Limit (cost=47.11..1329.32 rows=25 width=336) (actual\ntime=47.103..97236.235 rows=25 loops=1)\n Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject,\ntbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report,\ntbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by,\ntbl_log.changed_at, tbl_log.sys_log\n Buffers: shared hit=3820\n -> Nested Loop Semi Join (cost=47.11..31367302.81 rows=611586\nwidth=336) (actual time=47.098..97236.123 rows=25 loops=1)\n Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject,\ntbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report,\ntbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by,\ntbl_log.changed_at, tbl_log.sys_log\n Join Filter: (tbl_log.log_id = CASE WHEN (tbl_reference.to_table =\n'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table =\n'client'::name) THEN 
tbl_reference.to_id ELSE NULL::integer END)\n Rows Removed by Join Filter: 28364477\n Buffers: shared hit=3820\n -> Index Scan Backward using tbl_log_added_at on public.tbl_log\n (cost=0.43..147665.96 rows=1223171 width=336) (actual time=0.016..123.097\nrows=19794 loops=1)\n Output: tbl_log.log_id, tbl_log.log_type_code,\ntbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.written_by,\ntbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at,\ntbl_log.sys_log, tbl_log.shift_report\n Buffers: shared hit=2532\n -> Materialize (cost=46.69..4316.12 rows=1276 width=136) (actual\ntime=0.002..2.351 rows=1433 loops=19794)\n Output: tbl_reference.to_table, tbl_reference.from_id,\ntbl_reference.from_table, tbl_reference.to_id\n Buffers: shared hit=1288\n -> Bitmap Heap Scan on public.tbl_reference\n (cost=46.69..4309.74 rows=1276 width=136) (actual time=0.508..5.594\nrows=1433 loops=1)\n Output: tbl_reference.to_table, tbl_reference.from_id,\ntbl_reference.from_table, tbl_reference.to_id\n Recheck Cond: ((tbl_reference.from_id_field =\n'client_id'::name) OR ((tbl_reference.to_table = 'client'::name) AND\n(tbl_reference.to_id = 34918)))\n Filter: ((NOT tbl_reference.is_deleted) AND\n(((tbl_reference.from_id_field = 'client_id'::name) AND\n(tbl_reference.from_id = 34918) AND (tbl_reference.from_table =\n'client'::name) AND (tbl_reference.to_table = 'log'::name)) OR\n((tbl_reference.to_id_field = 'client_id'::name) AND (tbl_reference.to_id =\n34918) AND (tbl_reference.to_table = 'client'::na\nme) AND (tbl_reference.from_table = 'log'::name))))\n Rows Removed by Filter: 15\n Heap Blocks: exact=1275\n Buffers: shared hit=1288\n -> BitmapOr (cost=46.69..46.69 rows=1319 width=0)\n(actual time=0.313..0.315 rows=0 loops=1)\n Buffers: shared hit=13\n -> Bitmap Index Scan on\nindex_tbl_reference_from_id_field (cost=0.00..4.43 rows=1 width=0) (actual\ntime=0.011..0.013 rows=0 loops=1)\n Index Cond: (tbl_reference.from_id_field =\n'client_id'::name)\n Buffers: shared hit=3\n -> Bitmap Index Scan on\nindex_tbl_reference_to_table_id (cost=0.00..41.61 rows=1319 width=0)\n(actual time=0.296..0.298 rows=1448 loops=1)\n Index Cond: ((tbl_reference.to_table =\n'client'::name) AND (tbl_reference.to_id = 34918))\n Buffers: shared hit=10\n Planning time: 0.650 ms\n Execution time: 97236.582 ms\n(31 rows)\n\nTime: 97238.387 ms\n\n*The bad query, as actual query, not explain:*\n\nagency=> SELECT * FROM Log WHERE log_id IN (SELECT CASE WHEN\nto_table='client' THEN from_id WHEN from_table='client' THEN to_id END FROM\nreference WHERE ((from_id_field = E'client_id'\n AND from_id = E'34918'\n AND from_table = E'client'\n AND to_table = E'log'\n )\n OR (to_id_field = E'client_id'\n AND to_id = E'34918'\n AND to_table = E'client'\n AND from_table = E'log'\n ))) ORDER BY added_at DESC LIMIT 25;\n log_id | log_type_code | subject\n |\n\n\n log_text\n\n | occurred_at | shift_report | written_by | added_by |\n added_at | changed_by | changed_at | 
sys_log\n---------+---------------+---------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------+---------------------+--------------+------------+----------+---------------------+------------+---------------------+---------\n(actual results snipped)\n\n*Time: 4654.190 ms*\n\n*Description of tables and views:*\n\nagency=> \\d+ log\n View \"public.log\"\n Column | Type | Modifiers | Storage |\nDescription\n---------------+--------------------------------+-----------+----------+-------------\n log_id | integer | | plain |\n log_type_code | character varying(10)[] | | extended |\n subject | character varying(80) | | extended |\n log_text | text | | extended |\n occurred_at | timestamp(0) without time zone | | plain |\n shift_report | boolean | | plain |\n written_by | integer | | plain |\n added_by | integer | | plain |\n added_at | timestamp(0) without time zone | | plain |\n changed_by | integer | | plain |\n changed_at | timestamp(0) without time zone | | plain |\n sys_log | text | | extended |\nView definition:\n SELECT tbl_log.log_id,\n tbl_log.log_type_code,\n tbl_log.subject,\n tbl_log.log_text,\n tbl_log.occurred_at,\n tbl_log.shift_report,\n tbl_log.written_by,\n tbl_log.added_by,\n tbl_log.added_at,\n tbl_log.changed_by,\n tbl_log.changed_at,\n tbl_log.sys_log\n FROM tbl_log;\n\nagency=> \\d tbl_log\n Table \"public.tbl_log\"\n Column | Type |\n Modifiers\n---------------+--------------------------------+----------------------------------------------------------\n log_id | integer | not null default\nnextval('tbl_log_log_id_seq'::regclass)\n log_type_code | character varying(10)[] | not null\n subject | character varying(80) | not null\n log_text | text |\n occurred_at | timestamp(0) without time zone |\n written_by | integer | not null\n added_by | integer | not null\n added_at | timestamp(0) without time zone | not null default now()\n changed_by | integer | not null\n changed_at | timestamp(0) without time zone | not null default now()\n sys_log | text |\n shift_report | boolean | default false\nIndexes:\n \"tbl_log_pkey\" PRIMARY KEY, btree (log_id)\n \"index_tbl_log_log_type_code\" btree (log_type_code)\n \"tbl_log_added_at\" btree (added_at)\n \"tbl_log_added_by\" btree (added_by)\n \"tbl_log_added_by2\" btree (added_at DESC)\n \"tbl_log_event_time\" btree ((COALESCE(occurred_at, added_at)))\n \"tbl_log_log_type_code\" btree (log_type_code)\n \"tbl_log_log_type_code_gin\" gin (log_type_code)\n \"tbl_log_occurred_at\" btree (occurred_at)\n \"tbl_log_subject\" btree (subject)\n \"tbl_log_test\" btree (added_at, log_type_code)\n \"tbl_log_test2\" btree (log_type_code, added_at)\n \"tbl_log_written_by\" btree (written_by)\nForeign-key constraints:\n \"tbl_log_added_by_fkey\" FOREIGN KEY (added_by) 
REFERENCES\ntbl_staff(staff_id)\n \"tbl_log_changed_by_fkey\" FOREIGN KEY (changed_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_log_written_by_fkey\" FOREIGN KEY (written_by) REFERENCES\ntbl_staff(staff_id)\nTriggers:\n tbl_log_changed_at_update BEFORE UPDATE ON tbl_log FOR EACH ROW EXECUTE\nPROCEDURE auto_changed_at_update()\n tbl_log_log_chg AFTER INSERT OR DELETE OR UPDATE ON tbl_log FOR EACH\nROW EXECUTE PROCEDURE table_log()\n\nagency=> \\d+ reference\n View \"public.reference\"\n Column | Type | Modifiers | Storage |\nDescription\n-----------------+--------------------------------+-----------+----------+-------------\n reference_id | integer | | plain |\n from_table | name | | plain |\n from_id_field | name | | plain |\n from_id | integer | | plain |\n to_table | name | | plain |\n to_id_field | name | | plain |\n to_id | integer | | plain |\n added_at | timestamp(0) without time zone | | plain |\n added_by | integer | | plain |\n changed_at | timestamp(0) without time zone | | plain |\n changed_by | integer | | plain |\n is_deleted | boolean | | plain |\n deleted_at | timestamp(0) without time zone | | plain |\n deleted_by | integer | | plain |\n deleted_comment | text | | extended |\n sys_log | text | | extended |\nView definition:\n SELECT tbl_reference.reference_id,\n tbl_reference.from_table,\n tbl_reference.from_id_field,\n tbl_reference.from_id,\n tbl_reference.to_table,\n tbl_reference.to_id_field,\n tbl_reference.to_id,\n tbl_reference.added_at,\n tbl_reference.added_by,\n tbl_reference.changed_at,\n tbl_reference.changed_by,\n tbl_reference.is_deleted,\n tbl_reference.deleted_at,\n tbl_reference.deleted_by,\n tbl_reference.deleted_comment,\n tbl_reference.sys_log\n FROM tbl_reference\n WHERE NOT tbl_reference.is_deleted;\n\nagency=> \\d+ tbl_reference\n Table\n\"public.tbl_reference\"\n Column | Type |\n Modifiers | Storage | Stats target |\nDescription\n-----------------+--------------------------------+----------------------------------------------------------------------+----------+--------------+-------------\n reference_id | integer | not null default\nnextval('tbl_reference_reference_id_seq'::regclass) | plain |\n |\n from_table | name | not null\n | plain | |\n from_id_field | name | not null\n | plain | |\n from_id | integer | not null\n | plain | |\n to_table | name | not null\n | plain | |\n to_id_field | name | not null\n | plain | |\n to_id | integer | not null\n | plain | |\n added_at | timestamp(0) without time zone | not null default now()\n | plain | |\n added_by | integer | not null\n | plain | |\n changed_at | timestamp(0) without time zone | not null default now()\n | plain | |\n changed_by | integer | not null\n | plain | |\n is_deleted | boolean | not null default false\n | plain | |\n deleted_at | timestamp(0) without time zone |\n | plain | |\n deleted_by | integer |\n | plain | |\n deleted_comment | text |\n | extended | |\n sys_log | text |\n | extended | |\nIndexes:\n \"tbl_reference_pkey\" PRIMARY KEY, btree (reference_id)\n \"unique_index_tbl_reference\" UNIQUE, btree (from_table, from_id_field,\nfrom_id, to_table, to_id_field, to_id)\n \"index_tbl_reference_from_id\" btree (from_id)\n \"index_tbl_reference_from_id_field\" btree (from_id_field)\n \"index_tbl_reference_from_table\" btree (from_table)\n \"index_tbl_reference_is_deleted\" btree (is_deleted)\n \"index_tbl_reference_to_id\" btree (to_id)\n \"index_tbl_reference_to_id_field\" btree (to_id_field)\n \"index_tbl_reference_to_table\" btree (to_table)\n \"index_tbl_reference_to_table_id\" 
btree (to_table, to_id)\nForeign-key constraints:\n \"tbl_reference_added_by_fkey\" FOREIGN KEY (added_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_reference_changed_by_fkey\" FOREIGN KEY (changed_by) REFERENCES\ntbl_staff(staff_id)\n \"tbl_reference_deleted_by_fkey\" FOREIGN KEY (deleted_by) REFERENCES\ntbl_staff(staff_id)\nTriggers:\n tbl_reference_alert_notify AFTER INSERT OR DELETE OR UPDATE ON\ntbl_reference FOR EACH ROW EXECUTE PROCEDURE table_alert_notify()\n tbl_reference_changed_at_update BEFORE UPDATE ON tbl_reference FOR EACH\nROW EXECUTE PROCEDURE auto_changed_at_update()\n tbl_reference_log_chg AFTER INSERT OR DELETE OR UPDATE ON tbl_reference\nFOR EACH ROW EXECUTE PROCEDURE table_log()\n\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nHi.  I've got a query that runs fine (~50ms).  When I add a \"LIMIT 25\" to it though, it takes way longer.  The query itself then takes about 4.5 seconds.  And when I do an explain, it takes 90+ seconds for the same query!Explains and detailed table/view info below.  tbl_log has 1.2M records, tbl_reference has 550k.  This is 9.6.19 on CentOS 6 with PDGG packages.I know the query itself could be re-written, but it's coming from an ORM, so I'm really focused on why the adding a limit is causing such performance degradation, and what to do about it.  Any help or insight would be appreciated. Also the discrepancy between the actual query and the explain.  Thanks!KenThe good query (no LIMIT):agency=> EXPLAIN (ANALYZE,VERBOSE,BUFFERS,TIMING)  SELECT * FROM Log WHERE log_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN from_table='client' THEN to_id END FROM reference WHERE ((from_id_field =  E'client_id'        AND from_id =  E'34918'        AND from_table =  E'client'        AND to_table =  E'log'        )        OR  (to_id_field =  E'client_id'        AND to_id =  E'34918'        AND to_table =  E'client'        AND from_table =  E'log'        ))) ORDER BY added_at DESC;                                                                                                                                                                                                               QUERY PLAN                                                                                                                                                                                                                ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=167065.81..168594.77 rows=611586 width=336) (actual time=43.942..46.566 rows=1432 loops=1)   Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log   Sort Key: tbl_log.added_at DESC   Sort Method: quicksort  Memory: 
999kB   Buffers: shared hit=7026   ->  Nested Loop  (cost=4313.36..14216.18 rows=611586 width=336) (actual time=10.837..38.177 rows=1432 loops=1)         Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log         Buffers: shared hit=7026         ->  HashAggregate  (cost=4312.93..4325.68 rows=1275 width=136) (actual time=10.802..13.800 rows=1433 loops=1)               Output: tbl_reference.to_table, tbl_reference.from_id, tbl_reference.from_table, tbl_reference.to_id               Group Key: CASE WHEN (tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END               Buffers: shared hit=1288               ->  Bitmap Heap Scan on public.tbl_reference  (cost=46.69..4309.74 rows=1276 width=136) (actual time=0.747..6.822 rows=1433 loops=1)                     Output: tbl_reference.to_table, tbl_reference.from_id, tbl_reference.from_table, tbl_reference.to_id, CASE WHEN (tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END                     Recheck Cond: ((tbl_reference.from_id_field = 'client_id'::name) OR ((tbl_reference.to_table = 'client'::name) AND (tbl_reference.to_id = 34918)))                     Filter: ((NOT tbl_reference.is_deleted) AND (((tbl_reference.from_id_field = 'client_id'::name) AND (tbl_reference.from_id = 34918) AND (tbl_reference.from_table = 'client'::name) AND (tbl_reference.to_table = 'log'::name)) OR ((tbl_reference.to_id_field = 'client_id'::name) AND (tbl_reference.to_id = 34918) AND (tbl_reference.to_table = 'client'::name) AND (tbl_reference.from_table = 'log'::name))))                     Rows Removed by Filter: 15                     Heap Blocks: exact=1275                     Buffers: shared hit=1288                     ->  BitmapOr  (cost=46.69..46.69 rows=1319 width=0) (actual time=0.453..0.454 rows=0 loops=1)                           Buffers: shared hit=13                           ->  Bitmap Index Scan on index_tbl_reference_from_id_field  (cost=0.00..4.43 rows=1 width=0) (actual time=0.025..0.026 rows=0 loops=1)                                 Index Cond: (tbl_reference.from_id_field = 'client_id'::name)                                 Buffers: shared hit=3                           ->  Bitmap Index Scan on index_tbl_reference_to_table_id  (cost=0.00..41.61 rows=1319 width=0) (actual time=0.421..0.423 rows=1448 loops=1)                                 Index Cond: ((tbl_reference.to_table = 'client'::name) AND (tbl_reference.to_id = 34918))                                 Buffers: shared hit=10         ->  Index Scan using tbl_log_pkey on public.tbl_log  (cost=0.43..7.75 rows=1 width=336) (actual time=0.007..0.009 rows=1 loops=1433)               Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log, tbl_log.shift_report               Index Cond: (tbl_log.log_id = CASE WHEN (tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END)               Buffers: shared hit=5738 Planning time: 
0.866 ms Execution time: 48.915 ms(33 rows)The bad query (LIMIT):agency=> EXPLAIN (ANALYZE,VERBOSE,BUFFERS,TIMING)  SELECT * FROM Log WHERE log_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN from_table='client' THEN to_id END FROM reference WHERE ((from_id_field =  E'client_id'        AND from_id =  E'34918'        AND from_table =  E'client'        AND to_table =  E'log'        )        OR  (to_id_field =  E'client_id'        AND to_id =  E'34918'        AND to_table =  E'client'        AND from_table =  E'log'        ))) ORDER BY added_at DESC LIMIT 25;                                                                                                                                                                                                               QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=47.11..1329.32 rows=25 width=336) (actual time=47.103..97236.235 rows=25 loops=1)   Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log   Buffers: shared hit=3820   ->  Nested Loop Semi Join  (cost=47.11..31367302.81 rows=611586 width=336) (actual time=47.098..97236.123 rows=25 loops=1)         Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.shift_report, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log         Join Filter: (tbl_log.log_id = CASE WHEN (tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END)         Rows Removed by Join Filter: 28364477         Buffers: shared hit=3820         ->  Index Scan Backward using tbl_log_added_at on public.tbl_log  (cost=0.43..147665.96 rows=1223171 width=336) (actual time=0.016..123.097 rows=19794 loops=1)               Output: tbl_log.log_id, tbl_log.log_type_code, tbl_log.subject, tbl_log.log_text, tbl_log.occurred_at, tbl_log.written_by, tbl_log.added_by, tbl_log.added_at, tbl_log.changed_by, tbl_log.changed_at, tbl_log.sys_log, tbl_log.shift_report               Buffers: shared hit=2532         ->  Materialize  (cost=46.69..4316.12 rows=1276 width=136) (actual time=0.002..2.351 rows=1433 loops=19794)               Output: tbl_reference.to_table, tbl_reference.from_id, tbl_reference.from_table, tbl_reference.to_id               Buffers: shared hit=1288               ->  Bitmap Heap Scan on public.tbl_reference  (cost=46.69..4309.74 rows=1276 width=136) (actual time=0.508..5.594 rows=1433 loops=1)                     Output: tbl_reference.to_table, tbl_reference.from_id, tbl_reference.from_table, tbl_reference.to_id                     Recheck Cond: ((tbl_reference.from_id_field = 'client_id'::name) OR ((tbl_reference.to_table = 'client'::name) AND (tbl_reference.to_id = 34918)))                     Filter: ((NOT tbl_reference.is_deleted) AND (((tbl_reference.from_id_field = 'client_id'::name) AND (tbl_reference.from_id = 
34918) AND (tbl_reference.from_table = 'client'::name) AND (tbl_reference.to_table = 'log'::name)) OR ((tbl_reference.to_id_field = 'client_id'::name) AND (tbl_reference.to_id = 34918) AND (tbl_reference.to_table = 'client'::name) AND (tbl_reference.from_table = 'log'::name))))                     Rows Removed by Filter: 15                     Heap Blocks: exact=1275                     Buffers: shared hit=1288                     ->  BitmapOr  (cost=46.69..46.69 rows=1319 width=0) (actual time=0.313..0.315 rows=0 loops=1)                           Buffers: shared hit=13                           ->  Bitmap Index Scan on index_tbl_reference_from_id_field  (cost=0.00..4.43 rows=1 width=0) (actual time=0.011..0.013 rows=0 loops=1)                                 Index Cond: (tbl_reference.from_id_field = 'client_id'::name)                                 Buffers: shared hit=3                           ->  Bitmap Index Scan on index_tbl_reference_to_table_id  (cost=0.00..41.61 rows=1319 width=0) (actual time=0.296..0.298 rows=1448 loops=1)                                 Index Cond: ((tbl_reference.to_table = 'client'::name) AND (tbl_reference.to_id = 34918))                                 Buffers: shared hit=10 Planning time: 0.650 ms Execution time: 97236.582 ms(31 rows)Time: 97238.387 msThe bad query, as actual query, not explain:agency=> SELECT * FROM Log WHERE log_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN from_table='client' THEN to_id END FROM reference WHERE ((from_id_field =  E'client_id'        AND from_id =  E'34918'        AND from_table =  E'client'        AND to_table =  E'log'        )        OR  (to_id_field =  E'client_id'        AND to_id =  E'34918'        AND to_table =  E'client'        AND from_table =  E'log'        ))) ORDER BY added_at DESC LIMIT 25; log_id  | log_type_code |                                     subject                                     |                                                                                                                                                                                                   log_text           |     occurred_at     | shift_report | written_by | added_by |      added_at       | changed_by |     changed_at      | sys_log---------+---------------+---------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+--------------+------------+----------+---------------------+------------+---------------------+---------(actual results snipped)Time: 4654.190 msDescription of tables and views:agency=> \\d+ log                                  View \"public.log\"    Column     |              Type              | Modifiers | Storage  | Description 
---------------+--------------------------------+-----------+----------+------------- log_id        | integer                        |           | plain    |  log_type_code | character varying(10)[]        |           | extended |  subject       | character varying(80)          |           | extended |  log_text      | text                           |           | extended |  occurred_at   | timestamp(0) without time zone |           | plain    |  shift_report  | boolean                        |           | plain    |  written_by    | integer                        |           | plain    |  added_by      | integer                        |           | plain    |  added_at      | timestamp(0) without time zone |           | plain    |  changed_by    | integer                        |           | plain    |  changed_at    | timestamp(0) without time zone |           | plain    |  sys_log       | text                           |           | extended | View definition: SELECT tbl_log.log_id,    tbl_log.log_type_code,    tbl_log.subject,    tbl_log.log_text,    tbl_log.occurred_at,    tbl_log.shift_report,    tbl_log.written_by,    tbl_log.added_by,    tbl_log.added_at,    tbl_log.changed_by,    tbl_log.changed_at,    tbl_log.sys_log   FROM tbl_log;agency=> \\d tbl_log                                          Table \"public.tbl_log\"    Column     |              Type              |                        Modifiers                         ---------------+--------------------------------+---------------------------------------------------------- log_id        | integer                        | not null default nextval('tbl_log_log_id_seq'::regclass) log_type_code | character varying(10)[]        | not null subject       | character varying(80)          | not null log_text      | text                           |  occurred_at   | timestamp(0) without time zone |  written_by    | integer                        | not null added_by      | integer                        | not null added_at      | timestamp(0) without time zone | not null default now() changed_by    | integer                        | not null changed_at    | timestamp(0) without time zone | not null default now() sys_log       | text                           |  shift_report  | boolean                        | default falseIndexes:    \"tbl_log_pkey\" PRIMARY KEY, btree (log_id)    \"index_tbl_log_log_type_code\" btree (log_type_code)    \"tbl_log_added_at\" btree (added_at)    \"tbl_log_added_by\" btree (added_by)    \"tbl_log_added_by2\" btree (added_at DESC)    \"tbl_log_event_time\" btree ((COALESCE(occurred_at, added_at)))    \"tbl_log_log_type_code\" btree (log_type_code)    \"tbl_log_log_type_code_gin\" gin (log_type_code)    \"tbl_log_occurred_at\" btree (occurred_at)    \"tbl_log_subject\" btree (subject)    \"tbl_log_test\" btree (added_at, log_type_code)    \"tbl_log_test2\" btree (log_type_code, added_at)    \"tbl_log_written_by\" btree (written_by)Foreign-key constraints:    \"tbl_log_added_by_fkey\" FOREIGN KEY (added_by) REFERENCES tbl_staff(staff_id)    \"tbl_log_changed_by_fkey\" FOREIGN KEY (changed_by) REFERENCES tbl_staff(staff_id)    \"tbl_log_written_by_fkey\" FOREIGN KEY (written_by) REFERENCES tbl_staff(staff_id)Triggers:    tbl_log_changed_at_update BEFORE UPDATE ON tbl_log FOR EACH ROW EXECUTE PROCEDURE auto_changed_at_update()    tbl_log_log_chg AFTER INSERT OR DELETE OR UPDATE ON tbl_log FOR EACH ROW EXECUTE PROCEDURE table_log()agency=> \\d+ reference                                View \"public.reference\"    
 Column      |              Type              | Modifiers | Storage  | Description-----------------+--------------------------------+-----------+----------+------------- reference_id    | integer                        |           | plain    | from_table      | name                           |           | plain    | from_id_field   | name                           |           | plain    | from_id         | integer                        |           | plain    | to_table        | name                           |           | plain    | to_id_field     | name                           |           | plain    | to_id           | integer                        |           | plain    | added_at        | timestamp(0) without time zone |           | plain    | added_by        | integer                        |           | plain    | changed_at      | timestamp(0) without time zone |           | plain    | changed_by      | integer                        |           | plain    | is_deleted      | boolean                        |           | plain    | deleted_at      | timestamp(0) without time zone |           | plain    | deleted_by      | integer                        |           | plain    | deleted_comment | text                           |           | extended | sys_log         | text                           |           | extended |View definition: SELECT tbl_reference.reference_id,    tbl_reference.from_table,    tbl_reference.from_id_field,    tbl_reference.from_id,    tbl_reference.to_table,    tbl_reference.to_id_field,    tbl_reference.to_id,    tbl_reference.added_at,    tbl_reference.added_by,    tbl_reference.changed_at,    tbl_reference.changed_by,    tbl_reference.is_deleted,    tbl_reference.deleted_at,    tbl_reference.deleted_by,    tbl_reference.deleted_comment,    tbl_reference.sys_log   FROM tbl_reference  WHERE NOT tbl_reference.is_deleted;agency=> \\d+ tbl_reference                                                                  Table \"public.tbl_reference\"     Column      |              Type              |                              Modifiers                               | Storage  | Stats target | Description-----------------+--------------------------------+----------------------------------------------------------------------+----------+--------------+------------- reference_id    | integer                        | not null default nextval('tbl_reference_reference_id_seq'::regclass) | plain    |              | from_table      | name                           | not null                                                             | plain    |              | from_id_field   | name                           | not null                                                             | plain    |              | from_id         | integer                        | not null                                                             | plain    |              | to_table        | name                           | not null                                                             | plain    |              | to_id_field     | name                           | not null                                                             | plain    |              | to_id           | integer                        | not null                                                             | plain    |              | added_at        | timestamp(0) without time zone | not null default now()                                               | plain    |              | added_by        | integer             
           | not null                                                             | plain    |              | changed_at      | timestamp(0) without time zone | not null default now()                                               | plain    |              | changed_by      | integer                        | not null                                                             | plain    |              | is_deleted      | boolean                        | not null default false                                               | plain    |              | deleted_at      | timestamp(0) without time zone |                                                                      | plain    |              | deleted_by      | integer                        |                                                                      | plain    |              | deleted_comment | text                           |                                                                      | extended |              | sys_log         | text                           |                                                                      | extended |              |Indexes:    \"tbl_reference_pkey\" PRIMARY KEY, btree (reference_id)    \"unique_index_tbl_reference\" UNIQUE, btree (from_table, from_id_field, from_id, to_table, to_id_field, to_id)    \"index_tbl_reference_from_id\" btree (from_id)    \"index_tbl_reference_from_id_field\" btree (from_id_field)    \"index_tbl_reference_from_table\" btree (from_table)    \"index_tbl_reference_is_deleted\" btree (is_deleted)    \"index_tbl_reference_to_id\" btree (to_id)    \"index_tbl_reference_to_id_field\" btree (to_id_field)    \"index_tbl_reference_to_table\" btree (to_table)    \"index_tbl_reference_to_table_id\" btree (to_table, to_id)Foreign-key constraints:    \"tbl_reference_added_by_fkey\" FOREIGN KEY (added_by) REFERENCES tbl_staff(staff_id)    \"tbl_reference_changed_by_fkey\" FOREIGN KEY (changed_by) REFERENCES tbl_staff(staff_id)    \"tbl_reference_deleted_by_fkey\" FOREIGN KEY (deleted_by) REFERENCES tbl_staff(staff_id)Triggers:    tbl_reference_alert_notify AFTER INSERT OR DELETE OR UPDATE ON tbl_reference FOR EACH ROW EXECUTE PROCEDURE table_alert_notify()    tbl_reference_changed_at_update BEFORE UPDATE ON tbl_reference FOR EACH ROW EXECUTE PROCEDURE auto_changed_at_update()    tbl_reference_log_chg AFTER INSERT OR DELETE OR UPDATE ON tbl_reference FOR EACH ROW EXECUTE PROCEDURE table_log()-- AGENCY Software  A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.", "msg_date": "Fri, 14 Aug 2020 14:34:52 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Query takes way longer with LIMIT, and EXPLAIN takes way longer than\n actual query" }, { "msg_contents": "On Fri, Aug 14, 2020 at 02:34:52PM -0700, Ken Tanzer wrote:\n> Hi. I've got a query that runs fine (~50ms). When I add a \"LIMIT 25\" to\n> it though, it takes way longer. The query itself then takes about 4.5\n> seconds. 
And when I do an explain, it takes 90+ seconds for the same query!\n\nDue to the over-estimated rowcount, the planner believes that (more) rows will\nbe output (sooner) than they actually are:\n\n -> Nested Loop Semi Join (cost=47.11..31367302.81 ROWS=611586 width=336) (actual time=47.098..97236.123 ROWS=25 loops=1)\n\nSo it thinks there's something to be saved/gained by using a plan that has a\nlow startup cost. But instead, it ends up running for a substantial fraction\nof the total (estimated) cost.\n\nAs for the \"explain is more expensive than the query\", that could be due to\ntiming overhead, as mentioned here. Test with \"explain (timing off)\" ?\nhttps://www.postgresql.org/docs/12/using-explain.html#USING-EXPLAIN-CAVEATS\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 14 Aug 2020 17:04:24 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query takes way longer with LIMIT, and EXPLAIN takes way longer\n than actual query" }, { "msg_contents": "On Fri, Aug 14, 2020 at 3:04 PM Justin Pryzby <[email protected]> wrote:\n\n> Due to the over-estimated rowcount, the planner believes that (more) rows\n> will\n> be output (sooner) than they actually are:\n>\n> -> Nested Loop Semi Join (cost=47.11..31367302.81 ROWS=611586\n> width=336) (actual time=47.098..97236.123 ROWS=25 loops=1)\n>\n> So it thinks there's something to be saved/gained by using a plan that has\n> a\n> low startup cost. But instead, it ends up running for a substantial\n> fraction\n> of the total (estimated) cost.\n>\n> Got it. Is there any way to address this other than re-writing the\nquery? (Statistics? Or something else?)\n\n\n\n> As for the \"explain is more expensive than the query\", that could be due to\n> timing overhead, as mentioned here. Test with \"explain (timing off)\" ?\n> https://www.postgresql.org/docs/12/using-explain.html#USING-EXPLAIN-CAVEATS\n>\n>\nGood call--explain with the timing off showed about the same time as the\nactual query.\n\nThanks!\n\nKen\n\n\n\n\n> --\n> Justin\n>\n\n\n-- \nAGENCY Software\nA Free Software data system\nBy and for non-profits\n*http://agency-software.org/ <http://agency-software.org/>*\n*https://demo.agency-software.org/client\n<https://demo.agency-software.org/client>*\[email protected]\n(253) 245-3801\n\nSubscribe to the mailing list\n<[email protected]?body=subscribe> to\nlearn more about AGENCY or\nfollow the discussion.\n\nOn Fri, Aug 14, 2020 at 3:04 PM Justin Pryzby <[email protected]> wrote:Due to the over-estimated rowcount, the planner believes that (more) rows will\nbe output (sooner) than they actually are:\n\n   ->  Nested Loop Semi Join  (cost=47.11..31367302.81 ROWS=611586 width=336) (actual time=47.098..97236.123 ROWS=25 loops=1)\n\nSo it thinks there's something to be saved/gained by using a plan that has a\nlow startup cost.  But instead, it ends up running for a substantial fraction\nof the total (estimated) cost.\nGot it.  Is there any way to address this other than re-writing the query?  (Statistics? Or something else?) \nAs for the \"explain is more expensive than the query\", that could be due to\ntiming overhead, as mentioned here.  
Test with \"explain (timing off)\" ?\nhttps://www.postgresql.org/docs/12/using-explain.html#USING-EXPLAIN-CAVEATS\nGood call--explain with the timing off showed about the same time as the actual query.Thanks!Ken \n-- \nJustin\n-- AGENCY Software  A Free Software data systemBy and for non-profitshttp://agency-software.org/https://demo.agency-software.org/[email protected](253) 245-3801Subscribe to the mailing list tolearn more about AGENCY orfollow the discussion.", "msg_date": "Fri, 14 Aug 2020 15:40:40 -0700", "msg_from": "Ken Tanzer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query takes way longer with LIMIT, and EXPLAIN takes way longer\n than actual query" }, { "msg_contents": "On Fri, Aug 14, 2020 at 5:35 PM Ken Tanzer <[email protected]> wrote:\n\n> Hi. I've got a query that runs fine (~50ms). When I add a \"LIMIT 25\" to\n> it though, it takes way longer. The query itself then takes about 4.5\n> seconds. And when I do an explain, it takes 90+ seconds for the same query!\n>\n> Explains and detailed table/view info below. tbl_log has 1.2M records,\n> tbl_reference has 550k. This is 9.6.19 on CentOS 6 with PDGG packages.\n>\n\nCentOS6 has slow clock calls, so it is not surprising that EXPLAIN ANALYZE\nwith TIMING defaulting to ON is slow. Using something more modern for the\ndistribution should really help that, but for the current case just setting\nTIMING OFF should be good enough as it is the row counts which are\ninteresting, not the timing of individual steps.\n\n\n> I know the query itself could be re-written, but it's coming from an ORM,\n> so I'm really focused on why the adding a limit is causing such performance\n> degradation, and what to do about it.\n>\n\nBut if it is coming from an ORM and you can't rewrite it, then what can you\ndo about it? Can you set enable_someting or something_cost parameters\nlocally just for the duration of one query? If the ORM doesn't let you\nre-write, then I doubt it would let you do that, either. Since you are\nusing such an old version, you can't create multivariate statistics, either\n(although I doubt they would help anyway).\n\n -> Nested Loop (cost=4313.36..14216.18 rows=611586 width=336) (actual\n> time=10.837..38.177 rows=1432 loops=1)\n> -> HashAggregate (cost=4312.93..4325.68 rows=1275 width=136)\n> (actual time=10.802..13.800 rows=1433 loops=1)\n> -> Index Scan using tbl_log_pkey on public.tbl_log\n> (cost=0.43..7.75 rows=1 width=336) (actual time=0.007..0.009 rows=1\n> loops=1433)\n>\n\nThe way-off row estimate for the nested loop is the cause of the bad plan\nchoice once you add the LIMIT. But what is the cause of the bad estimate?\nIf you just multiply the estimates for each of the child nodes, you get\nabout the correct answer. But the estimate for the nested loop is very\ndifferent from the product of the children. On the one hand that isn't\nsurprising, as the row estimates are computed at each node from first\nprinciples, not computed from the bottom up. But usually if the stats are\nway off, you can follow the error down to a lower level where they are also\nway off, but in this case you can't. That makes it really hard to reason\nabout what the problem might be.\n\nCan you clone your server, upgrade the clone to 12.4 or 13BETA3 or 14dev,\nand see if the problem still exists there? 
Can you anonymize your data so\nthat you can publish an example other people could run themselves to\ndissect the problem; or maybe give some queries that generate random data\nwhich have the correct data distribution to reproduce the issue?\n\nCheers,\n\nJeff\n\n>\n\nOn Fri, Aug 14, 2020 at 5:35 PM Ken Tanzer <[email protected]> wrote:Hi.  I've got a query that runs fine (~50ms).  When I add a \"LIMIT 25\" to it though, it takes way longer.  The query itself then takes about 4.5 seconds.  And when I do an explain, it takes 90+ seconds for the same query!Explains and detailed table/view info below.  tbl_log has 1.2M records, tbl_reference has 550k.  This is 9.6.19 on CentOS 6 with PDGG packages.CentOS6 has slow clock calls, so it is not surprising that EXPLAIN ANALYZE with TIMING defaulting to ON is slow.  Using something more modern for the distribution should really help that, but for the current case just setting TIMING OFF should be good enough as it is the row counts which are interesting, not the timing of individual steps. I know the query itself could be re-written, but it's coming from an ORM, so I'm really focused on why the adding a limit is causing such performance degradation, and what to do about it.  But if it is coming from an ORM and you can't rewrite it, then what can you do about it?  Can you set enable_someting or something_cost parameters locally just for the duration of one query?  If the ORM doesn't let you re-write, then I doubt it would let you do that, either.  Since you are using such an old version, you can't create multivariate statistics, either (although I doubt they would help anyway).   ->  Nested Loop  (cost=4313.36..14216.18 rows=611586 width=336) (actual time=10.837..38.177 rows=1432 loops=1)         ->  HashAggregate  (cost=4312.93..4325.68 rows=1275 width=136) (actual time=10.802..13.800 rows=1433 loops=1)         ->  Index Scan using tbl_log_pkey on public.tbl_log  (cost=0.43..7.75 rows=1 width=336) (actual time=0.007..0.009 rows=1 loops=1433)The way-off row estimate for the nested loop is the cause of the bad plan choice once you add the LIMIT.  But what is the cause of the bad estimate?  If you just multiply the estimates for each of the child nodes, you get about the correct answer.  But the estimate for the nested loop is very different from the product of the children.  On the one hand that isn't surprising, as the row estimates are computed at each node from first principles, not computed from the bottom up.  But usually if the stats are way off, you can follow the error down to a lower level where they are also way off, but in this case you can't.  That makes it really hard to reason about what the problem might be.Can you clone your server, upgrade the clone to 12.4 or 13BETA3 or 14dev, and see if the problem still exists there?  Can you anonymize your data so that you can publish an example other people could run themselves to dissect the problem; or maybe give some queries that generate random data which have the correct data distribution to reproduce the issue? 
Cheers,Jeff", "msg_date": "Fri, 14 Aug 2020 20:24:27 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query takes way longer with LIMIT, and EXPLAIN takes way longer\n than actual query" }, { "msg_contents": "On Fri, Aug 14, 2020 at 03:40:40PM -0700, Ken Tanzer wrote:\n> On Fri, Aug 14, 2020 at 3:04 PM Justin Pryzby <[email protected]> wrote:\n> > Due to the over-estimated rowcount, the planner believes that (more) rows\n> > will be output (sooner) than they actually are:\n> >\n> > -> Nested Loop Semi Join (cost=47.11..31367302.81 ROWS=611586\n> > width=336) (actual time=47.098..97236.123 ROWS=25 loops=1)\n> >\n> > So it thinks there's something to be saved/gained by using a plan that has a\n> > low startup cost. But instead, it ends up running for a substantial fraction\n> > of the total (estimated) cost.\n>\n> Got it. Is there any way to address this other than re-writing the\n> query? (Statistics? Or something else?)\n\nA usual trick is to change to write something like:\n|ORDER BY added_at + '0 seconds'::interval\"\nwhich means an index scan on added_at doesn't match the ORDER BY exactly (the\nplanner isn't smart enough to know better).\n\nYou could try to address the misestimate, which is probably a good idea anyway.\n\nOr make it believe that it's going to work harder to return those 25 rows.\nMaybe you could change the \"25\" to a bind parameter, like LIMIT $N, or however\nthe ORM wants it.\n\nYou could change the index to a BRIN index, which doesn't help with ORDER BY\n(but will also affect other queries which want to ORDER BY).\n\nMaybe you could add an index on this expression and ANALYZE the table. I think\nit might help the estimate. Or it might totally change the shape of the plan,\nlike allowing indexonly scan (which would probably require VACUUM)..\n Group Key: CASE WHEN (tbl_reference.to_table = 'client'::name) THEN tbl_reference.from_id WHEN (tbl_reference.from_table = 'client'::name) THEN tbl_reference.to_id ELSE NULL::integer END\n\nI know you didn't want to rewrite the query, but it looks to me like adjusting\nthe schema or query might be desirable.\n\nagency=> EXPLAIN (ANALYZE,VERBOSE,BUFFERS,TIMING) SELECT * FROM Log WHERE log_id IN (SELECT CASE WHEN to_table='client' THEN from_id WHEN from_table='client' THEN to_id END FROM reference WHERE ((from_id_field = E'client_id'\n AND from_id = E'34918'\n AND from_table = E'client'\n AND to_table = E'log'\n )\n OR (to_id_field = E'client_id'\n AND to_id = E'34918'\n AND to_table = E'client'\n AND from_table = E'log'\n ))) ORDER BY added_at DESC;\n\nTo me that smells like a UNION (maybe not UNION ALL):\nSELECT FROM log WHERE EXISTS (SELECT 1 FROM reference ref WHERE log.log_id=ref.from_id AND to_table='client' AND from_id_field='client_id' AND from_id=$1 AND from_table='client' AND to_table='log')\nUNION\nSELECT FROM log WHERE EXISTS (SELECT 1 FROM reference ref WHERE log.log_id=ref.to_id AND from_table='client' AND to_id_field='client_id' AND to_id=$1 AND to_table='client' AND from_table='log')\n\nI guess you might know that various indexes are redundant:\n\n \"index_tbl_log_log_type_code\" btree (log_type_code)\n \"tbl_log_log_type_code\" btree (log_type_code)\n \"tbl_log_test2\" btree (log_type_code, added_at)\n\n \"tbl_log_added_at\" btree (added_at)\n \"tbl_log_test\" btree (added_at, log_type_code)\n\n \"index_tbl_reference_to_table\" btree (to_table)\n \"index_tbl_reference_to_table_id\" btree (to_table, to_id)\n\n \"index_tbl_reference_is_deleted\" btree (is_deleted)\n=> 
Maybe that would be better as a WHERE NOT is_deleted clause on various indexes (?)\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 14 Aug 2020 19:55:33 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query takes way longer with LIMIT, and EXPLAIN takes way longer\n than actual query" } ]
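As a follow-up to this thread, here is one illustrative way to combine the suggestions above into a single statement. It reuses the log/reference schema and the client id 34918 from the messages above and applies the "ORDER BY added_at + '0 seconds'" trick; it is only a sketch of the workaround discussed, not a verified drop-in replacement for the ORM-generated query, and the EXISTS/UNION rewrite outlined above remains the more thorough fix if the ORM permits it:

SELECT *
  FROM log
 WHERE log_id IN (SELECT CASE WHEN to_table = 'client' THEN from_id
                              WHEN from_table = 'client' THEN to_id END
                    FROM reference
                   WHERE (from_id_field = 'client_id' AND from_id = 34918
                          AND from_table = 'client' AND to_table = 'log')
                      OR (to_id_field = 'client_id' AND to_id = 34918
                          AND to_table = 'client' AND from_table = 'log'))
 -- The no-op interval keeps the sort key from matching the added_at index,
 -- so the planner cannot pick the slow backward index scan to satisfy LIMIT.
 ORDER BY added_at + interval '0 seconds' DESC
 LIMIT 25;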
[ { "msg_contents": "Hello,\n\nI wish to use logical replication in Postgres to capture transactions as\nCDC and forward them to a custom sink.\n\nTo understand the overhead of logical replication workflow I created a toy\nsubscriber using the V3PGReplicationStream that acknowledges LSNs after\nevery 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState.\nThe toy subscriber is set up as a subscriber for a master Postgres instance\nthat publishes changes using a Publication. I then run a write-heavy\nworkload on this setup that generates transaction logs at approximately\n235MBps. Postgres is run on a beefy machine with a 10+GBps network link\nbetween Postgres and the toy subscriber.\n\nMy expectation with this setup was that the replication lag on master would\nbe minimal as the subscriber acks the LSN almost immediately. However, I\nobserve the replication lag to increase continuously for the duration of\nthe test. Statistics in pg_replication_slots show that restart_lsn\nlags significantly behind\nthe confirmed_flushed_lsn. Cursory reading on restart_lsn suggests that an\nincreasing gap between restart_lsn and confirmed_flushed_lsn means that\nPostgres needs to reclaim disk space and advance restart_lsn to catch up to\nconfirmed_flushed_lsn.\n\nWith that context, I am looking for answers for two questions -\n\n1. What work needs to happen in the database to advance restart_lsn to\nconfirmed_flushed_lsn?\n2. What is the recommendation on tuning the database to improve the\nreplication lag in such scenarios?\n\nRegards,\nSatyam\n\nHello,I wish to use logical replication in Postgres to capture transactions as CDC and forward them to a custom sink. To understand the overhead of logical replication workflow I created a toy subscriber using the V3PGReplicationStream that acknowledges LSNs after every 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState. The toy subscriber is set up as a subscriber for a master Postgres instance that publishes changes using a Publication. I then run a write-heavy workload on this setup that generates transaction logs at approximately 235MBps. Postgres is run on a beefy machine with a 10+GBps network link between Postgres and the toy subscriber. My expectation with this setup was that the replication lag on master would be minimal as the subscriber acks the LSN almost immediately. However, I observe the replication lag to increase continuously for the duration of the test. Statistics in pg_replication_slots show that restart_lsn lags significantly behind the confirmed_flushed_lsn. Cursory reading on restart_lsn suggests that an increasing gap between restart_lsn and confirmed_flushed_lsn means that Postgres needs to reclaim disk space and advance restart_lsn to catch up to confirmed_flushed_lsn. With that context, I am looking for answers for two questions -1. What work needs to happen in the database to advance restart_lsn to confirmed_flushed_lsn?2. 
What is the recommendation on tuning the database to improve the replication lag in such scenarios?Regards,Satyam", "msg_date": "Tue, 18 Aug 2020 09:27:34 -0700", "msg_from": "Satyam Shekhar <[email protected]>", "msg_from_op": true, "msg_subject": "Replication lag due to lagging restart_lsn" }, { "msg_contents": "Hello.\n\nAt Tue, 18 Aug 2020 09:27:34 -0700, Satyam Shekhar <[email protected]> wrote in \n> Hello,\n> \n> I wish to use logical replication in Postgres to capture transactions as\n> CDC and forward them to a custom sink.\n> \n> To understand the overhead of logical replication workflow I created a toy\n> subscriber using the V3PGReplicationStream that acknowledges LSNs after\n> every 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState.\n> The toy subscriber is set up as a subscriber for a master Postgres instance\n> that publishes changes using a Publication. I then run a write-heavy\n> workload on this setup that generates transaction logs at approximately\n> 235MBps. Postgres is run on a beefy machine with a 10+GBps network link\n> between Postgres and the toy subscriber.\n> \n> My expectation with this setup was that the replication lag on master would\n> be minimal as the subscriber acks the LSN almost immediately. However, I\n> observe the replication lag to increase continuously for the duration of\n> the test. Statistics in pg_replication_slots show that restart_lsn\n> lags significantly behind\n> the confirmed_flushed_lsn. Cursory reading on restart_lsn suggests that an\n> increasing gap between restart_lsn and confirmed_flushed_lsn means that\n> Postgres needs to reclaim disk space and advance restart_lsn to catch up to\n> confirmed_flushed_lsn.\n> \n> With that context, I am looking for answers for two questions -\n> \n> 1. What work needs to happen in the database to advance restart_lsn to\n> confirmed_flushed_lsn?\n> 2. What is the recommendation on tuning the database to improve the\n> replication lag in such scenarios?\n\nTo make sure, replication delay or lag here is current_wal_lsn() -\nconfirmed_flush_lsn. restart_lsn has nothing to do with replication\nlag. It is the minimum LSN the server thinks it needs for restarting\nreplication on the slot.\n\nHow long have you observed the increase of the gap? If no\nlong-transactions are running, restart_lsn is the current LSN about\nfrom 15 to 30 seconds ago. That is, the gap between restart_lsn and\nconfirmed_flush_lsn would be at most the amount of WAL emitted in the\nlast 30 seconds. In this case, that is estimated to be 235MB*30 =\nabout 7GB or 440 in 16MB-segments even if the system is perfectly\nworking. Anyway the publisher server would need to preserve WAL files\nup to about 68GB (in the case where checkpoint_timeout is 5 minutes)\nso requirement of 7GB by restart_lsn doesn't matter.\n\nIn short, I don't think you need to do something against that \"lag\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 19 Aug 2020 20:43:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication lag due to lagging restart_lsn" }, { "msg_contents": "When logical replication is setup, any wal generation on any tables will\nresult in replication lag. Since you are running a long running transaction\non the master, the maximum number of changes kept in the memory per\ntransaction is 4MB. If the transaction requires more than 4MB the changes\nare spilled to disk. 
This is when you will start seeing\n\n1. Replication lag spiking\n2. Storage being consumed\n3. Restart lsn stops moving forward\n\nYou can confirm if the heavy write that you are talking about is spilling\nto disk or not by setting log_min_messges to debug 2. Try to find if the\nchanges are spilled to disk.\n\nTo answer your question:\n\n1. As long as the write heavy query is running on the database, you will\nnot see restart lsn moving.\n2. You will have to have smaller transactions\n3. When the query is completed, you will see restart_lsn moving forward\n\nOn Tue, Aug 18, 2020 at 11:27 AM Satyam Shekhar <[email protected]>\nwrote:\n\n> Hello,\n>\n> I wish to use logical replication in Postgres to capture transactions as\n> CDC and forward them to a custom sink.\n>\n> To understand the overhead of logical replication workflow I created a toy\n> subscriber using the V3PGReplicationStream that acknowledges LSNs after\n> every 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState.\n> The toy subscriber is set up as a subscriber for a master Postgres\n> instance that publishes changes using a Publication. I then run a\n> write-heavy workload on this setup that generates transaction logs at\n> approximately 235MBps. Postgres is run on a beefy machine with a 10+GBps\n> network link between Postgres and the toy subscriber.\n>\n> My expectation with this setup was that the replication lag on master\n> would be minimal as the subscriber acks the LSN almost immediately.\n> However, I observe the replication lag to increase continuously for the\n> duration of the test. Statistics in pg_replication_slots show that\n> restart_lsn lags significantly behind the confirmed_flushed_lsn. Cursory\n> reading on restart_lsn suggests that an increasing gap between restart_lsn\n> and confirmed_flushed_lsn means that Postgres needs to reclaim disk space\n> and advance restart_lsn to catch up to confirmed_flushed_lsn.\n>\n> With that context, I am looking for answers for two questions -\n>\n> 1. What work needs to happen in the database to advance restart_lsn to\n> confirmed_flushed_lsn?\n> 2. What is the recommendation on tuning the database to improve the\n> replication lag in such scenarios?\n>\n> Regards,\n> Satyam\n>\n>\n>\n\nWhen logical replication is setup, any wal generation on any tables will result in replication lag. Since you are running a long running transaction on the master, the maximum number of changes kept in the memory per transaction is 4MB. If the transaction requires more than 4MB the changes are spilled to disk. This is when you will start seeing 1. Replication lag spiking 2. Storage being consumed3. Restart lsn stops moving forwardYou can confirm if the heavy write that you are talking about is spilling to disk or not by setting log_min_messges to debug 2. Try to find if the changes are spilled to disk. To answer your question: 1. As long as the write heavy query is running on the database, you will not see restart lsn moving. 2. You will have to have smaller transactions3. When the query is completed, you will see restart_lsn moving forward  On Tue, Aug 18, 2020 at 11:27 AM Satyam Shekhar <[email protected]> wrote:Hello,I wish to use logical replication in Postgres to capture transactions as CDC and forward them to a custom sink. To understand the overhead of logical replication workflow I created a toy subscriber using the V3PGReplicationStream that acknowledges LSNs after every 16k reads by calling setAppliedLsn, setFlushedLsn, and forceUpdateState. 
The toy subscriber is set up as a subscriber for a master Postgres instance that publishes changes using a Publication. I then run a write-heavy workload on this setup that generates transaction logs at approximately 235MBps. Postgres is run on a beefy machine with a 10+GBps network link between Postgres and the toy subscriber. My expectation with this setup was that the replication lag on master would be minimal as the subscriber acks the LSN almost immediately. However, I observe the replication lag to increase continuously for the duration of the test. Statistics in pg_replication_slots show that restart_lsn lags significantly behind the confirmed_flushed_lsn. Cursory reading on restart_lsn suggests that an increasing gap between restart_lsn and confirmed_flushed_lsn means that Postgres needs to reclaim disk space and advance restart_lsn to catch up to confirmed_flushed_lsn. With that context, I am looking for answers for two questions -1. What work needs to happen in the database to advance restart_lsn to confirmed_flushed_lsn?2. What is the recommendation on tuning the database to improve the replication lag in such scenarios?Regards,Satyam", "msg_date": "Wed, 19 Aug 2020 08:15:44 -0500", "msg_from": "Kiran Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication lag due to lagging restart_lsn" }, { "msg_contents": "I have a case that master has been restarted many times, restart_lsn not\nmoved since nov 2022 till today apr 2023.\nI have tried pg_replication_slot_advance() but no luck :-(\n\npostgres 12.8\n1 master (4 publisher) many SR slaves, and 1 logical replication (4\nsubscribers)\n\nIs there a chance to edit the state file under the pg_replslots folder?\n\n\nOn Wed, Aug 19, 2020 at 8:16 PM Kiran Singh <[email protected]>\nwrote:\n\n> When logical replication is setup, any wal generation on any tables will\n> result in replication lag. Since you are running a long running transaction\n> on the master, the maximum number of changes kept in the memory per\n> transaction is 4MB. If the transaction requires more than 4MB the changes\n> are spilled to disk. This is when you will start seeing\n>\n> 1. Replication lag spiking\n> 2. Storage being consumed\n> 3. Restart lsn stops moving forward\n>\n> You can confirm if the heavy write that you are talking about is spilling\n> to disk or not by setting log_min_messges to debug 2. Try to find if the\n> changes are spilled to disk.\n>\n> To answer your question:\n>\n> 1. As long as the write heavy query is running on the database, you will\n> not see restart lsn moving.\n> 2. You will have to have smaller transactions\n> 3. When the query is completed, you will see restart_lsn moving forward\n>\n>\n-- \nregards\n\nujang jaenudin | Self-Employed, DBA Consultant\nhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n\nI have a case that master has been restarted many times, restart_lsn not moved since nov 2022 till today apr 2023.I have tried pg_replication_slot_advance() but no luck :-(postgres 12.81 master (4 publisher) many SR slaves, and 1 logical replication (4 subscribers) Is there a chance to edit the state file under the pg_replslots folder?On Wed, Aug 19, 2020 at 8:16 PM Kiran Singh <[email protected]> wrote:When logical replication is setup, any wal generation on any tables will result in replication lag. Since you are running a long running transaction on the master, the maximum number of changes kept in the memory per transaction is 4MB. 
If the transaction requires more than 4MB the changes are spilled to disk. This is when you will start seeing 1. Replication lag spiking 2. Storage being consumed3. Restart lsn stops moving forwardYou can confirm if the heavy write that you are talking about is spilling to disk or not by setting log_min_messges to debug 2. Try to find if the changes are spilled to disk. To answer your question: 1. As long as the write heavy query is running on the database, you will not see restart lsn moving. 2. You will have to have smaller transactions3. When the query is completed, you will see restart_lsn moving forward   -- regardsujang jaenudin | Self-Employed, DBA Consultanthttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab", "msg_date": "Tue, 11 Apr 2023 10:39:38 +0700", "msg_from": "milist ujang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication lag due to lagging restart_lsn" }, { "msg_contents": "Finally got this useful tool:\nhttps://github.com/EnterpriseDB/pg_failover_slots\n\nSince I usually do scheduled switchover, this tool really helps a lot.\n\nOn Tue, Apr 11, 2023 at 10:39 AM milist ujang <[email protected]>\nwrote:\n\n> I have a case that master has been restarted many times, restart_lsn not\n> moved since nov 2022 till today apr 2023.\n> I have tried pg_replication_slot_advance() but no luck :-(\n>\n> postgres 12.8\n> 1 master (4 publisher) many SR slaves, and 1 logical replication (4\n> subscribers)\n>\n> Is there a chance to edit the state file under the pg_replslots folder?\n>\n>>\n>>\n> --\nregards\n\nujang jaenudin | Self-Employed, DBA Consultant\nhttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab\n\nFinally got this useful tool:https://github.com/EnterpriseDB/pg_failover_slotsSince I usually do scheduled switchover, this tool really helps a lot.On Tue, Apr 11, 2023 at 10:39 AM milist ujang <[email protected]> wrote:I have a case that master has been restarted many times, restart_lsn not moved since nov 2022 till today apr 2023.I have tried pg_replication_slot_advance() but no luck :-(postgres 12.81 master (4 publisher) many SR slaves, and 1 logical replication (4 subscribers) Is there a chance to edit the state file under the pg_replslots folder? -- regardsujang jaenudin | Self-Employed, DBA Consultanthttp://id.linkedin.com/pub/ujang-jaenudin/12/64/bab", "msg_date": "Sat, 15 Apr 2023 17:41:16 +0700", "msg_from": "milist ujang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication lag due to lagging restart_lsn" } ]
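To make the situation described in this thread easier to observe, the gap between confirmed_flush_lsn and restart_lsn can be watched from the publisher with a query against pg_replication_slots. This is only a monitoring sketch, not a remedy; it assumes PostgreSQL 10 or later, where pg_current_wal_lsn() and pg_wal_lsn_diff() exist under those names, and the thresholds that matter depend entirely on the WAL generation rate:

SELECT slot_name,
       active,
       restart_lsn,
       confirmed_flush_lsn,
       -- how far the subscriber's acknowledged position trails current WAL
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS flush_lag_bytes,
       -- how much WAL must be retained because restart_lsn has not advanced
       pg_wal_lsn_diff(confirmed_flush_lsn, restart_lsn) AS restart_gap_bytes
  FROM pg_replication_slots
 WHERE slot_type = 'logical';

If restart_gap_bytes keeps growing while flush_lag_bytes stays small, the slot is in the state described above: the subscriber acknowledges promptly, but a long-running or large transaction is holding restart_lsn back.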
[ { "msg_contents": "Using V12, Linux [Ubuntu 16.04LTS]\n\nI have a system which implements a message queue with the basic pattern \nthat a process selects a group of, for example 250, rows for processing \nvia SELECT .. LIMIT 250 FOR UPDATE SKIP LOCKED.\n\nWhen there are a small number of concurrent connections to process the \nqueue, this seems to work as expected and connections quickly obtain a \nunique block of 250 rows for processing.\n\nHowever, as I scale up the number of concurrent connections, I see a \nspike in CPU (to 100% across 80 cores) when the SELECT FOR UPDATE SKIP \nLOCKED executes and the select processes wait for multiple minutes \n(10-20 minutes) before completing.  My use case requires around 256 \nconcurrent processors for the queue but I've been unable to scale beyond \n128 without everything grinding to a halt.\n\nThe queue table itself fits in RAM (with 2M hugepages) and during the \nwait, all the performance counters drop to almost 0 - no disk read or \nwrite (semi-expected due to the table fitting in memory) with 100% \nbuffer hit rate in pg_top and row read around 100/s which is much \nsmaller than expected.\n\nAfter processes complete the select and the number of waiting selects \nstarts to fall, CPU load falls and then suddenly the remaining processes \nall complete within a few seconds and things perform normally until the \nnext time there are a group of SELECT FOR UPDATE statements which bunch \ntogether and things then repeat.\n\nI found that performing extremely frequent vacuum analyze (every 30 \nminutes) helps a small amount but this is not that helpful so problems \nare still very apparent.\n\nI've exhausted all the performance tuning and analysis results I can \nfind that seem even a little bit relevant but cannot get this cracked.\n\nIs anyone on the list able to help with suggestions of what I can do to \ntrack why this CPU hogging happens as this does seem to be the root of \nthe problem?\n\nThanks in advance,\n\nJim\n\n\n\n\n\n\n\n\nUsing V12, Linux [Ubuntu 16.04LTS]\n\nI have a system which implements a message queue with the basic\n pattern that a process selects a group of, for example 250, rows\n for processing via SELECT .. LIMIT 250 FOR UPDATE SKIP LOCKED.\n When there are a small number of concurrent connections to process\n the queue, this seems to work as expected and connections quickly\n obtain a unique block of 250 rows for processing.\n However, as I scale up the number of concurrent connections, I\n see a spike in CPU (to 100% across 80 cores) when the SELECT FOR\n UPDATE SKIP LOCKED executes and the select processes wait for\n multiple minutes (10-20 minutes) before completing.  
My use case\n requires around 256 concurrent processors for the queue but I've\n been unable to scale beyond 128 without everything grinding to a\n halt.\n\nThe queue table itself fits in RAM (with 2M hugepages) and during\n the wait, all the performance counters drop to almost 0 - no disk\n read or write (semi-expected due to the table fitting in memory)\n with 100% buffer hit rate in pg_top and row read around 100/s\n which is much smaller than expected.\nAfter processes complete the select and the number of waiting\n selects starts to fall, CPU load falls and then suddenly the\n remaining processes all complete within a few seconds and things\n perform normally until the next time there are a group of SELECT \n FOR UPDATE statements which bunch together and things then repeat.\n\nI found that performing extremely frequent vacuum analyze (every\n 30 minutes) helps a small amount but this is not that helpful so\n problems are still very apparent.\nI've exhausted all the performance tuning and analysis results I\n can find that seem even a little bit relevant but cannot get this\n cracked.\nIs anyone on the list able to help with suggestions of what I can\n do to track why this CPU hogging happens as this does seem to be\n the root of the problem?\nThanks in advance,\nJim", "msg_date": "Tue, 18 Aug 2020 19:52:56 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Message queue...\nAre rows deleted? Are they updated once or many times? Have you adjusted\nfillfactor on table or indexes? How many rows in the table currently or on\naverage? Is there any ordering to which rows you update?\n\nIt seems likely that one of the experts/code contributors will chime in and\nexplain about how locking that many rows in that many concurrent\nconnections means that some resource is overrun and so you are escalating\nto a table lock instead of actually truly locking only the 250 rows you\nwanted.\n\nOn the other hand, you say 80 cores and you are trying to increase the\nnumber of concurrent processes well beyond that without (much) disk I/O\nbeing involved. I wouldn't expect that to perform awesome.\n\nIs there a chance to modify the code to permit each process to lock 1000\nrows at a time and be content with 64 concurrent processes?\n\nMessage queue...Are rows deleted? Are they updated once or many times? Have you adjusted fillfactor on table or indexes? How many rows in the table currently or on average? Is there any ordering to which rows you update?It seems likely that one of the experts/code contributors will chime in and explain about how locking that many rows in that many concurrent connections means that some resource is overrun and so you are escalating to a table lock instead of actually truly locking only the 250 rows you wanted.On the other hand, you say 80 cores and you are trying to increase the number of concurrent processes well beyond that without (much) disk I/O being involved. I wouldn't expect that to perform awesome.Is there a chance to modify the code to permit each process to lock 1000 rows at a time and be content with 64 concurrent processes?", "msg_date": "Tue, 18 Aug 2020 18:08:56 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Thank you for the quick response.\n\nNo adjustments of fill factors.  
Hadn't though of that - I'll \ninvestigate and try some options to see if I can measure an effect.\n\nThere is some ordering on the select [ ORDER BY q_id] so each block of \n250 is sequential-ish queue items; I just need them more or less in the \norder they were queued so as near FIFO as possible without being totally \nstrict on absolute sequential order.\n\nTable has around 192K rows, as a row is processed it is deleted as part \nof the transaction with a commit at the end after all 250 are processed \n[partitioned table, state changes and it migrates to a different \npartition] and as the queue drops to 64K it is added to with 128K rows \nat a time.\n\nI've tuned the LIMIT value both up and down.  As I move the limit up, \nthe problem becomes substantially worse; 300 swamps it and the selects \ntake > 1 hour to complete; at 600 they just all lock everything up and \nit stops processing.  I did try 1,000 but it basically resulted in \nnothing being processed.\n\nLess processes does not give the throughput required because the queue \nsends data elsewhere which has a long round trip time but does permit \nover 1K concurrent connections as their work-round for throughput.  I'm \nstuck having to scale up my concurrent processes in order to compensate \nfor the long processing time of an individual queue item.\n\n\n\nOn 18-Aug.-2020 20:08, Michael Lewis wrote:\n> Message queue...\n> Are rows deleted? Are they updated once or many times? Have you adjusted\n> fillfactor on table or indexes? How many rows in the table currently or on\n> average? Is there any ordering to which rows you update?\n>\n> It seems likely that one of the experts/code contributors will chime in and\n> explain about how locking that many rows in that many concurrent\n> connections means that some resource is overrun and so you are escalating\n> to a table lock instead of actually truly locking only the 250 rows you\n> wanted.\n>\n> On the other hand, you say 80 cores and you are trying to increase the\n> number of concurrent processes well beyond that without (much) disk I/O\n> being involved. I wouldn't expect that to perform awesome.\n>\n> Is there a chance to modify the code to permit each process to lock 1000\n> rows at a time and be content with 64 concurrent processes?\n>\n\n\n\n\n\n\nThank you for the quick response.\nNo adjustments of fill factors.  Hadn't though of that - I'll\n investigate and try some options to see if I can measure an\n effect.\n\nThere is some ordering on the select [ ORDER BY q_id] so each\n block of 250 is sequential-ish queue items; I just need them more\n or less in the order they were queued so as near FIFO as possible\n without being totally strict on absolute sequential order.\n\nTable has around 192K rows, as a row is processed it is deleted\n as part of the transaction with a commit at the end after all 250\n are processed [partitioned table, state changes and it migrates to\n a different partition] and as the queue drops to 64K it is added\n to with 128K rows at a time.\nI've tuned the LIMIT value both up and down.  As I move the limit\n up, the problem becomes substantially worse; 300 swamps it and the\n selects take > 1 hour to complete; at 600 they just all lock\n everything up and it stops processing.  
I did try 1,000 but it\n basically resulted in nothing being processed.\n\nLess processes does not give the\n throughput required because the queue sends data elsewhere which\n has a long round trip time but does permit over 1K concurrent\n connections as their work-round for throughput.  I'm stuck having\n to scale up my concurrent processes in order to compensate for the\n long processing time of an individual queue item.\n\n\n\n\n\n\nOn 18-Aug.-2020 20:08, Michael Lewis\n wrote:\n\n\nMessage queue...\nAre rows deleted? Are they updated once or many times? Have you adjusted\nfillfactor on table or indexes? How many rows in the table currently or on\naverage? Is there any ordering to which rows you update?\n\nIt seems likely that one of the experts/code contributors will chime in and\nexplain about how locking that many rows in that many concurrent\nconnections means that some resource is overrun and so you are escalating\nto a table lock instead of actually truly locking only the 250 rows you\nwanted.\n\nOn the other hand, you say 80 cores and you are trying to increase the\nnumber of concurrent processes well beyond that without (much) disk I/O\nbeing involved. I wouldn't expect that to perform awesome.\n\nIs there a chance to modify the code to permit each process to lock 1000\nrows at a time and be content with 64 concurrent processes?", "msg_date": "Tue, 18 Aug 2020 20:21:38 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Did you try using NOWAIT instead of SKIP LOCKED to see if the behavior\nstill shows up?\n\nOn Tue, Aug 18, 2020, 8:22 PM Jim Jarvie <[email protected]> wrote:\n\n> Thank you for the quick response.\n>\n> No adjustments of fill factors. Hadn't though of that - I'll investigate\n> and try some options to see if I can measure an effect.\n>\n> There is some ordering on the select [ ORDER BY q_id] so each block of 250\n> is sequential-ish queue items; I just need them more or less in the order\n> they were queued so as near FIFO as possible without being totally strict\n> on absolute sequential order.\n>\n> Table has around 192K rows, as a row is processed it is deleted as part of\n> the transaction with a commit at the end after all 250 are processed\n> [partitioned table, state changes and it migrates to a different partition]\n> and as the queue drops to 64K it is added to with 128K rows at a time.\n>\n> I've tuned the LIMIT value both up and down. As I move the limit up, the\n> problem becomes substantially worse; 300 swamps it and the selects take > 1\n> hour to complete; at 600 they just all lock everything up and it stops\n> processing. I did try 1,000 but it basically resulted in nothing being\n> processed.\n> Less processes does not give the throughput required because the queue\n> sends data elsewhere which has a long round trip time but does permit over\n> 1K concurrent connections as their work-round for throughput. I'm stuck\n> having to scale up my concurrent processes in order to compensate for the\n> long processing time of an individual queue item.\n>\n>\n>\n> On 18-Aug.-2020 20:08, Michael Lewis wrote:\n>\n> Message queue...\n> Are rows deleted? Are they updated once or many times? Have you adjusted\n> fillfactor on table or indexes? How many rows in the table currently or on\n> average? 
Is there any ordering to which rows you update?\n>\n> It seems likely that one of the experts/code contributors will chime in and\n> explain about how locking that many rows in that many concurrent\n> connections means that some resource is overrun and so you are escalating\n> to a table lock instead of actually truly locking only the 250 rows you\n> wanted.\n>\n> On the other hand, you say 80 cores and you are trying to increase the\n> number of concurrent processes well beyond that without (much) disk I/O\n> being involved. I wouldn't expect that to perform awesome.\n>\n> Is there a chance to modify the code to permit each process to lock 1000\n> rows at a time and be content with 64 concurrent processes?\n>\n>\n>\n\nDid you try using NOWAIT instead of SKIP LOCKED to see if the behavior still shows up?On Tue, Aug 18, 2020, 8:22 PM Jim Jarvie <[email protected]> wrote:\n\nThank you for the quick response.\nNo adjustments of fill factors.  Hadn't though of that - I'll\n investigate and try some options to see if I can measure an\n effect.\n\nThere is some ordering on the select [ ORDER BY q_id] so each\n block of 250 is sequential-ish queue items; I just need them more\n or less in the order they were queued so as near FIFO as possible\n without being totally strict on absolute sequential order.\n\nTable has around 192K rows, as a row is processed it is deleted\n as part of the transaction with a commit at the end after all 250\n are processed [partitioned table, state changes and it migrates to\n a different partition] and as the queue drops to 64K it is added\n to with 128K rows at a time.\nI've tuned the LIMIT value both up and down.  As I move the limit\n up, the problem becomes substantially worse; 300 swamps it and the\n selects take > 1 hour to complete; at 600 they just all lock\n everything up and it stops processing.  I did try 1,000 but it\n basically resulted in nothing being processed.\n\nLess processes does not give the\n throughput required because the queue sends data elsewhere which\n has a long round trip time but does permit over 1K concurrent\n connections as their work-round for throughput.  I'm stuck having\n to scale up my concurrent processes in order to compensate for the\n long processing time of an individual queue item.\n\n\n\n\n\n\nOn 18-Aug.-2020 20:08, Michael Lewis\n wrote:\n\n\nMessage queue...\nAre rows deleted? Are they updated once or many times? Have you adjusted\nfillfactor on table or indexes? How many rows in the table currently or on\naverage? Is there any ordering to which rows you update?\n\nIt seems likely that one of the experts/code contributors will chime in and\nexplain about how locking that many rows in that many concurrent\nconnections means that some resource is overrun and so you are escalating\nto a table lock instead of actually truly locking only the 250 rows you\nwanted.\n\nOn the other hand, you say 80 cores and you are trying to increase the\nnumber of concurrent processes well beyond that without (much) disk I/O\nbeing involved. 
I wouldn't expect that to perform awesome.\n\nIs there a chance to modify the code to permit each process to lock 1000\nrows at a time and be content with 64 concurrent processes?", "msg_date": "Tue, 18 Aug 2020 20:38:56 -0400", "msg_from": "Henrique Montenegro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Tue, Aug 18, 2020 at 6:22 PM Jim Jarvie <[email protected]> wrote:\n\n> There is some ordering on the select [ ORDER BY q_id] so each block of 250\n> is sequential-ish queue items; I just need them more or less in the order\n> they were queued so as near FIFO as possible without being totally strict\n> on absolute sequential order.\n>\nHow long does each process take in total? How strict does that FIFO really\nneed to be when you are already doing SKIP LOCKED anyway?\n\nTable has around 192K rows, as a row is processed it is deleted as part of\n> the transaction with a commit at the end after all 250 are processed\n> [partitioned table, state changes and it migrates to a different partition]\n> and as the queue drops to 64K it is added to with 128K rows at a time.\n>\nCan you expound on the partitioning? Are all consumers of the queue always\nhitting one active partition and anytime a row is processed, it always\nmoves to one of many? archived type partitions?\n\nLess processes does not give the throughput required because the queue\n> sends data elsewhere which has a long round trip time\n>\n\nIs that done via FDW or otherwise within the same database transaction? Are\nyou connecting some queue consumer application code to Postgres, select for\nupdate, doing work on some remote system that is slow, and then coming back\nand committing the DB work?\n\nBy the way, top-posting is discouraged here and partial quotes with\ninterspersed comments are common practice.\n\nOn Tue, Aug 18, 2020 at 6:22 PM Jim Jarvie <[email protected]> wrote:\n\nThere is some ordering on the select [ ORDER BY q_id] so each\n block of 250 is sequential-ish queue items; I just need them more\n or less in the order they were queued so as near FIFO as possible\n without being totally strict on absolute sequential order.How long does each process take in total? How strict does that FIFO really need to be when you are already doing SKIP LOCKED anyway?\nTable has around 192K rows, as a row is processed it is deleted\n as part of the transaction with a commit at the end after all 250\n are processed [partitioned table, state changes and it migrates to\n a different partition] and as the queue drops to 64K it is added\n to with 128K rows at a time.Can you expound on the partitioning? Are all consumers of the queue always hitting one active partition and anytime a row is processed, it always moves to one of many? archived type partitions?\nLess processes does not give the\n throughput required because the queue sends data elsewhere which\n has a long round trip timeIs that done via FDW or otherwise within the same database transaction? 
Are you connecting some queue consumer application code to Postgres, select for update, doing work on some remote system that is slow, and then coming back and committing the DB work?By the way, top-posting is discouraged here and partial quotes with interspersed comments are common practice.", "msg_date": "Tue, 18 Aug 2020 18:39:45 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Also, have you checked how bloated your indexes are getting? Do you run\ndefault autovacuum settings? Did you update to the new default 2ms cost\ndelay value? With a destructive queue, it would be very important to ensure\nautovacuum is keeping up with the churn. Share your basic table structure\nand indexes, sanitized if need be.\n\n>\n\nAlso, have you checked how bloated your indexes are getting? Do you run default autovacuum settings? Did you update to the new default 2ms cost delay value? With a destructive queue, it would be very important to ensure autovacuum is keeping up with the churn. Share your basic table structure and indexes, sanitized if need be.", "msg_date": "Tue, 18 Aug 2020 18:45:26 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Tue, 2020-08-18 at 19:52 -0400, Jim Jarvie wrote:\n> I have a system which implements a message queue with the basic pattern that a process selects a group of,\n> for example 250, rows for processing via SELECT .. LIMIT 250 FOR UPDATE SKIP LOCKED.\n> \n> When there are a small number of concurrent connections to process the queue, this seems to work as\n> expected and connections quickly obtain a unique block of 250 rows for processing.\n> However, as I scale up the number of concurrent connections, I see a spike in CPU (to 100% across 80 cores)\n> when the SELECT FOR UPDATE SKIP LOCKED executes and the select processes wait for multiple minutes\n> (10-20 minutes) before completing. 
My use case requires around 256 concurrent processors for the queue\n> but I've been unable to scale beyond 128 without everything grinding to a halt.\n> \n> The queue table itself fits in RAM (with 2M hugepages) and during the wait, all the performance counters\n> drop to almost 0 - no disk read or write (semi-expected due to the table fitting in memory) with 100%\n> buffer hit rate in pg_top and row read around 100/s which is much smaller than expected.\n> \n> After processes complete the select and the number of waiting selects starts to fall, CPU load falls and\n> then suddenly the remaining processes all complete within a few seconds and things perform normally until\n> the next time there are a group of SELECT FOR UPDATE statements which bunch together and things then repeat.\n> \n> I found that performing extremely frequent vacuum analyze (every 30 minutes) helps a small amount but\n> this is not that helpful so problems are still very apparent.\n> \n> I've exhausted all the performance tuning and analysis results I can find that seem even a little bit\n> relevant but cannot get this cracked.\n> \n> Is anyone on the list able to help with suggestions of what I can do to track why this CPU hogging happens\n> as this does seem to be the root of the problem?\n\nYou should\n\n- check with \"pgstattuple\" if the table is bloated.\n\n- use \"perf\" to see where the CPU time is spent.\n\n- look at \"pg_stat_activity\" for wait events (unlikely if the CPU is busy).\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Wed, 19 Aug 2020 08:04:20 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Updates added, mixture of good and bad news:\n\nOn 18-Aug.-2020 20:39, Michael Lewis wrote:\n> How long does each process take in total? How strict does that FIFO really\n> need to be when you are already doing SKIP LOCKED anyway?\n\nThe processes all bottleneck[ed] for several minutes, approximately \nexponential to the number above the threshold where the problem \nhappened.  Up to around 60 concurrent worked with minimal delay but \nbeyond that a few more added about a minute, 70 about 5 minutes, 80 \nabout 30 minutes and beyond that was hours (I left up to 12 hours one time).\n\nHowever, I removed the order by clause which eliminated [only] the high \nCPU.  The processes all stopped in the same pattern, just without the \nhigh CPU use.  So the ordering was the factor in the CPU use, but was \nnot responsible for the - forgive the pun - lock up.\n\nI then added a random few seconds of delay to each process before it \nexecutes the select in order to prevent too many of them colliding on \nsimultaneous selects.  That was enough to make the lock-up disappear and \nindividual selects complete in a few ms, regardless of how many other \nconcurrent transactions are in progress (tested up to 192 concurrent).  \nBut still not quite out the woods - read below.\n\n> Can you expound on the partitioning? Are all consumers of the queue always\n> hitting one active partition and anytime a row is processed, it always\n> moves to one of many? 
archived type partitions?\n\nPartitions are list partitioned as 'incoming', 'processing', 'retry', \n'ok', 'failed':\n\nIncoming: This is itself hash partitioned (64 partitions) approx 10M \nrows added/day so partitioned to allow incoming throughput; this works well.\n\nProcessing: Simple partition, data is moved into this in blocks as the \nrows count drops below a threshold, another block is added, coming from \nthe incoming.\n\nRetry: simple partition, non fatal errors go in here and go back into \nthe processing queue for retries later.\n\nFailed: simple partition, fatal errors go here.  Thankfully very few.\n\nOK: hash partition, as everything that was in incoming should eventually \nend up here.  64 partitions currently.\n\nThere is one interesting thing about this.  When the SELECT FOR UPDATE \nSKIP LOCKED is executed, reasonably frequently, the select aborts with \nthe error:\n\nTuple to be locked was already moved to another partition due to \nconcurrent update.\n\nThis error still persists even though the lock-up has been removed by \nthe time delay, so there is a regular stream of transactions aborting \ndue to this (I just re-run the transaction to recover).\n\nNow, if locking worked as I understand it, if another process locked and \nmigrated, this should still have left the lock in place on the original \npartition and created a new one on the newly inserted partition until a \ncommit was done.  The second process should not have visibility on the \nnewly inserted row and the skip locked should simply have skipped over \nthe locked but deleted row on the original partition.\n\nWhat am I missing?  All of this feels like some locking/partitioning \nissue but I'm unsure exactly what.\n\n> Is that done via FDW or otherwise within the same database transaction? Are\n> you connecting some queue consumer application code to Postgres, select for\n> update, doing work on some remote system that is slow, and then coming back\n> and committing the DB work?\nAlas not FDW, an actual external system elsewhere in the world which \nsends an ACK when it has processed the message.  I have no control or \ninfluence on this.\n> By the way, top-posting is discouraged here and partial quotes with\n> interspersed comments are common practice.\nNoted!\n\n\n\n\n\n\nUpdates added, mixture of good and bad news:\n\nOn 18-Aug.-2020 20:39, Michael Lewis\n wrote:\n\n\n\n\nHow long does each process take in total? How strict does that FIFO really\nneed to be when you are already doing SKIP LOCKED anyway?\n\n\nThe processes all bottleneck[ed] for several minutes,\n approximately exponential to the number above the threshold where\n the problem happened.  Up to around 60 concurrent worked with\n minimal delay but beyond that a few more added about a minute, 70\n about 5 minutes, 80 about 30 minutes and beyond that was hours (I\n left up to 12 hours one time).\nHowever, I removed the order by clause which eliminated [only]\n the high CPU.  The processes all stopped in the same pattern, just\n without the high CPU use.  So the ordering was the factor in the\n CPU use, but was not responsible for the - forgive the pun - lock\n up.\nI then added a random few seconds of delay to each process before\n it executes the select in order to prevent too many of them\n colliding on simultaneous selects.  That was enough to make the\n lock-up disappear and individual selects complete in a few ms,\n regardless of how many other concurrent transactions are in\n progress (tested up to 192 concurrent).  
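A rough DDL reconstruction of the layout described above, for readers following along. Only the mq.queue name, the queueid/txobject/objectid/state columns, the 'ticket' object and the tx_active/tx_fail_retryable states appear verbatim later in the thread; every other enum label and partition name here is an assumption:

CREATE SCHEMA mq;

CREATE TYPE mq.queue_object AS ENUM ('ticket');   -- other object types omitted

-- only 'tx_active' and 'tx_fail_retryable' are confirmed by the thread;
-- the remaining labels are assumed
CREATE TYPE mq.tx_state AS ENUM
    ('tx_incoming', 'tx_active', 'tx_fail_retryable', 'tx_ok', 'tx_fail');

CREATE TABLE mq.queue (
    queueid   bigint          NOT NULL,
    txobject  mq.queue_object NOT NULL,
    objectid  bigint,
    state     mq.tx_state     NOT NULL
) PARTITION BY LIST (state);

-- small "in flight" partitions; an UPDATE that changes state moves the
-- row to another partition (row movement)
CREATE TABLE mq.queue_tx_active
    PARTITION OF mq.queue FOR VALUES IN ('tx_active');
CREATE TABLE mq.queue_tx_fail_retryable
    PARTITION OF mq.queue FOR VALUES IN ('tx_fail_retryable');
CREATE TABLE mq.queue_tx_fail
    PARTITION OF mq.queue FOR VALUES IN ('tx_fail');

-- the high-volume incoming end is hash sub-partitioned (64-way in the thread,
-- 2-way here); a matching hash-partitioned branch for the completed state is
-- omitted for brevity
CREATE TABLE mq.queue_incoming
    PARTITION OF mq.queue FOR VALUES IN ('tx_incoming')
    PARTITION BY HASH (queueid);
CREATE TABLE mq.queue_incoming_p0
    PARTITION OF mq.queue_incoming FOR VALUES WITH (MODULUS 2, REMAINDER 0);
CREATE TABLE mq.queue_incoming_p1
    PARTITION OF mq.queue_incoming FOR VALUES WITH (MODULUS 2, REMAINDER 1);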
But still not quite out\n the woods - read below.\n\n\nCan you expound on the partitioning? Are all consumers of the queue always\nhitting one active partition and anytime a row is processed, it always\nmoves to one of many? archived type partitions?\n\nPartitions are list partitioned as 'incoming', 'processing',\n 'retry', 'ok', 'failed':\nIncoming: This is itself hash partitioned (64 partitions) approx\n 10M rows added/day so partitioned to allow incoming throughput;\n this works well.\nProcessing: Simple partition, data is moved into this in blocks\n as the rows count drops below a threshold, another block is added,\n coming from the incoming.\nRetry: simple partition, non fatal errors go in here and go back\n into the processing queue for retries later.\nFailed: simple partition, fatal errors go here.  Thankfully very\n few.\nOK: hash partition, as everything that was in incoming should\n eventually end up here.  64 partitions currently.\nThere is one interesting thing about this.  When the SELECT FOR\n UPDATE SKIP LOCKED is executed, reasonably frequently, the select\n aborts with the error:\nTuple to be locked was already moved to another partition due to\n concurrent update.\nThis error still persists even though the lock-up has been\n removed by the time delay, so there is a regular stream of\n transactions aborting due to this (I just re-run the transaction\n to recover).\n\nNow, if locking worked as I understand it, if another process\n locked and migrated, this should still have left the lock in place\n on the original partition and created a new one on the newly\n inserted partition until a commit was done.  The second process\n should not have visibility on the newly inserted row and the skip\n locked should simply have skipped over the locked but deleted row\n on the original partition.\nWhat am I missing?  All of this feels like some\n locking/partitioning issue but I'm unsure exactly what.\n\n\nIs that done via FDW or otherwise within the same database transaction? Are\nyou connecting some queue consumer application code to Postgres, select for\nupdate, doing work on some remote system that is slow, and then coming back\nand committing the DB work?\n\n\n Alas not FDW, an actual external system elsewhere in the world which\n sends an ACK when it has processed the message.  I have no control\n or influence on this.\n\n\nBy the way, top-posting is discouraged here and partial quotes with\ninterspersed comments are common practice.\n\n\n Noted!", "msg_date": "Thu, 20 Aug 2020 09:35:38 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Great to hear that some of the issues are now mitigated. Though, perhaps\nyou actually require that ORDER BY if items are expected to be sitting in\nthe queue quite some time because you have incoming queue items in a burst\npattern and have to play catch up sometimes. If so, I highly suspect the\nindex on q_id is becoming very bloated and reindex concurrently would help.\n\n> Partitions are list partitioned as 'incoming', 'processing', 'retry',\n'ok', 'failed':\n\nI am unclear on what purpose a \"processing\" status would have. Shouldn't a\nrow be in the incoming status & locked by select for update, until it\neither gets updated to ok or failed (or left alone if retry is needed)?\nWhat purpose do the retry and processing statuses serve? 
I don't understand\nyour full workflow to venture a guess on how you are hitting that error\nregarding a row being in the wrong partition, but fewer main level\npartitions and removing unneeded updates seems likely to help or resolve\nthe issue perhaps.\n\nI don't know if you might have missed my last message, and the suggestion\nfrom Laurenz to check pgstattuple.\n\nAt a high level, it seems like any needed update to the rows would result\nin it being removed from the current partition and moved to another\npartition. If you are doing this in a transaction block, then you could\njust as well skip the select for update and just DELETE [] RETURNING from\nthe existing partition and insert into the new partition later (use a\nselect for update if you want to order the deletes*). If your transaction\nfails and gets rolled back, then the delete won't have happened and the row\nwill get picked up by the next consumer.\n\nAnother thought is that I don't know how performant that hash partitioning\nwill be for select for update, particularly if that targets many partitions\npotentially. Would it be feasible to match the number of partitions to the\nnumber of consumers and actually have each of them working on one?\n\n\n*\nhttps://www.2ndquadrant.com/en/blog/what-is-select-skip-locked-for-in-postgresql-9-5/\n\nGreat to hear that some of the issues are now mitigated. Though, perhaps you actually require that ORDER BY if items are expected to be sitting in the queue quite some time because you have incoming queue items in a burst pattern and have to play catch up sometimes. If so, I highly suspect the index on q_id is becoming very bloated and reindex concurrently would help.> Partitions are list partitioned as 'incoming', 'processing', 'retry', 'ok', 'failed':I am unclear on what purpose a \"processing\" status would have. Shouldn't a row be in the incoming status & locked by select for update, until it either gets updated to ok or failed (or left alone if retry is needed)? What purpose do the retry and processing statuses serve? I don't understand your full workflow to venture a guess on how you are hitting that error regarding a row being in the wrong partition, but fewer main level partitions and removing unneeded updates seems likely to help or resolve the issue perhaps.I don't know if you might have missed my last message, and the suggestion from Laurenz to check pgstattuple.At a high level, it seems like any needed update to the rows would result in it being removed from the current partition and moved to another partition. If you are doing this in a transaction block, then you could just as well skip the select for update and just DELETE [] RETURNING from the existing partition and insert into the new partition later (use a select for update if you want to order the deletes*). If your transaction fails and gets rolled back, then the delete won't have happened and the row will get picked up by the next consumer.Another thought is that I don't know how performant that hash partitioning will be for select for update, particularly if that targets many partitions potentially. 
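The single-step claim suggested here (delete the rows up front and let a rollback put them back) could look roughly like this against the illustrative job_queue table sketched earlier in the thread; the completed_jobs table in the comment is likewise assumed:

BEGIN;

-- claim and remove a batch in one statement; if the transaction rolls back,
-- the rows simply reappear for the next consumer
WITH claimed AS (
    DELETE FROM job_queue
    WHERE id IN (
        SELECT id
        FROM   job_queue
        ORDER  BY id            -- keeps the deletes loosely ordered
        LIMIT  250
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload
)
SELECT * FROM claimed;

-- after the external work succeeds, record the outcome elsewhere, e.g.
-- INSERT INTO completed_jobs (id, payload) VALUES (...);

COMMIT;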
Would it be feasible to match the number of partitions to the number of consumers and actually have each of them working on one?*https://www.2ndquadrant.com/en/blog/what-is-select-skip-locked-for-in-postgresql-9-5/", "msg_date": "Thu, 20 Aug 2020 11:30:52 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On 20-Aug.-2020 13:30, Michael Lewis wrote:\n> Great to hear that some of the issues are now mitigated. Though, perhaps\n> you actually require that ORDER BY if items are expected to be sitting in\n> the queue quite some time because you have incoming queue items in a burst\n> pattern and have to play catch up sometimes. If so, I highly suspect the\n> index on q_id is becoming very bloated and reindex concurrently would help.\nI managed to bypass the need for the sort by relying on the active feed \nonly sending the oldest items in for processing (it was always doing \nthat) but based on some of the earlier e-mails in this thread, it \nprompted the revelation that my order by when processing was really \npretty pointless because I need more-or-less ordered rather than \nstrictly ordered and that was already happening due to how the process \nlist was being fed.\n>\n> I don't know if you might have missed my last message, and the suggestion\n> from Laurenz to check pgstattuple.\nI still need to look at that, but since I had made some progress, I got \npretty exited and have not got round to this yet.\n> *\n> https://www.2ndquadrant.com/en/blog/what-is-select-skip-locked-for-in-postgresql-9-5/\n\nThis does warn about the overhead, but I've also upgraded pg_top on my \nsystem today and saw a useful additional data point that it displays - \nthe number of locks held by a process.\n\nWhat I see happening is that when the select statements collide, they \nare holding about 10-12 locks each and then begin to very slowly acquire \nmore locks every few seconds.  One process will grow quicker than others \nthen reach the target (250) and start processing.  Then another takes \nthe lead and so on until a critical mass is reached and then the \nremaining all acquire their locks in a few seconds.\n\nI still keep thinking there is some scaling type issue here in the \nlocking and possibly due to it being a partitioned table (due to that \ntuple moved error).\n\n\n\n\n\n\n\n\n\nOn 20-Aug.-2020 13:30, Michael Lewis\n wrote:\n\n\nGreat to hear that some of the issues are now mitigated. Though, perhaps\nyou actually require that ORDER BY if items are expected to be sitting in\nthe queue quite some time because you have incoming queue items in a burst\npattern and have to play catch up sometimes. 
If so, I highly suspect the\nindex on q_id is becoming very bloated and reindex concurrently would help.\n\n\n I managed to bypass the need for the sort by relying on the active\n feed only sending the oldest items in for processing (it was always\n doing that) but based on some of the earlier e-mails in this thread,\n it prompted the revelation that my order by when processing was\n really pretty pointless because I need more-or-less ordered rather\n than strictly ordered and that was already happening due to how the\n process list was being fed.\n\nI don't know if you might have missed my last message, and the suggestion\nfrom Laurenz to check pgstattuple.\n\n\n I still need to look at that, but since I had made some progress, I\n got pretty exited and have not got round to this yet.\n \n*\nhttps://www.2ndquadrant.com/en/blog/what-is-select-skip-locked-for-in-postgresql-9-5/\n\n\nThis does warn about the overhead, but I've also upgraded pg_top\n on my system today and saw a useful additional data point that it\n displays - the number of locks held by a process.\nWhat I see happening is that when the select statements collide,\n they are holding about 10-12 locks each and then begin to very\n slowly acquire more locks every few seconds.  One process will\n grow quicker than others then reach the target (250) and start\n processing.  Then another takes the lead and so on until a\n critical mass is reached and then the remaining all acquire their\n locks in a few seconds.\nI still keep thinking there is some scaling type issue here in\n the locking and possibly due to it being a partitioned table (due\n to that tuple moved error).", "msg_date": "Thu, 20 Aug 2020 17:13:27 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Can you share an explain analyze for the query that does the select for\nupdate? I wouldn't assume that partition pruning is possible at all with\nhash, and it would be interesting to see how it is finding those rows.\n\n>\n\nCan you share an explain analyze for the query that does the select for update? I wouldn't assume that partition pruning is possible at all with hash, and it would be interesting to see how it is finding those rows.", "msg_date": "Thu, 20 Aug 2020 15:42:36 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On 20-Aug.-2020 17:42, Michael Lewis wrote:\n> Can you share an explain analyze for the query that does the select for\n> update? I wouldn't assume that partition pruning is possible at all with\n> hash, and it would be interesting to see how it is finding those rows.\n\nWell this got interesting  - the already moved error showed up: Note, \nthe actual process partitions are regular table partitions, these are \nnot hashed.  
Only the incoming and completed are hashed due to row \ncounts at either end of the processing; in flight (where the issue shows \nup) is quite small:\n\n[queuedb] # explain analyze select queueid,txobject,objectid,state from \nmq.queue where (state = 'tx_active' or state='tx_fail_retryable') and \ntxobject = 'ticket' limit 250 for update skip locked;\nERROR:  40001: tuple to be locked was already moved to another partition \ndue to concurrent update\nLOCATION:  heapam_tuple_lock, heapam_handler.c:405\nTime: 579.131 ms\n[queuedb] # explain analyze select queueid,txobject,objectid,state from \nmq.queue where (state = 'tx_active' or state='tx_fail_retryable') and \ntxobject = 'ticket' limit 250 for update skip locked;\nERROR:  40001: tuple to be locked was already moved to another partition \ndue to concurrent update\nLOCATION:  heapam_tuple_lock, heapam_handler.c:405\nTime: 568.008 ms\n[queuedb] # explain analyze select queueid,txobject,objectid,state from \nmq.queue where (state = 'tx_active' or state='tx_fail_retryable') and \ntxobject = 'ticket' limit 250 for update skip locked;\n       QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n  Limit  (cost=0.00..25.71 rows=250 width=34) (actual \ntime=1306.041..1306.338 rows=250 loops=1)\n    ->  LockRows  (cost=0.00..7934.38 rows=77150 width=34) (actual \ntime=1306.040..1306.315 rows=250 loops=1)\n          ->  Append  (cost=0.00..7162.88 rows=77150 width=34) (actual \ntime=520.685..1148.347 rows=31500 loops=1)\n                ->  Seq Scan on queue_tx_active  (cost=0.00..6764.50 \nrows=77000 width=34) (actual time=520.683..1145.258 rows=31500 loops=1)\n                      Filter: ((txobject = 'ticket'::mq.queue_object) \nAND ((state = 'tx_active'::mq.tx_state) OR (state = \n'tx_fail_retryable'::mq.tx_state)))\n                ->  Seq Scan on queue_tx_fail_retryable \n  (cost=0.00..12.62 rows=150 width=34) (never executed)\n                      Filter: ((txobject = 'ticket'::mq.queue_object) \nAND ((state = 'tx_active'::mq.tx_state) OR (state = \n'tx_fail_retryable'::mq.tx_state)))\n  Planning Time: 0.274 ms\n  Execution Time: 1306.380 ms\n(9 rows)\n\nTime: 1317.150 ms (00:01.317)\n[queuedb] #\n\n\n\n\n\n\n\n\n\nOn 20-Aug.-2020 17:42, Michael Lewis\n wrote:\n\n\nCan you share an explain analyze for the query that does the select for\nupdate? I wouldn't assume that partition pruning is possible at all with\nhash, and it would be interesting to see how it is finding those rows.\n\n\nWell this got interesting  - the already moved error showed up: \n Note, the actual process partitions are regular table partitions,\n these are not hashed.  
Only the incoming and completed are hashed\n due to row counts at either end of the processing; in flight\n (where the issue shows up) is quite small:\n\n[queuedb] # explain analyze select\n queueid,txobject,objectid,state from mq.queue where (state =\n 'tx_active' or state='tx_fail_retryable') and txobject = 'ticket'\n limit 250 for update skip locked;\n ERROR:  40001: tuple to be locked was already moved to another\n partition due to concurrent update\n LOCATION:  heapam_tuple_lock, heapam_handler.c:405\n Time: 579.131 ms\n [queuedb] # explain analyze select queueid,txobject,objectid,state\n from mq.queue where (state = 'tx_active' or\n state='tx_fail_retryable') and txobject = 'ticket' limit 250 for\n update skip locked;\n ERROR:  40001: tuple to be locked was already moved to another\n partition due to concurrent update\n LOCATION:  heapam_tuple_lock, heapam_handler.c:405\n Time: 568.008 ms\n [queuedb] # explain analyze select queueid,txobject,objectid,state\n from mq.queue where (state = 'tx_active' or\n state='tx_fail_retryable') and txobject = 'ticket' limit 250 for\n update skip locked;\n                                                                  \n       QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n  Limit  (cost=0.00..25.71 rows=250 width=34) (actual\n time=1306.041..1306.338 rows=250 loops=1)\n    ->  LockRows  (cost=0.00..7934.38 rows=77150 width=34)\n (actual time=1306.040..1306.315 rows=250 loops=1)\n          ->  Append  (cost=0.00..7162.88 rows=77150 width=34)\n (actual time=520.685..1148.347 rows=31500 loops=1)\n                ->  Seq Scan on queue_tx_active\n  (cost=0.00..6764.50 rows=77000 width=34) (actual\n time=520.683..1145.258 rows=31500 loops=1)\n                      Filter: ((txobject =\n 'ticket'::mq.queue_object) AND ((state = 'tx_active'::mq.tx_state)\n OR (state = 'tx_fail_retryable'::mq.tx_state)))\n                ->  Seq Scan on queue_tx_fail_retryable\n  (cost=0.00..12.62 rows=150 width=34) (never executed)\n                      Filter: ((txobject =\n 'ticket'::mq.queue_object) AND ((state = 'tx_active'::mq.tx_state)\n OR (state = 'tx_fail_retryable'::mq.tx_state)))\n  Planning Time: 0.274 ms\n  Execution Time: 1306.380 ms\n (9 rows)\n\n Time: 1317.150 ms (00:01.317)\n [queuedb] #", "msg_date": "Thu, 20 Aug 2020 18:39:59 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Thu, Aug 20, 2020 at 4:40 PM Jim Jarvie <[email protected]> wrote:\n\n> On 20-Aug.-2020 17:42, Michael Lewis wrote:\n>\n> Can you share an explain analyze for the query that does the select for\n> update? I wouldn't assume that partition pruning is possible at all with\n> hash, and it would be interesting to see how it is finding those rows.\n>\n> Well this got interesting - the already moved error showed up: Note, the\n> actual process partitions are regular table partitions, these are not\n> hashed. 
Only the incoming and completed are hashed due to row counts at\n> either end of the processing; in flight (where the issue shows up) is quite\n> small:\n>\n> [queuedb] # explain analyze select queueid,txobject,objectid,state from\n> mq.queue where (state = 'tx_active' or state='tx_fail_retryable') and\n> txobject = 'ticket' limit 250 for update skip locked;\n> ERROR: 40001: tuple to be locked was already moved to another partition\n> due to concurrent update\n> LOCATION: heapam_tuple_lock, heapam_handler.c:405\n> Time: 579.131 ms\n>\nThat is super curious. I hope that someone will jump in with an explanation\nor theory on this.\n\nI still wonder why the move between partitions is needed though if the work\nis either done (failed or successful) or not done... not started, retry\nneeded or in progress... it doesn't matter. It needs to get picked up by\nthe next process if it isn't already row locked.\n\n>\n\nOn Thu, Aug 20, 2020 at 4:40 PM Jim Jarvie <[email protected]> wrote:\n\nOn 20-Aug.-2020 17:42, Michael Lewis\n wrote:\n\nCan you share an explain analyze for the query that does the select for\nupdate? I wouldn't assume that partition pruning is possible at all with\nhash, and it would be interesting to see how it is finding those rows.\n\n\nWell this got interesting  - the already moved error showed up: \n Note, the actual process partitions are regular table partitions,\n these are not hashed.  Only the incoming and completed are hashed\n due to row counts at either end of the processing; in flight\n (where the issue shows up) is quite small:\n\n[queuedb] # explain analyze select\n queueid,txobject,objectid,state from mq.queue where (state =\n 'tx_active' or state='tx_fail_retryable') and txobject = 'ticket'\n limit 250 for update skip locked;\n ERROR:  40001: tuple to be locked was already moved to another\n partition due to concurrent update\n LOCATION:  heapam_tuple_lock, heapam_handler.c:405\n Time: 579.131 msThat is super curious. I hope that someone will jump in with an explanation or theory on this.I still wonder why the move between partitions is needed though if the work is either done (failed or successful) or not done... not started, retry needed or in progress... it doesn't matter. It needs to get picked up by the next process if it isn't already row locked.", "msg_date": "Thu, 20 Aug 2020 17:01:17 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Fri, 21 Aug 2020 at 11:01, Michael Lewis <[email protected]> wrote:\n>\n> On Thu, Aug 20, 2020 at 4:40 PM Jim Jarvie <[email protected]> wrote:\n>>\n>> On 20-Aug.-2020 17:42, Michael Lewis wrote:\n>>\n>> Can you share an explain analyze for the query that does the select for\n>> update? I wouldn't assume that partition pruning is possible at all with\n>> hash, and it would be interesting to see how it is finding those rows.\n>>\n>> Well this got interesting - the already moved error showed up: Note, the actual process partitions are regular table partitions, these are not hashed. 
Only the incoming and completed are hashed due to row counts at either end of the processing; in flight (where the issue shows up) is quite small:\n>>\n>> [queuedb] # explain analyze select queueid,txobject,objectid,state from mq.queue where (state = 'tx_active' or state='tx_fail_retryable') and txobject = 'ticket' limit 250 for update skip locked;\n>> ERROR: 40001: tuple to be locked was already moved to another partition due to concurrent update\n>> LOCATION: heapam_tuple_lock, heapam_handler.c:405\n>> Time: 579.131 ms\n>\n> That is super curious. I hope that someone will jump in with an explanation or theory on this.\n>\n> I still wonder why the move between partitions is needed though if the work is either done (failed or successful) or not done... not started, retry needed or in progress... it doesn't matter. It needs to get picked up by the next process if it isn't already row locked.\n\nI may be heading off in the wrong direction as I'm not fully sure I\nunderstand what the complaint is about, but isn't the executor just\nhitting dead rows in one of the active or failed partitions that have\nbeen moved off to some other partition?\n\nWhen updates occur in a non-partitioned table we can follow item\npointer chains to find the live row and check if the WHERE clause\nstill matches to determine if the row should be updated, or in this\ncase just locked since it's a SELECT FOR UPDATE. However, with\npartitioned table, a concurrent UPDATE may have caused the row to have\nbeen moved off to another partition, in which case the tuple's item\npointer cannot point to it since we don't have enough address space,\nwe only have 6 bytes for a TID. To get around the fact that we can't\nfollow these update chains, we just throw the serialization error,\nwhich is what you're getting. Ideally, we'd figure out where the live\nversion of the tuple is and check if it matches the WHERE clause and\nlock it if it does, but we've no means to do that with the current\ndesign.\n\nIf the complaint is about the fact that you're getting the error and\nyou think you shouldn't be because you said \"SKIP LOCKED\" then I'm not\nreally sure the fact that you said \"SKIP LOCKED\" gives us the right to\nignore this case. The user only gave us the go-ahead to skip locked\ntuples, not skip tuples that we just failed to follow item pointer\nchains for. It might be okay to do this for rows that have since\nbeen marked as complete since they no longer match your WHERE clause,\nhowever, if the row has gone from the queue_tx_active partition into\nthe queue_tx_fail_retryable partition then I don't see why we'd have\nthe right to skip the tuple. Your query says you want tuples that need\nto be retried. We can't go skipping them.\n\nSo isn't the fix just to code the application to retry on 40001 errors?\n\nDavid\n\n\n", "msg_date": "Fri, 21 Aug 2020 11:37:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Fri, Aug 21, 2020 at 9:58 AM Jim Jarvie <[email protected]> wrote:\n> However, as I scale up the number of concurrent connections, I see a spike in CPU (to 100% across 80 cores) when the SELECT FOR UPDATE SKIP LOCKED executes and the select processes wait for multiple minutes (10-20 minutes) before completing. 
My use case requires around 256 concurrent processors for the queue but I've been unable to scale beyond 128 without everything grinding to a halt.\n\nMaybe it's just getting algorithmically ugly. To claim some job rows,\nyou have to skip all dead/non-matching tuples left behind so far at\nthe start of the table by all the other sessions, and then also all\ncurrently locked tuples, and you have to do update-chain walks on some\nof them too. It all gets a bit explosive once you have such high\nnumbers of workers.\n\nI think I'd experiment with splitting the job table up into N tables\nand feed jobs into all of them about evenly (by hashing, at random,\nwhatever), and then I'd assign each consumer a \"preferred\" table where\nit looks for jobs first (perhaps my_worker_id % ntables), before\ntrying the others in round robin order. Then they won't trample on\neach other's toes so much.\n\nIn the past I've wondered about a hypothetical circular_seqscan\noption, which would cause table scans to start where they left off\nlast time in each backend, so SELECT * FROM t LIMIT 1 repeated would\nshow you a different row each time until you get all the way around to\nthe start again (as we're entirely within our rights to do for a query\nwith no ORDER BY). That'd give the system a chance to vacuum and\nstart refilling the start of the table before you get around to it\nagain, instead of repeatedly having to step over the same useless\npages every time you need a new job. Combined with the N tables\nthing, you'd be approaching a sweet spot for contention and dead tuple\navoidance. The synchronized_seqscans setting is related to this idea,\nbut more expensive, different, and probably not useful.\n\nHmm. I guess another way to avoid colliding with others' work would\nbe to try to use SELECT * FROM t TABLESAMPLE SYSTEM (10) WHERE ... FOR\nUPDATE SKIP LOCKED LIMIT .... It's less cache-friendly, and less\norder-preserving, but way more contention-friendly. That has another\ncomplication though; how do you pick 10? And if it doesn't return any\nor enough rows, it doesn't mean there isn't enough, so you may need to\nbe ready to fall back to the plain approach if having 250 rows is\nreally important to you and TABLESAMPLE doesn't give you enough. Or\nsomething.\n\nBy the way, when working with around 64 consumer processes I was also\nannoyed by the thundering herd problem when using NOTIFY. I found\nvarious workaround solutions to that, but ultimately I think we need\nmore precise wakeups for that sort of thing, which I hope to revisit\none day.\n\n\n", "msg_date": "Fri, 21 Aug 2020 16:34:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Merging a couple of emails:\n\nOn 20-Aug.-2020 19:37, David Rowley wrote:\n\n> When updates occur in a non-partitioned table we can follow item\n> pointer chains to find the live row and check if the WHERE clause\n> still matches to determine if the row should be updated, or in this\n> case just locked since it's a SELECT FOR UPDATE. However, with\n> partitioned table, a concurrent UPDATE may have caused the row to have\n> been moved off to another partition, in which case the tuple's item\n> pointer cannot point to it since we don't have enough address space,\n> we only have 6 bytes for a TID. To get around the fact that we can't\n> follow these update chains, we just throw the serialization error,\n> which is what you're getting. 
Ideally, we'd figure out where the live\n> version of the tuple is and check if it matches the WHERE clause and\n> lock it if it does, but we've no means to do that with the current\n> design.\nThis is the absolute best description of what causes the \"tuple to be \nlocked was already moved to another partition due to concurrent update\" \nmessage I have ever seen.  It totally makes sense why this happens given \nyour explanation.  Thank you for giving the detail.\n\nI am backing off the query with a time delay and then retrying and that \nseems to be the correct approach as well, but I only stumbled upon that \nby accident.  Hopefully when the google gnomes index the mailing list \nthis message will come up to save others time and worry about the message.\n\nOn 21-Aug.-2020 00:34, Thomas Munro wrote:\n>\n> Maybe it's just getting algorithmically ugly. To claim some job rows,\n> you have to skip all dead/non-matching tuples left behind so far at\n> the start of the table by all the other sessions, and then also all\n> currently locked tuples, and you have to do update-chain walks on some\n> of them too. It all gets a bit explosive once you have such high\n> numbers of workers.\nYes, fundamentally it seems to come down to traffic volume.  When I get \nover 128 connections all selecting and locking that one table, applying \nthe locks seems to struggle and the problem grows exponentially.  I'm an \nextreme edge case, so not really a scenario that locking could ever have \nbeen expected to handle but it's pretty good that it gets up to 128 at all.\n> I think I'd experiment with splitting the job table up into N tables\n> and feed jobs into all of them about evenly (by hashing, at random,\n> whatever), and then I'd assign each consumer a \"preferred\" table where\n> it looks for jobs first (perhaps my_worker_id % ntables), before\n> trying the others in round robin order. Then they won't trample on\n> each other's toes so much.\nI think this is a good idea, but (in my case) I think this is where it \nwill need v13 which is going to give that via \"Allow ROW values to be \nused as partitioning expressions\" ? (e.g. will v13 permit queueid mod \n200 as the partition expression to make 200 partitions to allow 200 \n[contention free] consumers?).\n> In the past I've wondered about a hypothetical circular_seqscan\n> option, which would cause table scans to start where they left off\n> last time in each backend, so SELECT * FROM t LIMIT 1 repeated would\n> show you a different row each time until you get all the way around to\n> the start again (as we're entirely within our rights to do for a query\n> with no ORDER BY). That'd give the system a chance to vacuum and\n> start refilling the start of the table before you get around to it\n> again, instead of repeatedly having to step over the same useless\n> pages every time you need a new job.\nI like the sound of this; if I understand correctly, it would \nessentially walk in insertion(-ish) order which would be OK for me. As \nlong as it was documented clearly; perhaps I should put a page online \nabout high traffic message queues with PostgreSQL for people to find \nwhen they try the same thing.\n>\n> Hmm. I guess another way to avoid colliding with others' work would\n> be to try to use SELECT * FROM t TABLESAMPLE SYSTEM (10) WHERE ... FOR\n> UPDATE SKIP LOCKED LIMIT .... It's less cache-friendly, and less\n> order-preserving, but way more contention-friendly. That has another\n> complication though; how do you pick 10? 
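Written out against the same illustrative job_queue table, the TABLESAMPLE variant being discussed would be something like the following; the 10 percent figure is the arbitrary knob the question refers to:

SELECT id, payload
FROM   job_queue TABLESAMPLE SYSTEM (10)   -- visit ~10% of the table's pages, chosen at random
LIMIT  250
FOR UPDATE SKIP LOCKED;

-- if this returns fewer rows than wanted, the caller falls back to the plain
-- ORDER BY id LIMIT 250 FOR UPDATE SKIP LOCKED form for that batch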
And if it doesn't return any\n> or enough rows, it doesn't mean there isn't enough, so you may need to\n> be ready to fall back to the plain approach if having 250 rows is\n> really important to you and TABLESAMPLE doesn't give you enough. Or\n> something.\n\nFor me, this is an acceptable compromise.  I just need to consume \nsomething on each pass and up to 250 items but getting something less \nthan 250 would still allow progress and if it removed wait time, could \neven be an overall greater throughput.  I may try this just to see how \nit performs.\n\nIf someone has a strictly ordered queue, they will never really have \nthis issue as they must start at the beginning and go sequentially using \nonly 1 consumer.  Because I only need loose/approximate ordering and \nthroughput is the objective, all the locking and contention comes into play.\n\n> By the way, when working with around 64 consumer processes I was also\n> annoyed by the thundering herd problem when using NOTIFY. I found\n> various workaround solutions to that, but ultimately I think we need\n> more precise wakeups for that sort of thing, which I hope to revisit\n> one day.\n\nI'm making a note of this because I also happen to have a different \nscenario which has NOTIFY with well over 100 LISTEN consumers...  That's \nnot given me problems - yet - but I'm now aware of this should problems \narise.\n\n\n\n", "msg_date": "Fri, 21 Aug 2020 11:24:20 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Tue, Aug 18, 2020 at 8:22 PM Jim Jarvie <[email protected]> wrote:\n\n> I've tuned the LIMIT value both up and down. As I move the limit up, the\n> problem becomes substantially worse; 300 swamps it and the selects take > 1\n> hour to complete; at 600 they just all lock everything up and it stops\n> processing. I did try 1,000 but it basically resulted in nothing being\n> processed.\n>\nYou've only described what happens when you turn the LIMIT up. What\nhappens when you turn it down? Why did you pick 250 in the first place? I\ndon't see the rationale for having 250*256 rows locked simultaneously. I\ncan see reasons you might want a LIMIT as high as 250, or for having 256\nprocesses. I just don't see why you would want to do both in the same\nsystem.\n\n> Less processes does not give the throughput required because the queue\n> sends data elsewhre which has a long round trip time but does permit over\n> 1K concurrent connections as their work-round for throughput. I'm stuck\n> having to scale up my concurrent processes in order to compensate for the\n> long processing time of an individual queue item.\n>\n\nYou've tied the database concurrency to the external process concurrency.\nWhile this might be convenient, there is no reason to think it will be\noptimal. If you achieve concurrency by having 256 processes, why does each\nprocess need to lock 250 rows at time. Having 64,000 rows locked to\nobtain 256-fold concurrency seems like a poor design.\n\nWith modern tools it should not be too hard to have just one process obtain\n1000 rows, and launch 1000 concurrent external tasks. Either with threads\n(making sure only one thread deals with the database), or with\nasynchronous operations. 
(Then the problem would be how to harvest the\nresults, it couldn't unlock the rows until all external tasks have\nfinished, which would be a problem if some took much longer than others).\n\nIt is easy to reproduce scaling problems when you have a large number of\nprocesses trying to do ORDER BY id LIMIT 250 FOR UPDATE SKIP LOCKED without\nall the partitioning and stuff. I don't know if the problems are as severe\nas you describe with your very elaborate setup--or even if they have the\nsame bottleneck. But in the simple case, there seems to be a lot of\nspin-lock contention, as every selecting query needs to figure out if every\nmarked-as-locked row is truly locked, by asking if the apparently-locking\ntransaction is still valid.\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Aug 18, 2020 at 8:22 PM Jim Jarvie <[email protected]> wrote:\n\nI've tuned the LIMIT value both up and down.  As I move the limit\n up, the problem becomes substantially worse; 300 swamps it and the\n selects take > 1 hour to complete; at 600 they just all lock\n everything up and it stops processing.  I did try 1,000 but it\n basically resulted in nothing being processed.You've only described what happens when you turn the LIMIT up.  What happens when you turn it down?  Why did you pick 250 in the first place?  I don't see the rationale for having 250*256 rows locked simultaneously.  I can see reasons you might want a LIMIT as high as 250, or for having 256 processes.  I just don't see why you would want to do both in the same system.\nLess processes does not give the\n throughput required because the queue sends data elsewhre which\n has a long round trip time but does permit over 1K concurrent\n connections as their work-round for throughput.  I'm stuck having\n to scale up my concurrent processes in order to compensate for the\n long processing time of an individual queue item.You've tied the database concurrency to the external process concurrency.  While this might be convenient, there is no reason to think it will be optimal.  If you achieve concurrency by having 256 processes, why does each process need to lock 250 rows at  time.  Having 64,000 rows locked to obtain 256-fold concurrency seems like a poor design.With modern tools it should not be too hard to have just one process obtain 1000 rows, and launch 1000 concurrent external tasks.  Either with threads (making sure only one thread deals with the database), or with asynchronous operations.  (Then the problem would be how to harvest the results, it couldn't unlock the rows until all external tasks have finished, which would be a problem if some took much longer than others).It is easy to reproduce scaling problems when you have a large number of processes trying to do ORDER BY id LIMIT 250 FOR UPDATE SKIP LOCKED without all the partitioning and stuff.  I don't know if the problems are as severe as you describe with your very elaborate setup--or even if they have the same bottleneck.  But in the simple case, there seems to be a lot of spin-lock contention, as every selecting query needs to figure out if every marked-as-locked row is truly locked, by asking if the apparently-locking transaction is still valid.Cheers,Jeff", "msg_date": "Mon, 24 Aug 2020 13:58:15 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Hi all, especially Jim Jarvie, I saw your email to me only now on my \nrelated issue. 
My issue remains this one:\n\n> Well this got interesting  - the already moved error showed up: \nand I have already gone through all those index pruning and all that \ngood stuff.\n\nI remain with my original question from 30th of June, to me it feels \nlike a bug of some sort:\n\n> \"tuple to be locked was already moved to another partition due to \n> concurrent update\"\n>\n> This would not exactly look like a bug, because the message says \"to \n> be locked\", so at least it's not allowing two workers to lock the same \n> tuple. But it seems that the skip-locked mode should not make an error \n> out of this, but treat it as the tuple was already locked. Why would \n> it want to lock the tuple (representing the job) if another worker has \n> already finished his UPDATE of the job to mark it as \"done\" (which is \n> what makes the tuple move to the \"completed\" partition.)\n>\n> Either the SELECT for jobs to do returned a wrong tuple, which was \n> already updated, or there is some lapse in the locking.\n>\n> Either way it would seem to be a waste of time throwing all these \n> errors when the tuple should not even have been selected for update \n> and locking.\n>\n> I wonder if anybody knows anything about that issue? Of course you'll \n> want to see the DDL and SQL queries, etc. but you can't really try it \n> out unless you do some massively parallel magic.\n\nI still think that it should simply not happen. Don't waste time on old \ntuples trying to fetch and lock something that's no longer there. It's a \nwaste of resources.\n\nregards,\n-Gunther\n\nOn 8/20/2020 6:39 PM, Jim Jarvie wrote:\n>\n>\n> On 20-Aug.-2020 17:42, Michael Lewis wrote:\n>> Can you share an explain analyze for the query that does the select for\n>> update? I wouldn't assume that partition pruning is possible at all with\n>> hash, and it would be interesting to see how it is finding those rows.\n>\n> Well this got interesting  - the already moved error showed up:  Note, \n> the actual process partitions are regular table partitions, these are \n> not hashed.  
Only the incoming and completed are hashed due to row \n> counts at either end of the processing; in flight (where the issue \n> shows up) is quite small:\n>\n> [queuedb] # explain analyze select queueid,txobject,objectid,state \n> from mq.queue where (state = 'tx_active' or state='tx_fail_retryable') \n> and txobject = 'ticket' limit 250 for update skip locked;\n> ERROR:  40001: tuple to be locked was already moved to another \n> partition due to concurrent update\n> LOCATION:  heapam_tuple_lock, heapam_handler.c:405\n> Time: 579.131 ms\n> [queuedb] # explain analyze select queueid,txobject,objectid,state \n> from mq.queue where (state = 'tx_active' or state='tx_fail_retryable') \n> and txobject = 'ticket' limit 250 for update skip locked;\n> ERROR:  40001: tuple to be locked was already moved to another \n> partition due to concurrent update\n> LOCATION:  heapam_tuple_lock, heapam_handler.c:405\n> Time: 568.008 ms\n> [queuedb] # explain analyze select queueid,txobject,objectid,state \n> from mq.queue where (state = 'tx_active' or state='tx_fail_retryable') \n> and txobject = 'ticket' limit 250 for update skip locked;\n>         QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=0.00..25.71 rows=250 width=34) (actual \n> time=1306.041..1306.338 rows=250 loops=1)\n>    ->  LockRows  (cost=0.00..7934.38 rows=77150 width=34) (actual \n> time=1306.040..1306.315 rows=250 loops=1)\n>          ->  Append  (cost=0.00..7162.88 rows=77150 width=34) (actual \n> time=520.685..1148.347 rows=31500 loops=1)\n>                ->  Seq Scan on queue_tx_active  (cost=0.00..6764.50 \n> rows=77000 width=34) (actual time=520.683..1145.258 rows=31500 loops=1)\n>                      Filter: ((txobject = 'ticket'::mq.queue_object) \n> AND ((state = 'tx_active'::mq.tx_state) OR (state = \n> 'tx_fail_retryable'::mq.tx_state)))\n>                ->  Seq Scan on queue_tx_fail_retryable \n>  (cost=0.00..12.62 rows=150 width=34) (never executed)\n>                      Filter: ((txobject = 'ticket'::mq.queue_object) \n> AND ((state = 'tx_active'::mq.tx_state) OR (state = \n> 'tx_fail_retryable'::mq.tx_state)))\n>  Planning Time: 0.274 ms\n>  Execution Time: 1306.380 ms\n> (9 rows)\n>\n> Time: 1317.150 ms (00:01.317)\n> [queuedb] #\n>\n\n\n", "msg_date": "Mon, 7 Sep 2020 14:04:58 -0400", "msg_from": "Raj <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "Hi Gunther\n\nOn 07-Sep.-2020 14:04, [email protected] wrote:\n> Hi all, especially Jim Jarvie, I saw your email to me only now on my \n> related issue. My issue remains this one:\n---8<---\n> I remain with my original question from 30th of June, to me it feels \n> like a bug of some sort:\n>\n>> \"tuple to be locked was already moved to another partition due to \n>> concurrent update\"\n>>\n---8<---\n> I still think that it should simply not happen. Don't waste time on \n> old tuples trying to fetch and lock something that's no longer there. \n> It's a waste of resources.\n>\nI'm inclined to agree that the error seems to indicate PostgreSQL knows \nthe row was locked & migrated, so attempting to lock it should not \nreally result in the error when SKIP LOCKED is set (but it should behave \nas it does when there is no SKIP LOCKED).  
For the SKIP LOCKED case, it \nshould treat the migrated as being exactly the same as already locked.\n\nI think this is an edge case on the SKIP LOCKED that is not handled as \nit should be.\n\nDo others agree?\n\nJim\n\n\n\n> regards,\n> -Gunther\n>\n\n\n\n\n\n\nHi Gunther\n\nOn 07-Sep.-2020 14:04, [email protected]\n wrote:\n\nHi all,\n especially Jim Jarvie, I saw your email to me only now on my\n related issue. My issue remains this one:\n \n\n ---8<---\nI remain\n with my original question from 30th of June, to me it feels like a\n bug of some sort:\n \n\n\"tuple to be locked was already moved to\n another partition due to concurrent update\"\n \n\n\n\n ---8<---\nI still\n think that it should simply not happen. Don't waste time on old\n tuples trying to fetch and lock something that's no longer there.\n It's a waste of resources.\n \n\n\nI'm inclined to agree that the error seems to indicate PostgreSQL\n knows the row was locked & migrated, so attempting to lock it\n should not really result in the error when SKIP LOCKED is set (but\n it should behave as it does when there is no SKIP LOCKED).  For\n the SKIP LOCKED case, it should treat the migrated as being\n exactly the same as already locked.\nI think this is an edge case on the SKIP LOCKED that is not\n handled as it should be.\nDo others agree?\nJim\n\n\n\n\nregards,\n \n -Gunther", "msg_date": "Fri, 11 Sep 2020 14:29:06 -0400", "msg_from": "Jim Jarvie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On Tue, 8 Sep 2020 at 06:05, Raj <[email protected]> wrote:\n>\n> > This would not exactly look like a bug, because the message says \"to\n> > be locked\", so at least it's not allowing two workers to lock the same\n> > tuple. But it seems that the skip-locked mode should not make an error\n> > out of this, but treat it as the tuple was already locked. Why would\n> > it want to lock the tuple (representing the job) if another worker has\n> > already finished his UPDATE of the job to mark it as \"done\" (which is\n> > what makes the tuple move to the \"completed\" partition.)\n\n(It's not very clear who wrote the above text since the quote does not\nmention who the author is and the original email didn't appear to have\nmade it to the list)\n\nIt's not a bug. I think the quoted text is expecting a bit too much\nfrom the database. It does not know that if the tuple is updated and\nmoved to another partition that it can be safely ignored. For all\nthe database knows, the new version of the tuple that's in the new\npartition still matches the query's WHERE clause and should be locked.\nIf we just go and ignore moved off tuples then we could miss\nprocessing tuples that still need to be processed.\n\nIt's perhaps not impossible to make it work slightly better if it were\nsomehow possible to inform heapam_tuple_lock() that it's operating on\na partition and the query queried a partitioned table and that all but\n1 partition was pruned with partition pruning. In this case we could\nbe certain the new verison of the tuple can't match the WHERE clause\nof the SELECT since partition pruning determined that all other\npartitions don't match the WHERE clause. 
However, that's:\n\na) a pretty horrid thing to have to teach heapam_tuple_lock() about, and;\nb) only going to work when 1 partition survives partition pruning,\nwhich is pretty horrible since doing ATTACH PARTITION could suddenly\ncause your queries to fail randomly.\n\nIf you had 3 partitions, one for \"pending\", \"retry\" and \"complete\",\nand you wanted to lock all rows that are in a \"pending\" or \"retry\"\nstate, then when we encounter an updated row in the \"pending\"\npartition, we have no knowledge if it was moved into the \"retry\" or\nthe \"completed\" partition. If it's in \"retry\", then we do want to\nfind it and process it, but if it's in \"completed\", then it does not\nmatch the WHERE clause of the query and we can ignore it. Since we\ndon't know which, we can't make assumptions and must force the user to\ntry again, hence the serialisation failure error.\n\n> > Either the SELECT for jobs to do returned a wrong tuple, which was\n> > already updated, or there is some lapse in the locking.\n> >\n> > Either way it would seem to be a waste of time throwing all these\n> > errors when the tuple should not even have been selected for update\n> > and locking.\n> >\n> > I wonder if anybody knows anything about that issue? Of course you'll\n> > want to see the DDL and SQL queries, etc. but you can't really try it\n> > out unless you do some massively parallel magic.\n\nI ready mentioned why this cannot work that way [1]. If you have some\nidea on how to make it work correctly, then it would be interesting to\nhear. Otherwise, I'm sorry to say that we can't just ignore these\ntuples because it happens to suit your use case.\n\nThe solution is just to make the application retry on serialisation failures.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrDH6TQeLxTqnnAnhjrs55ru5g2_QMG=ME+WvD5MmpHQg@mail.gmail.com\n\n\n", "msg_date": "Mon, 14 Sep 2020 01:52:23 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" }, { "msg_contents": "On 2020-Sep-14, David Rowley wrote:\n\n> On Tue, 8 Sep 2020 at 06:05, Raj <[email protected]> wrote:\n> >\n> > > This would not exactly look like a bug, because the message says \"to\n> > > be locked\", so at least it's not allowing two workers to lock the same\n> > > tuple. But it seems that the skip-locked mode should not make an error\n> > > out of this, but treat it as the tuple was already locked. Why would\n> > > it want to lock the tuple (representing the job) if another worker has\n> > > already finished his UPDATE of the job to mark it as \"done\" (which is\n> > > what makes the tuple move to the \"completed\" partition.)\n> \n> (It's not very clear who wrote the above text since the quote does not\n> mention who the author is and the original email didn't appear to have\n> made it to the list)\n\nSame person.\nhttps://postgr.es/m/[email protected]\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 13 Sep 2020 11:55:12 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED" } ]
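The retry David describes can also be sketched close to the queue itself. Below is a minimal PL/pgSQL sketch of a claim function that reruns the SELECT .. FOR UPDATE SKIP LOCKED a few times when it hits a serialization failure (SQLSTATE 40001, which covers the "moved to another partition" error quoted above). The function name, the bigint type of queueid and the retry limit of 3 are assumptions; only mq.queue, its columns and its state values come from the plans quoted earlier in the thread. It also assumes READ COMMITTED, where each retried statement gets a fresh snapshot; under REPEATABLE READ or SERIALIZABLE the whole transaction has to be retried by the application, as David says.

CREATE OR REPLACE FUNCTION mq.claim_tickets(batch_size int DEFAULT 250)
RETURNS SETOF mq.queue
LANGUAGE plpgsql AS
$$
DECLARE
    claimed_ids bigint[];          -- assumes queueid is a bigint
    attempts    int := 0;
BEGIN
    LOOP
        attempts := attempts + 1;
        BEGIN
            -- Lock up to batch_size entries, skipping rows other consumers already hold.
            SELECT array_agg(queueid) INTO claimed_ids
            FROM (SELECT queueid
                    FROM mq.queue
                   WHERE state IN ('tx_active', 'tx_fail_retryable')
                     AND txobject = 'ticket'
                   LIMIT batch_size
                     FOR UPDATE SKIP LOCKED) AS picked;
            EXIT;                  -- locks acquired, leave the retry loop
        EXCEPTION
            WHEN serialization_failure THEN
                IF attempts >= 3 THEN
                    RAISE;         -- give up and let the caller retry the transaction
                END IF;
                -- otherwise loop around and run the claim again
        END;
    END LOOP;

    -- The claimed rows stay locked until the surrounding transaction ends.
    RETURN QUERY SELECT * FROM mq.queue WHERE queueid = ANY (claimed_ids);
END;
$$;

A consumer would run SELECT * FROM mq.claim_tickets(250); inside its transaction, process the rows, mark them done and commit. The exception block costs a subtransaction per attempt, so this only pays off if the 40001 errors are reasonably rare; retrying the whole transaction in the application, as suggested above, remains the more general answer.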
[ { "msg_contents": "Hi,\r\n\r\nare there any nice rules of thumb about capacity planning in relation the expected\r\namount of transactions or request per second?\r\n\r\nFor example, if I have around 100 000 transactions per second on a 5 TB database.\r\nWith what amount of Memory and CPUs/Cores and which settings would you basically\r\nStart to evaluate the performance.\r\n\r\nOr are there any other recommendations or experiences here?\r\n\r\nThanks and best regards\r\n\r\nDirk\r\n", "msg_date": "Mon, 24 Aug 2020 16:39:46 +0000", "msg_from": "Dirk Krautschick <[email protected]>", "msg_from_op": true, "msg_subject": "sizing / capacity planning tipps related to expected request or\n transactions per second" }, { "msg_contents": "Hi Dirk,\n\nThere are a bunch of other things to consider besides just TPS and size \nof database.  Since PG is process-bound, I would consider connection \nactivity: How many active connections at any one time?  This greatly \naffects your CPUs.  SQL workload is another big factor: a lot of complex \nqueries may use up or want to use up large amounts of work_mem, which \ngreatly affects your memory capacity.\n\nBunch of other stuff, but these are my top 2.\n\n\nRegards,\nMichael Vitale\n\nDirk Krautschick wrote on 8/24/2020 12:39 PM:\n> Hi,\n>\n> are there any nice rules of thumb about capacity planning in relation the expected\n> amount of transactions or request per second?\n>\n> For example, if I have around 100 000 transactions per second on a 5 TB database.\n> With what amount of Memory and CPUs/Cores and which settings would you basically\n> Start to evaluate the performance.\n>\n> Or are there any other recommendations or experiences here?\n>\n> Thanks and best regards\n>\n> Dirk\n\n\n\n", "msg_date": "Mon, 24 Aug 2020 12:49:21 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sizing / capacity planning tipps related to expected request or\n transactions per second" }, { "msg_contents": "Hi\n\npo 24. 8. 2020 v 18:40 odesílatel Dirk Krautschick <\[email protected]> napsal:\n\n> Hi,\n>\n> are there any nice rules of thumb about capacity planning in relation the\n> expected\n> amount of transactions or request per second?\n>\n> For example, if I have around 100 000 transactions per second on a 5 TB\n> database.\n> With what amount of Memory and CPUs/Cores and which settings would you\n> basically\n> Start to evaluate the performance.\n>\n\nYou have to know the duration of a typical query - if it is 1ms, then one\ncpu can do 1000 tps and you need 100 cpu. If duration is 10 ms, then you\nneed 1000 cpu.\n\nas minimum RAM for OLTP is 10% of database size, in your case 500GB RAM.\n\nAny time, when I see a request higher than 20-30K tps, then it is good to\nthink about horizontal scaling or about sharding.\n\nRegards\n\nPavel\n\n\n\n> Or are there any other recommendations or experiences here?\n>\n> Thanks and best regards\n>\n> Dirk\n>\n\nHipo 24. 8. 2020 v 18:40 odesílatel Dirk Krautschick <[email protected]> napsal:Hi,\n\nare there any nice rules of thumb about capacity planning in relation the expected\namount of transactions or request per second?\n\nFor example, if I have around 100 000 transactions per second on a 5 TB database.\nWith what amount of Memory and CPUs/Cores and which settings would you basically\nStart to evaluate the performance.You have to know the duration of a typical query - if it is 1ms, then one cpu can do 1000 tps and you need 100 cpu. 
If duration is 10 ms, then you need 1000 cpu.as minimum RAM for OLTP is 10% of database size, in your case 500GB RAM.Any time, when I see a request higher than 20-30K tps, then it is good to think about horizontal scaling or about sharding.RegardsPavel \n\nOr are there any other recommendations or experiences here?\n\nThanks and best regards\n\nDirk", "msg_date": "Mon, 24 Aug 2020 18:51:59 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sizing / capacity planning tipps related to expected request or\n transactions per second" } ]
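Pavel's rule of thumb needs one measured input, the duration of a typical query. If the pg_stat_statements extension is loaded, a rough figure can be read straight off the server; the sketch below uses the column names of releases up to 12 (total_time, in milliseconds), while from version 13 on the column is called total_exec_time:

SELECT round(sum(total_time)::numeric / nullif(sum(calls), 0), 3) AS avg_ms_per_statement,
       sum(calls) AS statements_observed
FROM pg_stat_statements;

Plugged into the rule above: at roughly 1 ms per query, 100 000 transactions per second keeps about 100 cores busy with query execution alone, and the 10% guideline for a 5 TB database points at around 500 GB of RAM.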
[ { "msg_contents": "I have a query which will more often run on DB and very slow and it is doing 'seqscan'. I was trying to optimize it by adding indexes in different ways but nothing helps.\nAny suggestions?\n\nQuery:\nEXPALIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, row_number() over (partition by serial_no order by eventtime desc) as mpos from receiving_item_delivered_received where eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'and coalesce(serial_no,'') <> '') Rec where mpos = 1;\n\nQuery Planner: \n\"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual time=2322.999..3451.783 rows=333451 loops=1)\"\"  Filter: (rec.mpos = 1)\"\"  Rows Removed by Filter: 19900\"\"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual time=2322.997..3414.384 rows=353351 loops=1)\"\"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual time=2322.983..3190.090 rows=353351 loops=1)\"\"              Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\"              Sort Method: external merge  Disk: 17424kB\"\"              ->  Seq Scan on receiving_item_delivered_received  (cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 loops=1)\"\"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\"                    Rows Removed by Filter: 55953\"\"Planning Time: 0.197 ms\"\"Execution Time: 3466.985 ms\"\nTable DDL: \nCREATE TABLE receiving_item_delivered_received(    load_dttm timestamp with time zone,    iamuniqueid character varying(200)  ,    batchid character varying(200)  ,    eventid character varying(200)  ,    eventtype character varying(200)  ,    eventversion character varying(200)  ,    eventtime timestamp with time zone,    eventproducerid character varying(200)  ,    deliverynumber character varying(200)  ,    activityid character varying(200)  ,    applicationid character varying(200)  ,    channelid character varying(200)  ,    interactionid character varying(200)  ,    sessionid character varying(200)  ,    receivingplant character varying(200)  ,    deliverydate date,    shipmentdate date,    shippingpoint character varying(200)  ,    replenishmenttype character varying(200)  ,    numberofpackages character varying(200)  ,    carrier_id character varying(200)  ,    carrier_name character varying(200)  ,    billoflading character varying(200)  ,    pro_no character varying(200)  ,    partner_id character varying(200)  ,    deliveryitem character varying(200)  ,    ponumber character varying(200)  ,    poitem character varying(200)  ,    tracking_no character varying(200)  ,    serial_no character varying(200)  ,    sto_no character varying(200)  ,    sim_no character varying(200)  ,    sku character varying(200)  ,    quantity numeric(15,2),    uom character varying(200)  );\n\n-- Index: receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx\n-- DROP INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx;\nCREATE INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  , COALESCE(serial_no, ''::character varying)  )    ;-- Index: 
receiving_item_delivered_rece_serial_no_eventtype_replenish_idx\n-- DROP INDEX receiving_item_delivered_rece_serial_no_eventtype_replenish_idx;\nCREATE INDEX receiving_item_delivered_rece_serial_no_eventtype_replenish_idx    ON receiving_item_delivered_received USING btree    (serial_no  , eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text AND COALESCE(serial_no, ''::character varying)::text <> ''::text;-- Index: receiving_item_delivered_recei_eventtype_replenishmenttype_idx1\n-- DROP INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1;\nCREATE INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text;-- Index: receiving_item_delivered_receiv_eventtype_replenishmenttype_idx\n-- DROP INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx;\nCREATE INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )    ;-- Index: receiving_item_delivered_received_eventtype_idx\n-- DROP INDEX receiving_item_delivered_received_eventtype_idx;\nCREATE INDEX receiving_item_delivered_received_eventtype_idx    ON receiving_item_delivered_received USING btree    (eventtype  )    ;-- Index: receiving_item_delivered_received_replenishmenttype_idx\n-- DROP INDEX receiving_item_delivered_received_replenishmenttype_idx;\nCREATE INDEX receiving_item_delivered_received_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (replenishmenttype  )    ;\nThanks,Rj\n I have a query which will more often run on DB and very slow and it is doing 'seqscan'. 
I was trying to optimize it by adding indexes in different ways but nothing helps.Any suggestions?Query:EXPALIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, row_number() over (partition by serial_no order by eventtime desc) as mpos from receiving_item_delivered_received where eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'and coalesce(serial_no,'') <> '') Rec where mpos = 1;Query Planner: \"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual time=2322.999..3451.783 rows=333451 loops=1)\"\"  Filter: (rec.mpos = 1)\"\"  Rows Removed by Filter: 19900\"\"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual time=2322.997..3414.384 rows=353351 loops=1)\"\"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual time=2322.983..3190.090 rows=353351 loops=1)\"\"              Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\"              Sort Method: external merge  Disk: 17424kB\"\"              ->  Seq Scan on receiving_item_delivered_received  (cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 loops=1)\"\"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\"                    Rows Removed by Filter: 55953\"\"Planning Time: 0.197 ms\"\"Execution Time: 3466.985 ms\"Table DDL: CREATE TABLE receiving_item_delivered_received(    load_dttm timestamp with time zone,    iamuniqueid character varying(200)  ,    batchid character varying(200)  ,    eventid character varying(200)  ,    eventtype character varying(200)  ,    eventversion character varying(200)  ,    eventtime timestamp with time zone,    eventproducerid character varying(200)  ,    deliverynumber character varying(200)  ,    activityid character varying(200)  ,    applicationid character varying(200)  ,    channelid character varying(200)  ,    interactionid character varying(200)  ,    sessionid character varying(200)  ,    receivingplant character varying(200)  ,    deliverydate date,    shipmentdate date,    shippingpoint character varying(200)  ,    replenishmenttype character varying(200)  ,    numberofpackages character varying(200)  ,    carrier_id character varying(200)  ,    carrier_name character varying(200)  ,    billoflading character varying(200)  ,    pro_no character varying(200)  ,    partner_id character varying(200)  ,    deliveryitem character varying(200)  ,    ponumber character varying(200)  ,    poitem character varying(200)  ,    tracking_no character varying(200)  ,    serial_no character varying(200)  ,    sto_no character varying(200)  ,    sim_no character varying(200)  ,    sku character varying(200)  ,    quantity numeric(15,2),    uom character varying(200)  );-- Index: receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx-- DROP INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx;CREATE INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  , COALESCE(serial_no, ''::character varying)  )    ;-- Index: receiving_item_delivered_rece_serial_no_eventtype_replenish_idx-- DROP INDEX receiving_item_delivered_rece_serial_no_eventtype_replenish_idx;CREATE INDEX 
receiving_item_delivered_rece_serial_no_eventtype_replenish_idx    ON receiving_item_delivered_received USING btree    (serial_no  , eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text AND COALESCE(serial_no, ''::character varying)::text <> ''::text;-- Index: receiving_item_delivered_recei_eventtype_replenishmenttype_idx1-- DROP INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1;CREATE INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text;-- Index: receiving_item_delivered_receiv_eventtype_replenishmenttype_idx-- DROP INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx;CREATE INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )    ;-- Index: receiving_item_delivered_received_eventtype_idx-- DROP INDEX receiving_item_delivered_received_eventtype_idx;CREATE INDEX receiving_item_delivered_received_eventtype_idx    ON receiving_item_delivered_received USING btree    (eventtype  )    ;-- Index: receiving_item_delivered_received_replenishmenttype_idx-- DROP INDEX receiving_item_delivered_received_replenishmenttype_idx;CREATE INDEX receiving_item_delivered_received_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (replenishmenttype  )    ;Thanks,Rj", "msg_date": "Fri, 4 Sep 2020 21:18:41 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance issue" }, { "msg_contents": "Nagaraj Raj schrieb am 04.09.2020 um 23:18:\n> I have a query which will more often run on DB and very slow and it\n> is doing 'seqscan'. 
I was trying to optimize it by adding indexes in\n> different ways but nothing helps.\n>\n> EXPALIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime\n> from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, row_number() over (partition by serial_no order by eventtime desc) as mpos\n> from receiving_item_delivered_received\n> where eventtype='LineItemdetailsReceived'\n> and replenishmenttype = 'DC2SWARRANTY'\n> and coalesce(serial_no,'') <> ''\n> ) Rec where mpos = 1;\n>\n>\n> Query Planner:\n>\n> \"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual time=2322.999..3451.783 rows=333451 loops=1)\"\n> \"  Filter: (rec.mpos = 1)\"\n> \"  Rows Removed by Filter: 19900\"\n> \"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual time=2322.997..3414.384 rows=353351 loops=1)\"\n> \"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual time=2322.983..3190.090 rows=353351 loops=1)\"\n> \"              Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\n> \"              Sort Method: external merge  Disk: 17424kB\"\n> \"              ->  Seq Scan on receiving_item_delivered_received  (cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 loops=1)\"\n> \"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\n> \"                    Rows Removed by Filter: 55953\"\n> \"Planning Time: 0.197 ms\"\n> \"Execution Time: 3466.985 ms\"\n\nThe query retrieves nearly all rows from the table 353351 of 409304 and the Seq Scan takes less than 200ms, so that's not your bottleneck.\nAdding indexes won't change that.\n\nThe majority of the time is spent in the sort step which is done on disk.\nTry to increase work_mem until the \"external merge\" disappears and is done in memory.\n\nThomas\n\n\n", "msg_date": "Fri, 4 Sep 2020 23:23:38 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "query planner:SPJe | explain.depesz.com\n\n\n| \n| \n| | \nSPJe | explain.depesz.com\n\n\n |\n\n |\n\n |\n\n\n\n\n On Friday, September 4, 2020, 02:19:06 PM PDT, Nagaraj Raj <[email protected]> wrote: \n \n I have a query which will more often run on DB and very slow and it is doing 'seqscan'. 
I was trying to optimize it by adding indexes in different ways but nothing helps.\nAny suggestions?\n\nQuery:\nEXPALIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, row_number() over (partition by serial_no order by eventtime desc) as mpos from receiving_item_delivered_received where eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'and coalesce(serial_no,'') <> '') Rec where mpos = 1;\n\nQuery Planner: \n\"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual time=2322.999..3451.783 rows=333451 loops=1)\"\"  Filter: (rec.mpos = 1)\"\"  Rows Removed by Filter: 19900\"\"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual time=2322.997..3414.384 rows=353351 loops=1)\"\"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual time=2322.983..3190.090 rows=353351 loops=1)\"\"              Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\"              Sort Method: external merge  Disk: 17424kB\"\"              ->  Seq Scan on receiving_item_delivered_received  (cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 loops=1)\"\"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\"                    Rows Removed by Filter: 55953\"\"Planning Time: 0.197 ms\"\"Execution Time: 3466.985 ms\"\nTable DDL: \nCREATE TABLE receiving_item_delivered_received(    load_dttm timestamp with time zone,    iamuniqueid character varying(200)  ,    batchid character varying(200)  ,    eventid character varying(200)  ,    eventtype character varying(200)  ,    eventversion character varying(200)  ,    eventtime timestamp with time zone,    eventproducerid character varying(200)  ,    deliverynumber character varying(200)  ,    activityid character varying(200)  ,    applicationid character varying(200)  ,    channelid character varying(200)  ,    interactionid character varying(200)  ,    sessionid character varying(200)  ,    receivingplant character varying(200)  ,    deliverydate date,    shipmentdate date,    shippingpoint character varying(200)  ,    replenishmenttype character varying(200)  ,    numberofpackages character varying(200)  ,    carrier_id character varying(200)  ,    carrier_name character varying(200)  ,    billoflading character varying(200)  ,    pro_no character varying(200)  ,    partner_id character varying(200)  ,    deliveryitem character varying(200)  ,    ponumber character varying(200)  ,    poitem character varying(200)  ,    tracking_no character varying(200)  ,    serial_no character varying(200)  ,    sto_no character varying(200)  ,    sim_no character varying(200)  ,    sku character varying(200)  ,    quantity numeric(15,2),    uom character varying(200)  );\n\n-- Index: receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx\n-- DROP INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx;\nCREATE INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  , COALESCE(serial_no, ''::character varying)  )    ;-- Index: receiving_item_delivered_rece_serial_no_eventtype_replenish_idx\n-- DROP INDEX 
receiving_item_delivered_rece_serial_no_eventtype_replenish_idx;\nCREATE INDEX receiving_item_delivered_rece_serial_no_eventtype_replenish_idx    ON receiving_item_delivered_received USING btree    (serial_no  , eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text AND COALESCE(serial_no, ''::character varying)::text <> ''::text;-- Index: receiving_item_delivered_recei_eventtype_replenishmenttype_idx1\n-- DROP INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1;\nCREATE INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text;-- Index: receiving_item_delivered_receiv_eventtype_replenishmenttype_idx\n-- DROP INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx;\nCREATE INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )    ;-- Index: receiving_item_delivered_received_eventtype_idx\n-- DROP INDEX receiving_item_delivered_received_eventtype_idx;\nCREATE INDEX receiving_item_delivered_received_eventtype_idx    ON receiving_item_delivered_received USING btree    (eventtype  )    ;-- Index: receiving_item_delivered_received_replenishmenttype_idx\n-- DROP INDEX receiving_item_delivered_received_replenishmenttype_idx;\nCREATE INDEX receiving_item_delivered_received_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (replenishmenttype  )    ;\nThanks,Rj \n\nquery planner:SPJe | explain.depesz.comSPJe | explain.depesz.com\n\n\n\n On Friday, September 4, 2020, 02:19:06 PM PDT, Nagaraj Raj <[email protected]> wrote:\n \n\n\n I have a query which will more often run on DB and very slow and it is doing 'seqscan'. 
I was trying to optimize it by adding indexes in different ways but nothing helps.Any suggestions?Query:EXPALIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, row_number() over (partition by serial_no order by eventtime desc) as mpos from receiving_item_delivered_received where eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'and coalesce(serial_no,'') <> '') Rec where mpos = 1;Query Planner: \"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual time=2322.999..3451.783 rows=333451 loops=1)\"\"  Filter: (rec.mpos = 1)\"\"  Rows Removed by Filter: 19900\"\"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual time=2322.997..3414.384 rows=353351 loops=1)\"\"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual time=2322.983..3190.090 rows=353351 loops=1)\"\"              Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\"              Sort Method: external merge  Disk: 17424kB\"\"              ->  Seq Scan on receiving_item_delivered_received  (cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 loops=1)\"\"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\"                    Rows Removed by Filter: 55953\"\"Planning Time: 0.197 ms\"\"Execution Time: 3466.985 ms\"Table DDL: CREATE TABLE receiving_item_delivered_received(    load_dttm timestamp with time zone,    iamuniqueid character varying(200)  ,    batchid character varying(200)  ,    eventid character varying(200)  ,    eventtype character varying(200)  ,    eventversion character varying(200)  ,    eventtime timestamp with time zone,    eventproducerid character varying(200)  ,    deliverynumber character varying(200)  ,    activityid character varying(200)  ,    applicationid character varying(200)  ,    channelid character varying(200)  ,    interactionid character varying(200)  ,    sessionid character varying(200)  ,    receivingplant character varying(200)  ,    deliverydate date,    shipmentdate date,    shippingpoint character varying(200)  ,    replenishmenttype character varying(200)  ,    numberofpackages character varying(200)  ,    carrier_id character varying(200)  ,    carrier_name character varying(200)  ,    billoflading character varying(200)  ,    pro_no character varying(200)  ,    partner_id character varying(200)  ,    deliveryitem character varying(200)  ,    ponumber character varying(200)  ,    poitem character varying(200)  ,    tracking_no character varying(200)  ,    serial_no character varying(200)  ,    sto_no character varying(200)  ,    sim_no character varying(200)  ,    sku character varying(200)  ,    quantity numeric(15,2),    uom character varying(200)  );-- Index: receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx-- DROP INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx;CREATE INDEX receiving_item_delivered_rece_eventtype_replenishmenttype_c_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  , COALESCE(serial_no, ''::character varying)  )    ;-- Index: receiving_item_delivered_rece_serial_no_eventtype_replenish_idx-- DROP INDEX receiving_item_delivered_rece_serial_no_eventtype_replenish_idx;CREATE INDEX 
receiving_item_delivered_rece_serial_no_eventtype_replenish_idx    ON receiving_item_delivered_received USING btree    (serial_no  , eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text AND COALESCE(serial_no, ''::character varying)::text <> ''::text;-- Index: receiving_item_delivered_recei_eventtype_replenishmenttype_idx1-- DROP INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1;CREATE INDEX receiving_item_delivered_recei_eventtype_replenishmenttype_idx1    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )        WHERE eventtype::text = 'LineItemdetailsReceived'::text AND replenishmenttype::text = 'DC2SWARRANTY'::text;-- Index: receiving_item_delivered_receiv_eventtype_replenishmenttype_idx-- DROP INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx;CREATE INDEX receiving_item_delivered_receiv_eventtype_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (eventtype  , replenishmenttype  )    ;-- Index: receiving_item_delivered_received_eventtype_idx-- DROP INDEX receiving_item_delivered_received_eventtype_idx;CREATE INDEX receiving_item_delivered_received_eventtype_idx    ON receiving_item_delivered_received USING btree    (eventtype  )    ;-- Index: receiving_item_delivered_received_replenishmenttype_idx-- DROP INDEX receiving_item_delivered_received_replenishmenttype_idx;CREATE INDEX receiving_item_delivered_received_replenishmenttype_idx    ON receiving_item_delivered_received USING btree    (replenishmenttype  )    ;Thanks,Rj", "msg_date": "Fri, 4 Sep 2020 21:24:05 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "On Fri, Sep 04, 2020 at 09:18:41PM +0000, Nagaraj Raj wrote:\n> I have a query which will more often run on DB and very slow and it is doing 'seqscan'. I was trying to optimize it by adding indexes in different ways but nothing helps.\n>Any suggestions?\n>\n\n1) It's rather difficult to read the query plan as it's mangled by your\ne-mail client. 
I recommend to check how to prevent the client from doing\nthat, or attaching the plan as a file.\n\n2) The whole query takes ~3500ms, and the seqscan only accounts for\n~200ms, so it's very clearly not the main issue.\n\n3) Most of the time is spent in sort, so the one thing you can do is\neither increasing work_mem, or adding index providing that ordering.\nEven better if you include all necessary columns to allow IOS.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 4 Sep 2020 23:36:35 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "CREATE INDEX receiving_item_delivered_received\nON receiving_item_delivered_received USING btree ( eventtype,\nreplenishmenttype, serial_no, eventtime DESC );\n\n>\nMore work_mem as Tomas suggests, but also, the above index should find the\ncandidate rows by the first two keys, and then be able to skip the sort by\nreading just that portion of the index that matches\n\neventtype='LineItemdetailsReceived'\nand replenishmenttype = 'DC2SWARRANTY'\n\nCREATE INDEX receiving_item_delivered_received ON receiving_item_delivered_received USING btree ( eventtype, replenishmenttype, serial_no, eventtime DESC );\n\nMore work_mem as Tomas suggests, but also, the above index should find the candidate rows by the first two keys, and then be able to skip the sort by reading just that portion of the index that matches eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'", "msg_date": "Fri, 4 Sep 2020 15:39:12 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Note- you may need to vacuum* the table to get full benefit of index only\nscan by updating the visibility map. I think index only scan is skipped in\nfavor of just checking visibility when the visibility map is stale.\n\n*NOT full\n\nNote- you may need to vacuum* the table to get full benefit of index only scan by updating the visibility map. 
I think index only scan is skipped in favor of just checking visibility when the visibility map is stale.*NOT full", "msg_date": "Fri, 4 Sep 2020 15:41:23 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Sorry, I have attached the wrong query planner, which executed in lower environment which has fewer resources:\nUpdated one,eVFiF | explain.depesz.com\n\n\n| \n| \n| | \neVFiF | explain.depesz.com\n\n\n |\n\n |\n\n |\n\n\n\nThanks,Rj On Friday, September 4, 2020, 02:39:57 PM PDT, Michael Lewis <[email protected]> wrote: \n \n CREATE INDEX receiving_item_delivered_received ON receiving_item_delivered_received USING btree ( eventtype, replenishmenttype, serial_no, eventtime DESC );\n \n\nMore work_mem as Tomas suggests, but also, the above index should find the candidate rows by the first two keys, and then be able to skip the sort by reading just that portion of the index that matches \n\neventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'\n \n\nSorry, I have attached the wrong query planner, which executed in lower environment which has fewer resources:Updated one,eVFiF | explain.depesz.comeVFiF | explain.depesz.comThanks,Rj\n\n\n\n On Friday, September 4, 2020, 02:39:57 PM PDT, Michael Lewis <[email protected]> wrote:\n \n\n\nCREATE INDEX receiving_item_delivered_received ON receiving_item_delivered_received USING btree ( eventtype, replenishmenttype, serial_no, eventtime DESC );\n\nMore work_mem as Tomas suggests, but also, the above index should find the candidate rows by the first two keys, and then be able to skip the sort by reading just that portion of the index that matches eventtype='LineItemdetailsReceived'and replenishmenttype = 'DC2SWARRANTY'", "msg_date": "Fri, 4 Sep 2020 21:44:14 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "\"Subquery Scan on rec (cost=1628601.89..1676580.92 rows=7381 width=41)\n(actual time=22171.986..23549.079 rows=1236042 loops=1)\" \" Filter:\n(rec.mpos = 1)\" \" Rows Removed by Filter: 228737\" \" Buffers: shared hit=45\nread=1166951\" \" I/O Timings: read=29.530\" \" -> WindowAgg\n(cost=1628601.89..1658127.45 rows=1476278 width=49) (actual\ntime=22171.983..23379.219 rows=1464779 loops=1)\" \" Buffers: shared hit=45\nread=1166951\" \" I/O Timings: read=29.530\" \" -> Sort\n(cost=1628601.89..1632292.58 rows=1476278 width=41) (actual\ntime=22171.963..22484.044 rows=1464779 loops=1)\" \" Sort Key:\nreceiving_item_delivered_received.serial_no,\nreceiving_item_delivered_received.eventtime DESC\" \" Sort Method: quicksort\nMemory: 163589kB\" \" Buffers: shared hit=45 read=1166951\" \" I/O Timings:\nread=29.530\" \" -> Gather (cost=1000.00..1477331.13 rows=1476278 width=41)\n(actual time=1.296..10428.060 rows=1464779 loops=1)\" \" Workers Planned: 2\" \"\nWorkers Launched: 2\" \" Buffers: shared hit=39 read=1166951\" \" I/O Timings:\nread=29.530\" \" -> Parallel Seq Scan on receiving_item_delivered_received\n(cost=0.00..1328703.33 rows=615116 width=41) (actual time=1.262..10150.325\nrows=488260 loops=3)\" \" Filter: (((COALESCE(serial_no, ''::character\nvarying))::text <> ''::text) AND ((eventtype)::text =\n'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text =\n'DC2SWARRANTY'::text))\" \" Rows Removed by Filter: 6906258\" \" Buffers:\nshared hit=39 read=1166951\" \" I/O Timings: read=29.530\" \"Planning Time:\n0.375 ms\" 
\"Execution Time: 23617.348 ms\"\n\n\nThat is doing a lot of reading from disk. What do you have shared_buffers\nset to? I'd expect better cache hits unless it is quite low or this is a\nquery that differs greatly from the typical work.\n\nAlso, did you try adding the index I suggested? That lowest node has 488k\nrows coming out of it after throwing away 6.9 million. I would expect an\nindex on only eventtype, replenishmenttype to be quite helpful. I don't\nassume you have tons of rows where serial_no is null.\n\n\"Subquery Scan on rec (cost=1628601.89..1676580.92 rows=7381 width=41) (actual time=22171.986..23549.079 rows=1236042 loops=1)\"\n\" Filter: (rec.mpos = 1)\"\n\" Rows Removed by Filter: 228737\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> WindowAgg (cost=1628601.89..1658127.45 rows=1476278 width=49) (actual time=22171.983..23379.219 rows=1464779 loops=1)\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Sort (cost=1628601.89..1632292.58 rows=1476278 width=41) (actual time=22171.963..22484.044 rows=1464779 loops=1)\"\n\" Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\n\" Sort Method: quicksort Memory: 163589kB\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Gather (cost=1000.00..1477331.13 rows=1476278 width=41) (actual time=1.296..10428.060 rows=1464779 loops=1)\"\n\" Workers Planned: 2\"\n\" Workers Launched: 2\"\n\" Buffers: shared hit=39 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Parallel Seq Scan on receiving_item_delivered_received (cost=0.00..1328703.33 rows=615116 width=41) (actual time=1.262..10150.325 rows=488260 loops=3)\"\n\" Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\n\" Rows Removed by Filter: 6906258\"\n\" Buffers: shared hit=39 read=1166951\"\n\" I/O Timings: read=29.530\"\n\"Planning Time: 0.375 ms\"\n\"Execution Time: 23617.348 ms\"That is doing a lot of reading from disk. What do you have shared_buffers set to? I'd expect better cache hits unless it is quite low or this is a query that differs greatly from the typical work.Also, did you try adding the index I suggested? That lowest node has 488k rows coming out of it after throwing away 6.9 million. I would expect an index on only eventtype, replenishmenttype to be quite helpful. 
I don't assume you have tons of rows where serial_no is null.", "msg_date": "Fri, 4 Sep 2020 15:55:10 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Hi Mechel,\nI added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,HaOx | explain.depesz.com\n\n\n| \n| \n| | \nHaOx | explain.depesz.com\n\n\n |\n\n |\n\n |\n\n\nMem config: \nAurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit\nvCPU = 64RAM = 512show shared_buffers = 355 GBshow work_mem = 214 MB\nshow maintenance_work_mem = 8363MBshow effective_cache_size = 355 GB\n\nThanks,Rj\n On Friday, September 4, 2020, 02:55:50 PM PDT, Michael Lewis <[email protected]> wrote: \n \n \"Subquery Scan on rec (cost=1628601.89..1676580.92 rows=7381 width=41) (actual time=22171.986..23549.079 rows=1236042 loops=1)\"\" Filter: (rec.mpos = 1)\"\" Rows Removed by Filter: 228737\"\" Buffers: shared hit=45 read=1166951\"\" I/O Timings: read=29.530\"\" -> WindowAgg (cost=1628601.89..1658127.45 rows=1476278 width=49) (actual time=22171.983..23379.219 rows=1464779 loops=1)\"\" Buffers: shared hit=45 read=1166951\"\" I/O Timings: read=29.530\"\" -> Sort (cost=1628601.89..1632292.58 rows=1476278 width=41) (actual time=22171.963..22484.044 rows=1464779 loops=1)\"\" Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\" Sort Method: quicksort Memory: 163589kB\"\" Buffers: shared hit=45 read=1166951\"\" I/O Timings: read=29.530\"\" -> Gather (cost=1000.00..1477331.13 rows=1476278 width=41) (actual time=1.296..10428.060 rows=1464779 loops=1)\"\" Workers Planned: 2\"\" Workers Launched: 2\"\" Buffers: shared hit=39 read=1166951\"\" I/O Timings: read=29.530\"\" -> Parallel Seq Scan on receiving_item_delivered_received (cost=0.00..1328703.33 rows=615116 width=41) (actual time=1.262..10150.325 rows=488260 loops=3)\"\" Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\" Rows Removed by Filter: 6906258\"\" Buffers: shared hit=39 read=1166951\"\" I/O Timings: read=29.530\"\"Planning Time: 0.375 ms\"\"Execution Time: 23617.348 ms\"\n\nThat is doing a lot of reading from disk. What do you have shared_buffers set to? I'd expect better cache hits unless it is quite low or this is a query that differs greatly from the typical work.\nAlso, did you try adding the index I suggested? That lowest node has 488k rows coming out of it after throwing away 6.9 million. I would expect an index on only eventtype, replenishmenttype to be quite helpful. I don't assume you have tons of rows where serial_no is null. 
\n\nHi Mechel,I added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,HaOx | explain.depesz.comHaOx | explain.depesz.comMem config: Aurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bitvCPU = 64RAM = 512show shared_buffers = 355 GBshow work_mem = 214 MBshow maintenance_work_mem = 8363MBshow effective_cache_size = 355 GBThanks,Rj\n\n\n\n On Friday, September 4, 2020, 02:55:50 PM PDT, Michael Lewis <[email protected]> wrote:\n \n\n\n\"Subquery Scan on rec (cost=1628601.89..1676580.92 rows=7381 width=41) (actual time=22171.986..23549.079 rows=1236042 loops=1)\"\n\" Filter: (rec.mpos = 1)\"\n\" Rows Removed by Filter: 228737\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> WindowAgg (cost=1628601.89..1658127.45 rows=1476278 width=49) (actual time=22171.983..23379.219 rows=1464779 loops=1)\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Sort (cost=1628601.89..1632292.58 rows=1476278 width=41) (actual time=22171.963..22484.044 rows=1464779 loops=1)\"\n\" Sort Key: receiving_item_delivered_received.serial_no, receiving_item_delivered_received.eventtime DESC\"\n\" Sort Method: quicksort Memory: 163589kB\"\n\" Buffers: shared hit=45 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Gather (cost=1000.00..1477331.13 rows=1476278 width=41) (actual time=1.296..10428.060 rows=1464779 loops=1)\"\n\" Workers Planned: 2\"\n\" Workers Launched: 2\"\n\" Buffers: shared hit=39 read=1166951\"\n\" I/O Timings: read=29.530\"\n\" -> Parallel Seq Scan on receiving_item_delivered_received (cost=0.00..1328703.33 rows=615116 width=41) (actual time=1.262..10150.325 rows=488260 loops=3)\"\n\" Filter: (((COALESCE(serial_no, ''::character varying))::text <> ''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND ((replenishmenttype)::text = 'DC2SWARRANTY'::text))\"\n\" Rows Removed by Filter: 6906258\"\n\" Buffers: shared hit=39 read=1166951\"\n\" I/O Timings: read=29.530\"\n\"Planning Time: 0.375 ms\"\n\"Execution Time: 23617.348 ms\"That is doing a lot of reading from disk. What do you have shared_buffers set to? I'd expect better cache hits unless it is quite low or this is a query that differs greatly from the typical work.Also, did you try adding the index I suggested? That lowest node has 488k rows coming out of it after throwing away 6.9 million. I would expect an index on only eventtype, replenishmenttype to be quite helpful. I don't assume you have tons of rows where serial_no is null.", "msg_date": "Fri, 4 Sep 2020 22:20:06 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "On Sat, 5 Sep 2020 at 10:20, Nagaraj Raj <[email protected]> wrote:\n> I added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,\n> HaOx | explain.depesz.com\n\nIn addition to that index, you could consider moving away from\nstandard SQL and use DISTINCT ON, which is specific to PostgreSQL and\nshould give you the same result.\n\nEXPLAIN ANALYZE\nSELECT DISTINCT ON (serial_no) serial_no,receivingplant,sku,r3_eventtime\nFROM receiving_item_delivered_received\nWHERE eventtype='LineItemdetailsReceived'\n AND replenishmenttype = 'DC2SWARRANTY'\n AND coalesce(serial_no,'') <> ''\nORDER BY serial_no,eventtime DESC;\n\nThe more duplicate serial_nos you have the better this one should\nperform. 
It appears you don't have too many so I don't think this\nwill be significantly faster, but it should be a bit quicker.\n\nDavid\n\n\n", "msg_date": "Sat, 5 Sep 2020 20:16:29 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "On Fri, Sep 4, 2020, 4:20 PM Nagaraj Raj <[email protected]> wrote:\n\n> Hi Mechel,\n>\n> I added the index as you suggested and the planner going through the\n> bitmap index scan,heap and the new planner is,\n> HaOx | explain.depesz.com <https://explain.depesz.com/s/HaOx>\n>\n> HaOx | explain.depesz.com\n>\n> <https://explain.depesz.com/s/HaOx>\n>\n> Mem config:\n>\n> Aurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC)\n> 4.9.3, 64-bit\n> vCPU = 64\n> RAM = 512\n> show shared_buffers = 355 GB\n> show work_mem = 214 MB\n> show maintenance_work_mem = 8363MB\n> show effective_cache_size = 355 GB\n>\n\nI'm not very familiar with Aurora, but I would certainly try the explain\nanalyze with timing OFF and verify that the total time is similar. If the\nsystem clock is slow to read, execution plans can be significantly slower\njust because of the cost to measure each step.\n\nThat sort being so slow is perplexing. Did you do the two column or four\ncolumn index I suggested?\n\nObviously it depends on your use case and how much you want to tune this\nspecific query, but you could always try a partial index matching the where\ncondition and just index the other two columns to avoid the sort.\n\nOn Fri, Sep 4, 2020, 4:20 PM Nagaraj Raj <[email protected]> wrote:\nHi Mechel,I added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,HaOx | explain.depesz.comHaOx | explain.depesz.comMem config: Aurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bitvCPU = 64RAM = 512show shared_buffers = 355 GBshow work_mem = 214 MBshow maintenance_work_mem = 8363MBshow effective_cache_size = 355 GBI'm not very familiar with Aurora, but I would certainly try the explain analyze with timing OFF and verify that the total time is similar. If the system clock is slow to read, execution plans can be significantly slower just because of the cost to measure each step.That sort being so slow is perplexing. Did you do the two column or four column index I suggested?Obviously it depends on your use case and how much you want to tune this specific query, but you could always try a partial index matching the where condition and just index the other two columns to avoid the sort.", "msg_date": "Sat, 5 Sep 2020 07:42:17 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Hi Michael,\n\nI created an index as suggested, it improved.  I was tried with partial index but the planner not using it.\n\nalso, there is no difference even with timing OFF. 
ktbv : Optimization for: plan #HaOx | explain.depesz.com\n\n\n| \n| \n| | \nktbv : Optimization for: plan #HaOx | explain.depesz.com\n\n\n |\n\n |\n\n |\n\n\n\nThanks,Rj\n\n\n On Saturday, September 5, 2020, 06:42:31 AM PDT, Michael Lewis <[email protected]> wrote: \n \n \n\nOn Fri, Sep 4, 2020, 4:20 PM Nagaraj Raj <[email protected]> wrote:\n\n Hi Mechel,\nI added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,HaOx | explain.depesz.com\n\n\n| \n| \n| | \nHaOx | explain.depesz.com\n\n\n |\n\n |\n\n |\n\n\nMem config: \nAurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit\nvCPU = 64RAM = 512show shared_buffers = 355 GBshow work_mem = 214 MB\nshow maintenance_work_mem = 8363MBshow effective_cache_size = 355 GB\n\nI'm not very familiar with Aurora, but I would certainly try the explain analyze with timing OFF and verify that the total time is similar. If the system clock is slow to read, execution plans can be significantly slower just because of the cost to measure each step.\nThat sort being so slow is perplexing. Did you do the two column or four column index I suggested?\nObviously it depends on your use case and how much you want to tune this specific query, but you could always try a partial index matching the where condition and just index the other two columns to avoid the sort. \n\nHi Michael,I created an index as suggested, it improved.  I was tried with partial index but the planner not using it.also, there is no difference even with timing OFF. ktbv : Optimization for: plan #HaOx | explain.depesz.comktbv : Optimization for: plan #HaOx | explain.depesz.comThanks,Rj\n\n\n\n On Saturday, September 5, 2020, 06:42:31 AM PDT, Michael Lewis <[email protected]> wrote:\n \n\n\nOn Fri, Sep 4, 2020, 4:20 PM Nagaraj Raj <[email protected]> wrote:\nHi Mechel,I added the index as you suggested and the planner going through the bitmap index scan,heap and the new planner is,HaOx | explain.depesz.comHaOx | explain.depesz.comMem config: Aurora PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bitvCPU = 64RAM = 512show shared_buffers = 355 GBshow work_mem = 214 MBshow maintenance_work_mem = 8363MBshow effective_cache_size = 355 GBI'm not very familiar with Aurora, but I would certainly try the explain analyze with timing OFF and verify that the total time is similar. If the system clock is slow to read, execution plans can be significantly slower just because of the cost to measure each step.That sort being so slow is perplexing. Did you do the two column or four column index I suggested?Obviously it depends on your use case and how much you want to tune this specific query, but you could always try a partial index matching the where condition and just index the other two columns to avoid the sort.", "msg_date": "Sat, 5 Sep 2020 21:49:48 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance issue" }, { "msg_contents": "Hi %,\r\n\r\nin order to be able to readjust the effects of the stored procedure and, if necessary, \r\nto save turnaround times, different requests can be concatenated using semicolons for\r\nbundling several statements in one request. 
We did some tests against a postgres cluster.\r\n\r\nThe results in terms of optimizations are as follows:\r\n\r\n\r\nBatchsize | clients| count Queries | average s/query| comment\r\n--------------|---------|----------------------|----------------------|-------------------\r\n1\t | 1\t | 15.86k\t | 2.24ms\t | \r\n10\t | 1\t | 31.80k\t | 332us\t | \r\n25\t | 1\t | 31.75k\t | 312us\t | \r\n50\t | 1\t | 32.00k\t | 280us\t | \r\n100\t | 1\t | 32.00k\t | 286us\t | \r\n \t | \t | \t\t | | \r\n1\t | 2\t | 57.1k\t | 733us\t | Drop to 30k after some time!!\r\n10\t | 2\t | 63.6k\t | 323us\t | \r\n25\t | 2\t | 63.5k\t | 308us\t | \r\n50\t | 2\t | 64k\t | 293us\t | \r\n100\t | 2\t | 67.2k\t | 290us\t | \r\n | | | \t | \r\n1\t | 10\t | 158.6k\t | 2.15ms\t | \r\n10\t | 10\t | 298.9k\t | 383us\t | Drop to ~200k!!\r\n25\t | 10\t | 225k\t | 1.16ms\t | \r\n50\t | 10\t | 192k\t | 1.55ms\t | \r\n100\t | 10\t | 201.6k\t | 1.44ms\t | \r\n \t | \t | \t | | \r\n10\t | 50\t | 800k | 2.2ms\t | \r\n\r\n\r\nIt seems to be saturated here at around 200k requests per minute, \r\nthe question remains why this is so.\r\n\r\nDoes anyone has experience with something similar or are there some\r\nhints about how to optimize the postgres cluster for such bundled statements?\r\n\r\nThanks and best regards\r\n\r\nDirk\r\n", "msg_date": "Tue, 8 Sep 2020 10:29:18 +0000", "msg_from": "Dirk Krautschick <[email protected]>", "msg_from_op": false, "msg_subject": "AW: Query performance issue" } ]
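A minimal sketch of the partial index Michael Lewis suggests earlier in this thread ("a partial index matching the where condition and just index the other two columns to avoid the sort"), using only table, column and constant names that appear in the thread. The index name is invented here, and whether the planner actually uses a partial index depends on the query's WHERE clause implying the index predicate, which is a common reason a partial index that was tried gets ignored:

CREATE INDEX receiving_item_dlv_rcv_warranty_idx   -- hypothetical name
    ON receiving_item_delivered_received (serial_no, eventtime DESC)
    WHERE eventtype = 'LineItemdetailsReceived'
      AND replenishmenttype = 'DC2SWARRANTY'
      AND COALESCE(serial_no, '') <> '';

Because the key columns match the sort key in the plan above (serial_no, eventtime DESC), such an index could also let the DISTINCT ON variant David Rowley posted read the rows already ordered instead of quicksorting ~1.46M rows in memory.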
[ { "msg_contents": "Update: Better title and format corrections\r\n\r\nHi %,\r\n\r\nin order to be able to readjust the effects of the stored procedure and, if necessary, to save turnaround times, different requests can be concatenated using semicolons for bundling several statements in one request. We did some tests against a postgres cluster.\r\n\r\nThe results in terms of optimizations are as follows:\r\n\r\n\r\nBatchsize | clients| count Queries | average s/query| comment\r\n--------------|---------|----------------------|----------------------|-\r\n1\t | 1\t | 15.86k\t | 2.24ms\t | \r\n10\t | 1\t | 31.80k\t | 332us\t | \r\n25\t | 1\t | 31.75k\t | 312us\t | \r\n50\t | 1\t | 32.00k\t | 280us\t | \r\n100\t | 1\t | 32.00k\t | 286us\t | \r\n \t | \t | \t\t | | \r\n1\t | 2\t | 57.1k\t | 733us\t | Drop to 30k after some time!!\r\n10\t | 2\t | 63.6k\t | 323us\t | \r\n25\t | 2\t | 63.5k\t | 308us\t | \r\n50\t | 2\t | 64k\t | 293us\t | \r\n100\t | 2\t | 67.2k\t | 290us\t | \r\n | | | \t | \r\n1\t | 10\t | 158.6k\t | 2.15ms\t | \r\n10\t | 10\t | 298.9k\t | 383us\t | Drop to ~200k!!\r\n25\t | 10\t | 225k\t | 1.16ms\t | \r\n50\t | 10\t | 192k\t | 1.55ms\t | \r\n100\t | 10\t | 201.6k\t | 1.44ms\t | \r\n \t | \t | \t | | \r\n10\t | 50\t | 800k | 2.2ms\t | \r\n\r\n\r\nIt seems to be saturated here at around 200k requests per minute, the question remains why this is so.\r\n\r\nDoes anyone has experience with something similar or are there some hints about how to optimize the postgres cluster for such bundled statements?\r\n\r\nThanks and best regards\r\n\r\nDirk\r\n", "msg_date": "Tue, 8 Sep 2020 10:30:50 +0000", "msg_from": "Dirk Krautschick <[email protected]>", "msg_from_op": true, "msg_subject": "Query Performance in bundled requests" }, { "msg_contents": "On Tue, Sep 08, 2020 at 10:30:50AM +0000, Dirk Krautschick wrote:\n> Update: Better title and format corrections\n> \n> Hi %,\n> \n> in order to be able to readjust the effects of the stored procedure and, if necessary, to save turnaround times, different requests can be concatenated using semicolons for bundling several statements in one request. We did some tests against a postgres cluster.\n> \n> The results in terms of optimizations are as follows:\n> \n> \n> Batchsize | clients| count Queries | average s/query| comment\n> --------------|---------|----------------------|----------------------|-\n> 1\t | 1\t | 15.86k\t | 2.24ms\t | \n> 10\t | 1\t | 31.80k\t | 332us\t | \n> 25\t | 1\t | 31.75k\t | 312us\t | \n> 50\t | 1\t | 32.00k\t | 280us\t | \n\nI guess you're looking at the minimum of 280us.\n\n; 1/(280e-6) * 60\n ~214285.71428571428571428571\n\n> the question remains why this is so.\n\nYou can't expect it to go a billion times faster just by putting a billion\nqueries in one request, and at 50 batches it looks like you've hit the next\nperformance bottleneck. Whether that's CPU / IO / network / locks / RAM /\nplanner / context switches / logging / ??? remains to be seen.\n\n> Does anyone has experience with something similar or are there some\n> hints about how to optimize the postgres cluster for such bundled statements?\n\nI think at this step you want to optimize for what the statements are doing,\nnot for the statements themselves. Could you send a query plan for the stored\nprocedure ?\n\nAlso, you'd maybe want to think if there's a way you can avoid making 100s of\n1000s of requests per second, rather than trying to optimize for it. 
Can you\nmake another stored procedure which handles N requests rather than calling this\nSP N times ? There's no guarantee that won't hit the same or other bottleneck,\nuntil you see what that is.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Sep 2020 05:49:22 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance in bundled requests" } ]
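The set-based alternative Justin Pryzby raises above (one stored procedure that handles N requests rather than calling the procedure N times) could look roughly like the sketch below. The thread does not show the actual procedure, so process_one(...) is a hypothetical stand-in for the per-item logic; only the batching pattern matters here, i.e. one round trip and one top-level statement per batch instead of one per item:

CREATE OR REPLACE FUNCTION process_batch(items bigint[]) RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    item bigint;
BEGIN
    FOREACH item IN ARRAY items LOOP
        PERFORM process_one(item);   -- placeholder for the real per-item procedure
    END LOOP;
END;
$$;

-- one request for a whole batch instead of N separate requests:
SELECT process_batch(ARRAY[1001, 1002, 1003]);

If the per-item work can be rewritten as a single statement over unnest(items) that is usually faster still, but even the loop form removes the per-request network and parse overhead, which is one plausible contributor to the roughly 200k queries/minute ceiling reported in the table above.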
[ { "msg_contents": "Hi,\nWe have an application where one of the APIs calling queries(attached) is\nspiking the CPU to 100% during load testing.\nHowever, queries are making use of indexes(Bitmap Index and Bitmap Heap\nscan though). When run separately on DB queries hardly take less than 200\nms. Is CPU spiking due to Bitmap Heap Scan?\nThese queries are being called thousands of times. Application team says\nthey have handled connection pooling from the Application side. So there is\nno connection pooling here from DB side. Current db instance size is\n\"db.m4.4xlarge\"\n64 GB RAM 16 vCPU\".\nThe Application dev team has primary keys and foreign keys on tables so\nthey are unable to partition the tables as well due to limitations of\npostgres partitioning. Columns in WHERE clauses are not constant in all\nqueries to decide partition keys.\n\n1. Does DB need more CPU considering this kind of load?\n2. Can the query be tuned further? It is already using indexes(Bitmap\nthough).\n3. Will connection pooling resolve the CPU Spike issues?\n\nAlso pasting Query and plans below.\n\n----------------------exampleCount 1. Without\ninternalexamplecode-----------------------\n\nlmp_examples=> explain analyze with exampleCount as ( select\nexamplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\nj.facilitycode in ('ABCD') and j.internalexamplecode in\n('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\ngroup by j.examplestatuscode)\nlmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count\nfrom exampleCount jc right outer join examplestatus js on\njc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n\n\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\ntime=88.847..88.850 rows=9 loops=1)\n Group Key: js.examplestatuscode\n CTE examplecount\n -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\ntime=88.803..88.805 rows=5 loops=1)\n Group Key: j.examplestatuscode\n -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\nrows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\nANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n'2020-08-19 00:00:00'::timestamp without time zone)) OR\n(examplestartdatetime IS NULL))\n Filter: (((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\nANY ('{005,006,007,005}'::text[])))\n Rows Removed by Filter: 3\n Heap Blocks: exact=18307\n -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n(actual time=15.707..15.707 rows=0 loops=1)\n -> Bitmap Index Scan on example_list9_idx\n(cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702\nrows=62851 loops=1)\n Index Cond: (((countrycode)::text =\n'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n((internalexamplecode)::text = ANY 
('{005,006,007,005}'::text[])) AND\n(examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\nzone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\ntime zone))\n -> Bitmap Index Scan on example_list10_idx\n(cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n Index Cond: (examplestartdatetime IS NULL)\n -> Hash Left Join (cost=0.13..1.29 rows=9 width=4) (actual\ntime=88.831..88.840 rows=9 loops=1)\n Hash Cond: ((js.examplestatuscode)::text =\n(jc.examplestatuscode)::text)\n -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n(actual time=0.004..0.007 rows=9 loops=1)\n -> Hash (cost=0.08..0.08 rows=4 width=16) (actual\ntime=88.817..88.817 rows=5 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\nwidth=16) (actual time=88.807..88.812 rows=5 loops=1)\n Planning Time: 0.979 ms\n Execution Time: 89.036 ms\n(23 rows)\n\n\n----------------exampleCount 2. With\ninternalexamplecode---------------------------------\n\n\nlmp_examples=> explain analyze with exampleCount as ( select\nexamplestatuscode,count(1) stat_count from example j where 1=1 and\nj.countrycode = 'AD' and j.facilitycode in ('ABCD') and\nj.internalexamplecode in ('005','006','007','005') and\n((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19\n00:00:00' ) or j.examplestartdatetime IS NULL ) group by\nj.examplestatuscode)\nlmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0)\nstat_count from exampleCount jc right outer join examplestatus js on\njc.examplestatuscode=js.examplestatuscode;\n\n\n QUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=79453.94..79455.10 rows=9 width=12) (actual\ntime=89.660..89.669 rows=9 loops=1)\n Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)\n CTE examplecount\n -> HashAggregate (cost=79453.77..79453.81 rows=4 width=12) (actual\ntime=89.638..89.640 rows=5 loops=1)\n Group Key: j.examplestatuscode\n -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\nrows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)\n Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\nANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n'2020-08-19 00:00:00'::timestamp without time zone)) OR\n(examplestartdatetime IS NULL))\n Filter: (((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\nANY ('{005,006,007,005}'::text[])))\n Rows Removed by Filter: 3\n Heap Blocks: exact=18307\n -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n(actual time=15.483..15.483 rows=0 loops=1)\n -> Bitmap Index Scan on example_list9_idx\n(cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478\nrows=62851 loops=1)\n Index Cond: (((countrycode)::text =\n'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n(examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\nzone) 
AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\ntime zone))\n -> Bitmap Index Scan on example_list10_idx\n(cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n Index Cond: (examplestartdatetime IS NULL)\n -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n(actual time=0.003..0.005 rows=9 loops=1)\n -> Hash (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651\nrows=5 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4 width=24)\n(actual time=89.641..89.647 rows=5 loops=1)\n Planning Time: 0.470 ms\n Execution Time: 89.737 ms\n\n------------------------exampleSelect-----------------------------------\n\n\nlmp_examples=> explain analyze select j.id from example j where 1=1 and\nj.countrycode = 'AD' and j.facilitycode in ('ABCD') and\nj.examplestatuscode in ('101') and j.internalexamplecode in\n('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)\nORDER BY createddate DESC limit 10;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=71286.65..71286.68 rows=10 width=12) (actual\ntime=47.351..47.359 rows=10 loops=1)\n -> Sort (cost=71286.65..71335.31 rows=19462 width=12) (actual\ntime=47.349..47.352 rows=10 loops=1)\n Sort Key: createddate DESC\n Sort Method: top-N heapsort Memory: 25kB\n -> Bitmap Heap Scan on example j (cost=1176.77..70866.09\nrows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)\n Recheck Cond: (((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n'101'::text) AND ((internalexamplecode)::text = ANY\n('{005,006,007,005}'::text[])))\n Filter: (((examplestartdatetime >= '2020-05-18\n00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n'2020-08-19 00:00:00'::timestamp without time zone)) OR\n(examplestartdatetime IS NULL))\n Rows Removed by Filter: 38724\n Heap Blocks: exact=20923\n -> Bitmap Index Scan on example_list1_idx\n(cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939\nrows=41254 loops=1)\n Index Cond: (((countrycode)::text = 'AD'::text) AND\n((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n'101'::text) AND ((internalexamplecode)::text = ANY\n('{005,006,007,005}'::text[])))\n Planning Time: 0.398 ms\n Execution Time: 47.416 ms\n\nRegards,\nAditya.\n\nHi,We have an application where one of the APIs calling queries(attached) is spiking the CPU to 100% during load testing.However, queries are making use of indexes(Bitmap Index and Bitmap Heap scan though). When run separately on DB queries hardly take less than 200 ms. Is CPU spiking due to Bitmap Heap Scan?These queries are being called thousands of times. Application team says they have handled connection pooling from the Application side. So there is no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\" 64 GB RAM 16 vCPU\".  The Application dev team has primary keys and foreign keys on tables so they are unable to partition the tables as well due to limitations of postgres partitioning. Columns in WHERE clauses are not constant in all queries to decide partition keys.1. Does DB need more CPU considering this kind of load? 2. 
Can the query be tuned further? It is already using indexes(Bitmap though).3. Will connection pooling resolve the CPU Spike issues?Also pasting Query and plans below.----------------------exampleCount 1. Without internalexamplecode-----------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=79353.80..79353.89 rows=9 width=12) (actual time=88.847..88.850 rows=9 loops=1)   Group Key: js.examplestatuscode   CTE examplecount     ->  HashAggregate  (cost=79352.42..79352.46 rows=4 width=4) (actual time=88.803..88.805 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.707..15.707 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Hash Left Join  (cost=0.13..1.29 rows=9 width=4) (actual time=88.831..88.840 rows=9 loops=1)         Hash 
Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)         ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.004..0.007 rows=9 loops=1)         ->  Hash  (cost=0.08..0.08 rows=4 width=16) (actual time=88.817..88.817 rows=5 loops=1)               Buckets: 1024  Batches: 1  Memory Usage: 9kB               ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=16) (actual time=88.807..88.812 rows=5 loops=1) Planning Time: 0.979 ms Execution Time: 89.036 ms(23 rows)----------------exampleCount 2. With internalexamplecode---------------------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode,count(1) stat_count from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=79453.94..79455.10 rows=9 width=12) (actual time=89.660..89.669 rows=9 loops=1)   Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)   CTE examplecount     ->  HashAggregate  (cost=79453.77..79453.81 rows=4 width=12) (actual time=89.638..89.640 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.483..15.483 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND 
(examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.003..0.005 rows=9 loops=1)   ->  Hash  (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651 rows=5 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=24) (actual time=89.641..89.647 rows=5 loops=1) Planning Time: 0.470 ms Execution Time: 89.737 ms------------------------exampleSelect-----------------------------------lmp_examples=> explain analyze select j.id from example j where 1=1  and j.countrycode = 'AD'  and j.facilitycode in ('ABCD') and j.examplestatuscode in ('101') and j.internalexamplecode in ('005','006','007','005')  and ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)  ORDER BY createddate DESC limit 10;                                                                                                          QUERY PLAN                                                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=71286.65..71286.68 rows=10 width=12) (actual time=47.351..47.359 rows=10 loops=1)   ->  Sort  (cost=71286.65..71335.31 rows=19462 width=12) (actual time=47.349..47.352 rows=10 loops=1)         Sort Key: createddate DESC         Sort Method: top-N heapsort  Memory: 25kB         ->  Bitmap Heap Scan on example j  (cost=1176.77..70866.09 rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)               Recheck Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))               Filter: (((examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))               Rows Removed by Filter: 38724               Heap Blocks: exact=20923               ->  Bitmap Index Scan on example_list1_idx  (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939 rows=41254 loops=1)                     Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[]))) Planning Time: 0.398 ms Execution Time: 47.416 msRegards,Aditya.", "msg_date": "Tue, 8 Sep 2020 19:03:26 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "AWS RDS PostgreSQL CPU Spiking to 100%" }, { "msg_contents": "út 8. 9. 2020 v 15:33 odesílatel aditya desai <[email protected]> napsal:\n\n> Hi,\n> We have an application where one of the APIs calling queries(attached) is\n> spiking the CPU to 100% during load testing.\n> However, queries are making use of indexes(Bitmap Index and Bitmap Heap\n> scan though). When run separately on DB queries hardly take less than 200\n> ms. 
Is CPU spiking due to Bitmap Heap Scan?\n> These queries are being called thousands of times. Application team says\n> they have handled connection pooling from the Application side. So there is\n> no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\"\n> 64 GB RAM 16 vCPU\".\n> The Application dev team has primary keys and foreign keys on tables so\n> they are unable to partition the tables as well due to limitations of\n> postgres partitioning. Columns in WHERE clauses are not constant in all\n> queries to decide partition keys.\n>\n>\nif you have a lot of connection/disconnection per sec (more than ten or\ntwenty), then connection pooling can be a significant win.\n\nOne symptom of this issue can be high cpu.\n\nRegards\n\nPavel\n\n\n\n> 1. Does DB need more CPU considering this kind of load?\n> 2. Can the query be tuned further? It is already using indexes(Bitmap\n> though).\n> 3. Will connection pooling resolve the CPU Spike issues?\n>\n> Also pasting Query and plans below.\n>\n> ----------------------exampleCount 1. Without\n> internalexamplecode-----------------------\n>\n> lmp_examples=> explain analyze with exampleCount as ( select\n> examplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\n> j.facilitycode in ('ABCD') and j.internalexamplecode in\n> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n> 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\n> group by j.examplestatuscode)\n> lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count\n> from exampleCount jc right outer join examplestatus js on\n> jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\n> time=88.847..88.850 rows=9 loops=1)\n> Group Key: js.examplestatuscode\n> CTE examplecount\n> -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\n> time=88.803..88.805 rows=5 loops=1)\n> Group Key: j.examplestatuscode\n> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n> rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Filter: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])))\n> Rows Removed by Filter: 3\n> Heap Blocks: exact=18307\n> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n> (actual time=15.707..15.707 rows=0 loops=1)\n> -> Bitmap Index Scan on example_list9_idx\n> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702\n> rows=62851 loops=1)\n> Index Cond: (((countrycode)::text =\n> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n> ((internalexamplecode)::text = ANY 
('{005,006,007,005}'::text[])) AND\n> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n> time zone))\n> -> Bitmap Index Scan on example_list10_idx\n> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n> Index Cond: (examplestartdatetime IS NULL)\n> -> Hash Left Join (cost=0.13..1.29 rows=9 width=4) (actual\n> time=88.831..88.840 rows=9 loops=1)\n> Hash Cond: ((js.examplestatuscode)::text =\n> (jc.examplestatuscode)::text)\n> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9\n> width=4) (actual time=0.004..0.007 rows=9 loops=1)\n> -> Hash (cost=0.08..0.08 rows=4 width=16) (actual\n> time=88.817..88.817 rows=5 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n> width=16) (actual time=88.807..88.812 rows=5 loops=1)\n> Planning Time: 0.979 ms\n> Execution Time: 89.036 ms\n> (23 rows)\n>\n>\n> ----------------exampleCount 2. With\n> internalexamplecode---------------------------------\n>\n>\n> lmp_examples=> explain analyze with exampleCount as ( select\n> examplestatuscode,count(1) stat_count from example j where 1=1 and\n> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n> j.internalexamplecode in ('005','006','007','005') and\n> ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19\n> 00:00:00' ) or j.examplestartdatetime IS NULL ) group by\n> j.examplestatuscode)\n> lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0)\n> stat_count from exampleCount jc right outer join examplestatus js on\n> jc.examplestatuscode=js.examplestatuscode;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=79453.94..79455.10 rows=9 width=12) (actual\n> time=89.660..89.669 rows=9 loops=1)\n> Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)\n> CTE examplecount\n> -> HashAggregate (cost=79453.77..79453.81 rows=4 width=12) (actual\n> time=89.638..89.640 rows=5 loops=1)\n> Group Key: j.examplestatuscode\n> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n> rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)\n> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Filter: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])))\n> Rows Removed by Filter: 3\n> Heap Blocks: exact=18307\n> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n> (actual time=15.483..15.483 rows=0 loops=1)\n> -> Bitmap Index Scan on example_list9_idx\n> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478\n> rows=62851 loops=1)\n> Index Cond: (((countrycode)::text =\n> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n> ((internalexamplecode)::text = ANY 
('{005,006,007,005}'::text[])) AND\n> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n> time zone))\n> -> Bitmap Index Scan on example_list10_idx\n> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n> Index Cond: (examplestartdatetime IS NULL)\n> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n> (actual time=0.003..0.005 rows=9 loops=1)\n> -> Hash (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651\n> rows=5 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n> width=24) (actual time=89.641..89.647 rows=5 loops=1)\n> Planning Time: 0.470 ms\n> Execution Time: 89.737 ms\n>\n> ------------------------exampleSelect-----------------------------------\n>\n>\n> lmp_examples=> explain analyze select j.id from example j where 1=1 and\n> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n> j.examplestatuscode in ('101') and j.internalexamplecode in\n> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n> 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)\n> ORDER BY createddate DESC limit 10;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=71286.65..71286.68 rows=10 width=12) (actual\n> time=47.351..47.359 rows=10 loops=1)\n> -> Sort (cost=71286.65..71335.31 rows=19462 width=12) (actual\n> time=47.349..47.352 rows=10 loops=1)\n> Sort Key: createddate DESC\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Bitmap Heap Scan on example j (cost=1176.77..70866.09\n> rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)\n> Recheck Cond: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n> '101'::text) AND ((internalexamplecode)::text = ANY\n> ('{005,006,007,005}'::text[])))\n> Filter: (((examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Rows Removed by Filter: 38724\n> Heap Blocks: exact=20923\n> -> Bitmap Index Scan on example_list1_idx\n> (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939\n> rows=41254 loops=1)\n> Index Cond: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n> '101'::text) AND ((internalexamplecode)::text = ANY\n> ('{005,006,007,005}'::text[])))\n> Planning Time: 0.398 ms\n> Execution Time: 47.416 ms\n>\n> Regards,\n> Aditya.\n>\n\nút 8. 9. 2020 v 15:33 odesílatel aditya desai <[email protected]> napsal:Hi,We have an application where one of the APIs calling queries(attached) is spiking the CPU to 100% during load testing.However, queries are making use of indexes(Bitmap Index and Bitmap Heap scan though). When run separately on DB queries hardly take less than 200 ms. Is CPU spiking due to Bitmap Heap Scan?These queries are being called thousands of times. Application team says they have handled connection pooling from the Application side. So there is no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\" 64 GB RAM 16 vCPU\".  
The Application dev team has primary keys and foreign keys on tables so they are unable to partition the tables as well due to limitations of postgres partitioning. Columns in WHERE clauses are not constant in all queries to decide partition keys.if you have a lot of connection/disconnection per sec (more than ten or twenty), then connection pooling can be a significant win. One symptom of this issue can be high cpu.RegardsPavel 1. Does DB need more CPU considering this kind of load? 2. Can the query be tuned further? It is already using indexes(Bitmap though).3. Will connection pooling resolve the CPU Spike issues?Also pasting Query and plans below.----------------------exampleCount 1. Without internalexamplecode-----------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=79353.80..79353.89 rows=9 width=12) (actual time=88.847..88.850 rows=9 loops=1)   Group Key: js.examplestatuscode   CTE examplecount     ->  HashAggregate  (cost=79352.42..79352.46 rows=4 width=4) (actual time=88.803..88.805 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.707..15.707 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY 
('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Hash Left Join  (cost=0.13..1.29 rows=9 width=4) (actual time=88.831..88.840 rows=9 loops=1)         Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)         ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.004..0.007 rows=9 loops=1)         ->  Hash  (cost=0.08..0.08 rows=4 width=16) (actual time=88.817..88.817 rows=5 loops=1)               Buckets: 1024  Batches: 1  Memory Usage: 9kB               ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=16) (actual time=88.807..88.812 rows=5 loops=1) Planning Time: 0.979 ms Execution Time: 89.036 ms(23 rows)----------------exampleCount 2. With internalexamplecode---------------------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode,count(1) stat_count from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=79453.94..79455.10 rows=9 width=12) (actual time=89.660..89.669 rows=9 loops=1)   Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)   CTE examplecount     ->  HashAggregate  (cost=79453.77..79453.81 rows=4 width=12) (actual time=89.638..89.640 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  
(cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.483..15.483 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.003..0.005 rows=9 loops=1)   ->  Hash  (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651 rows=5 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=24) (actual time=89.641..89.647 rows=5 loops=1) Planning Time: 0.470 ms Execution Time: 89.737 ms------------------------exampleSelect-----------------------------------lmp_examples=> explain analyze select j.id from example j where 1=1  and j.countrycode = 'AD'  and j.facilitycode in ('ABCD') and j.examplestatuscode in ('101') and j.internalexamplecode in ('005','006','007','005')  and ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)  ORDER BY createddate DESC limit 10;                                                                                                          QUERY PLAN                                                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=71286.65..71286.68 rows=10 width=12) (actual time=47.351..47.359 rows=10 loops=1)   ->  Sort  (cost=71286.65..71335.31 rows=19462 width=12) (actual time=47.349..47.352 rows=10 loops=1)         Sort Key: createddate DESC         Sort Method: top-N heapsort  Memory: 25kB         ->  Bitmap Heap Scan on example j  (cost=1176.77..70866.09 rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)               Recheck Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))               Filter: (((examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))               Rows Removed by Filter: 38724               Heap Blocks: exact=20923               ->  Bitmap Index Scan on example_list1_idx  (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939 rows=41254 loops=1)                     Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[]))) Planning Time: 0.398 ms Execution Time: 47.416 msRegards,Aditya.", "msg_date": "Tue, 8 Sep 2020 16:36:02 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": 
false, "msg_subject": "Re: AWS RDS PostgreSQL CPU Spiking to 100%" }, { "msg_contents": "On Tue, Sep 8, 2020 at 9:33 AM aditya desai <[email protected]> wrote:\n\n> Hi,\n> We have an application where one of the APIs calling queries(attached) is\n> spiking the CPU to 100% during load testing.\n> However, queries are making use of indexes(Bitmap Index and Bitmap Heap\n> scan though).\n>\n\nThe CPU is there to be used. Anything will use 100% of the CPU unless it\nruns into some other bottleneck first.\n\nThese queries are being called thousands of times.\n>\n\nOver what time period? At what concurrency level?\n\n\n\n> Application team says they have handled connection pooling from the\n> Application side.\n>\n\nDid they do it correctly? Are you seeing a lot of connections churning\nthrough?\n\n\n> 1. Does DB need more CPU considering this kind of load?\n>\n\nIs it currently running fast enough, or does it need to be faster?\n\n\n> 2. Can the query be tuned further?\n>\n\nThe query you show can't possibly generate the plan you show, so there is\nno way to know that.\n\n\n> 3. Will connection pooling resolve the CPU Spike issues?\n>\n\nNot if the app-side pooling was done correctly.\n\n\n>\n> Also pasting Query and plans below.\n>\n> ----------------------exampleCount 1. Without\n> internalexamplecode-----------------------\n>\n> lmp_examples=> explain analyze with exampleCount as ( select\n> examplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\n> j.facilitycode in ('ABCD') and j.internalexamplecode in\n> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n> 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\n> group by j.examplestatuscode)\n> lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count\n> from exampleCount jc right outer join examplestatus js on\n> jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\n> time=88.847..88.850 rows=9 loops=1)\n> Group Key: js.examplestatuscode\n> CTE examplecount\n> -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\n> time=88.803..88.805 rows=5 loops=1)\n> Group Key: j.examplestatuscode\n> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n> rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n>\n\nNote that the parenthesization of the OR condition is different between the\nrecheck, and the query itself. So I think that either the query or the\nplan has not been presented accurately. Please double check them.\n\nAlso, what version of PostgreSQL are you using? 
In v12, the CTE gets\noptimized away entirely.\n\nCheers,\n\nJeff\n\n>\n\nOn Tue, Sep 8, 2020 at 9:33 AM aditya desai <[email protected]> wrote:Hi,We have an application where one of the APIs calling queries(attached) is spiking the CPU to 100% during load testing.However, queries are making use of indexes(Bitmap Index and Bitmap Heap scan though). The CPU is there to be used.  Anything will use 100% of the CPU unless it runs into some other bottleneck first.These queries are being called thousands of times. Over what time period?  At what concurrency level? Application team says they have handled connection pooling from the Application side.Did they do it correctly?  Are you seeing a lot of connections churning through?1. Does DB need more CPU considering this kind of load? Is it currently running fast enough, or does it need to be faster? 2. Can the query be tuned further?The query you show can't possibly generate the plan you show, so there is no way to know that. 3. Will connection pooling resolve the CPU Spike issues?Not if the app-side pooling was done correctly. Also pasting Query and plans below.----------------------exampleCount 1. Without internalexamplecode-----------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=79353.80..79353.89 rows=9 width=12) (actual time=88.847..88.850 rows=9 loops=1)   Group Key: js.examplestatuscode   CTE examplecount     ->  HashAggregate  (cost=79352.42..79352.46 rows=4 width=4) (actual time=88.803..88.805 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))Note that the parenthesization of the OR condition is different between the recheck, and the query itself.  So I think that either the query or the plan has not been presented accurately.  
Please double check them.Also, what version of PostgreSQL are you using?  In v12, the CTE gets optimized away entirely.Cheers,Jeff", "msg_date": "Tue, 8 Sep 2020 12:05:24 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AWS RDS PostgreSQL CPU Spiking to 100%" }, { "msg_contents": ">\n>\n> Hi,\n> We have an application where one of the APIs calling queries(attached) is\n> spiking the CPU to 100% during load testing.\n> However, queries are making use of indexes(Bitmap Index and Bitmap Heap\n> scan though). When run separately on DB queries hardly take less than 200\n> ms. Is CPU spiking due to Bitmap Heap Scan?\n> These queries are being called thousands of times. Application team says\n> they have handled connection pooling from the Application side. So there is\n> no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\"\n> 64 GB RAM 16 vCPU\".\n> The Application dev team has primary keys and foreign keys on tables so\n> they are unable to partition the tables as well due to limitations of\n> postgres partitioning. Columns in WHERE clauses are not constant in all\n> queries to decide partition keys.\n>\n> 1. Does DB need more CPU considering this kind of load?\n> 2. Can the query be tuned further? It is already using indexes(Bitmap\n> though).\n> 3. Will connection pooling resolve the CPU Spike issues?\n>\n> Also pasting Query and plans below.\n>\n> ----------------------exampleCount 1. Without\n> internalexamplecode-----------------------\n>\n> lmp_examples=> explain analyze with exampleCount as ( select\n> examplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\n> j.facilitycode in ('ABCD') and j.internalexamplecode in\n> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n> 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\n> group by j.examplestatuscode)\n> lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count\n> from exampleCount jc right outer join examplestatus js on\n> jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\n> time=88.847..88.850 rows=9 loops=1)\n> Group Key: js.examplestatuscode\n> CTE examplecount\n> -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\n> time=88.803..88.805 rows=5 loops=1)\n> Group Key: j.examplestatuscode\n> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n> rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Filter: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])))\n> Rows Removed by Filter: 3\n> 
Heap Blocks: exact=18307\n> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n> (actual time=15.707..15.707 rows=0 loops=1)\n> -> Bitmap Index Scan on example_list9_idx\n> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702\n> rows=62851 loops=1)\n> Index Cond: (((countrycode)::text =\n> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n> time zone))\n> -> Bitmap Index Scan on example_list10_idx\n> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n> Index Cond: (examplestartdatetime IS NULL)\n> -> Hash Left Join (cost=0.13..1.29 rows=9 width=4) (actual\n> time=88.831..88.840 rows=9 loops=1)\n> Hash Cond: ((js.examplestatuscode)::text =\n> (jc.examplestatuscode)::text)\n> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9\n> width=4) (actual time=0.004..0.007 rows=9 loops=1)\n> -> Hash (cost=0.08..0.08 rows=4 width=16) (actual\n> time=88.817..88.817 rows=5 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n> width=16) (actual time=88.807..88.812 rows=5 loops=1)\n> Planning Time: 0.979 ms\n> Execution Time: 89.036 ms\n> (23 rows)\n>\n>\n> ----------------exampleCount 2. With\n> internalexamplecode---------------------------------\n>\n>\n> lmp_examples=> explain analyze with exampleCount as ( select\n> examplestatuscode,count(1) stat_count from example j where 1=1 and\n> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n> j.internalexamplecode in ('005','006','007','005') and\n> ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19\n> 00:00:00' ) or j.examplestartdatetime IS NULL ) group by\n> j.examplestatuscode)\n> lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0)\n> stat_count from exampleCount jc right outer join examplestatus js on\n> jc.examplestatuscode=js.examplestatuscode;\n>\n>\n> QUERY PLAN\n>\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=79453.94..79455.10 rows=9 width=12) (actual\n> time=89.660..89.669 rows=9 loops=1)\n> Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)\n> CTE examplecount\n> -> HashAggregate (cost=79453.77..79453.81 rows=4 width=12) (actual\n> time=89.638..89.640 rows=5 loops=1)\n> Group Key: j.examplestatuscode\n> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n> rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)\n> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Filter: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n> ANY ('{005,006,007,005}'::text[])))\n> Rows Removed by Filter: 3\n> Heap 
Blocks: exact=18307\n> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n> (actual time=15.483..15.483 rows=0 loops=1)\n> -> Bitmap Index Scan on example_list9_idx\n> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478\n> rows=62851 loops=1)\n> Index Cond: (((countrycode)::text =\n> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n> time zone))\n> -> Bitmap Index Scan on example_list10_idx\n> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n> Index Cond: (examplestartdatetime IS NULL)\n> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n> (actual time=0.003..0.005 rows=9 loops=1)\n> -> Hash (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651\n> rows=5 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n> width=24) (actual time=89.641..89.647 rows=5 loops=1)\n> Planning Time: 0.470 ms\n> Execution Time: 89.737 ms\n>\n> ------------------------exampleSelect-----------------------------------\n>\n>\n> lmp_examples=> explain analyze select j.id from example j where 1=1 and\n> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n> j.examplestatuscode in ('101') and j.internalexamplecode in\n> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n> 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)\n> ORDER BY createddate DESC limit 10;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=71286.65..71286.68 rows=10 width=12) (actual\n> time=47.351..47.359 rows=10 loops=1)\n> -> Sort (cost=71286.65..71335.31 rows=19462 width=12) (actual\n> time=47.349..47.352 rows=10 loops=1)\n> Sort Key: createddate DESC\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Bitmap Heap Scan on example j (cost=1176.77..70866.09\n> rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)\n> Recheck Cond: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n> '101'::text) AND ((internalexamplecode)::text = ANY\n> ('{005,006,007,005}'::text[])))\n> Filter: (((examplestartdatetime >= '2020-05-18\n> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n> (examplestartdatetime IS NULL))\n> Rows Removed by Filter: 38724\n> Heap Blocks: exact=20923\n> -> Bitmap Index Scan on example_list1_idx\n> (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939\n> rows=41254 loops=1)\n> Index Cond: (((countrycode)::text = 'AD'::text) AND\n> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n> '101'::text) AND ((internalexamplecode)::text = ANY\n> ('{005,006,007,005}'::text[])))\n> Planning Time: 0.398 ms\n> Execution Time: 47.416 ms\n>\n> Regards,\n> Aditya.\n>\n\nHi,We have an application where one of the APIs calling queries(attached) is spiking the CPU to 100% during load testing.However, queries are making use of indexes(Bitmap Index and Bitmap Heap scan though). When run separately on DB queries hardly take less than 200 ms. 
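
The exampleSelect plan above is also worth a second look: the bitmap index scan
returns 41,254 candidate rows and the date-range predicate then discards 38,724
of them as a Filter, which is per-row work on heap pages that were already
fetched. One possible step for question 2, shown only as a sketch with the
column names taken from the posted plans, is to add the timestamp column to the
composite index so the range moves into the Index Cond:

-- Sketch only: extend the composite index with examplestartdatetime so the
-- BETWEEN predicate can be evaluated inside the index scan instead of as a
-- Filter. CONCURRENTLY avoids blocking writes while the index is built.
CREATE INDEX CONCURRENTLY example_list_date_idx
    ON example (countrycode, facilitycode, examplestatuscode,
                internalexamplecode, examplestartdatetime);

-- Rows with a NULL examplestartdatetime are matched by the separate OR branch
-- and can keep using the existing example_list10_idx.
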
Is CPU spiking due to Bitmap Heap Scan?These queries are being called thousands of times. Application team says they have handled connection pooling from the Application side. So there is no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\" 64 GB RAM 16 vCPU\".  The Application dev team has primary keys and foreign keys on tables so they are unable to partition the tables as well due to limitations of postgres partitioning. Columns in WHERE clauses are not constant in all queries to decide partition keys.1. Does DB need more CPU considering this kind of load? 2. Can the query be tuned further? It is already using indexes(Bitmap though).3. Will connection pooling resolve the CPU Spike issues?Also pasting Query and plans below.----------------------exampleCount 1. Without internalexamplecode-----------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=79353.80..79353.89 rows=9 width=12) (actual time=88.847..88.850 rows=9 loops=1)   Group Key: js.examplestatuscode   CTE examplecount     ->  HashAggregate  (cost=79352.42..79352.46 rows=4 width=4) (actual time=88.803..88.805 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.707..15.707 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) 
AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Hash Left Join  (cost=0.13..1.29 rows=9 width=4) (actual time=88.831..88.840 rows=9 loops=1)         Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)         ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.004..0.007 rows=9 loops=1)         ->  Hash  (cost=0.08..0.08 rows=4 width=16) (actual time=88.817..88.817 rows=5 loops=1)               Buckets: 1024  Batches: 1  Memory Usage: 9kB               ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=16) (actual time=88.807..88.812 rows=5 loops=1) Planning Time: 0.979 ms Execution Time: 89.036 ms(23 rows)----------------exampleCount 2. With internalexamplecode---------------------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode,count(1) stat_count from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=79453.94..79455.10 rows=9 width=12) (actual time=89.660..89.669 rows=9 loops=1)   Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)   CTE examplecount     ->  HashAggregate  (cost=79453.77..79453.81 rows=4 width=12) (actual time=89.638..89.640 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 
3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.483..15.483 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.003..0.005 rows=9 loops=1)   ->  Hash  (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651 rows=5 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=24) (actual time=89.641..89.647 rows=5 loops=1) Planning Time: 0.470 ms Execution Time: 89.737 ms------------------------exampleSelect-----------------------------------lmp_examples=> explain analyze select j.id from example j where 1=1  and j.countrycode = 'AD'  and j.facilitycode in ('ABCD') and j.examplestatuscode in ('101') and j.internalexamplecode in ('005','006','007','005')  and ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)  ORDER BY createddate DESC limit 10;                                                                                                          QUERY PLAN                                                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=71286.65..71286.68 rows=10 width=12) (actual time=47.351..47.359 rows=10 loops=1)   ->  Sort  (cost=71286.65..71335.31 rows=19462 width=12) (actual time=47.349..47.352 rows=10 loops=1)         Sort Key: createddate DESC         Sort Method: top-N heapsort  Memory: 25kB         ->  Bitmap Heap Scan on example j  (cost=1176.77..70866.09 rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)               Recheck Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))               Filter: (((examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))               Rows Removed by Filter: 38724               Heap Blocks: exact=20923               ->  Bitmap Index Scan on example_list1_idx  (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939 rows=41254 loops=1)                     Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[]))) Planning Time: 0.398 ms Execution Time: 47.416 msRegards,Aditya.", "msg_date": "Mon, 28 Sep 2020 
21:21:28 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AWS RDS PostgreSQL CPU Spiking to 100%" }, { "msg_contents": "We faced a similar issue, adding RDS proxy in front of RDS Postgres can\nhelp.\nIn our situation, there were a lot of connects/disconnects from Lambda\nfunctions although concurrency of Lambda was 100 only.\nAnd adding connection pooler(RDS proxy) helped us to reduce the CPU load\nfrom 100% to 30%\n\nHappy to help :)\nPrince Pathria Systems Engineer | Certified Kubernetes Administrator | AWS\nCertified Solutions Architect Evive +91 9478670472 goevive.com\n\n\nOn Mon, Sep 28, 2020 at 9:21 PM aditya desai <[email protected]> wrote:\n\n>\n>> Hi,\n>> We have an application where one of the APIs calling queries(attached) is\n>> spiking the CPU to 100% during load testing.\n>> However, queries are making use of indexes(Bitmap Index and Bitmap Heap\n>> scan though). When run separately on DB queries hardly take less than 200\n>> ms. Is CPU spiking due to Bitmap Heap Scan?\n>> These queries are being called thousands of times. Application team says\n>> they have handled connection pooling from the Application side. So there is\n>> no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\"\n>> 64 GB RAM 16 vCPU\".\n>> The Application dev team has primary keys and foreign keys on tables so\n>> they are unable to partition the tables as well due to limitations of\n>> postgres partitioning. Columns in WHERE clauses are not constant in all\n>> queries to decide partition keys.\n>>\n>> 1. Does DB need more CPU considering this kind of load?\n>> 2. Can the query be tuned further? It is already using indexes(Bitmap\n>> though).\n>> 3. Will connection pooling resolve the CPU Spike issues?\n>>\n>> Also pasting Query and plans below.\n>>\n>> ----------------------exampleCount 1. 
Without\n>> internalexamplecode-----------------------\n>>\n>> lmp_examples=> explain analyze with exampleCount as ( select\n>> examplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\n>> j.facilitycode in ('ABCD') and j.internalexamplecode in\n>> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n>> 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\n>> group by j.examplestatuscode)\n>> lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0)\n>> stat_count from exampleCount jc right outer join examplestatus js on\n>> jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\n>> time=88.847..88.850 rows=9 loops=1)\n>> Group Key: js.examplestatuscode\n>> CTE examplecount\n>> -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\n>> time=88.803..88.805 rows=5 loops=1)\n>> Group Key: j.examplestatuscode\n>> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n>> rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n>> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>> (examplestartdatetime IS NULL))\n>> Filter: (((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>> ANY ('{005,006,007,005}'::text[])))\n>> Rows Removed by Filter: 3\n>> Heap Blocks: exact=18307\n>> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n>> (actual time=15.707..15.707 rows=0 loops=1)\n>> -> Bitmap Index Scan on example_list9_idx\n>> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702\n>> rows=62851 loops=1)\n>> Index Cond: (((countrycode)::text =\n>> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n>> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n>> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n>> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n>> time zone))\n>> -> Bitmap Index Scan on example_list10_idx\n>> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n>> Index Cond: (examplestartdatetime IS NULL)\n>> -> Hash Left Join (cost=0.13..1.29 rows=9 width=4) (actual\n>> time=88.831..88.840 rows=9 loops=1)\n>> Hash Cond: ((js.examplestatuscode)::text =\n>> (jc.examplestatuscode)::text)\n>> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9\n>> width=4) (actual time=0.004..0.007 rows=9 loops=1)\n>> -> Hash (cost=0.08..0.08 rows=4 width=16) (actual\n>> time=88.817..88.817 rows=5 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n>> width=16) (actual time=88.807..88.812 rows=5 loops=1)\n>> Planning Time: 0.979 ms\n>> Execution Time: 89.036 ms\n>> (23 rows)\n>>\n>>\n>> 
----------------exampleCount 2. With\n>> internalexamplecode---------------------------------\n>>\n>>\n>> lmp_examples=> explain analyze with exampleCount as ( select\n>> examplestatuscode,count(1) stat_count from example j where 1=1 and\n>> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n>> j.internalexamplecode in ('005','006','007','005') and\n>> ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19\n>> 00:00:00' ) or j.examplestartdatetime IS NULL ) group by\n>> j.examplestatuscode)\n>> lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0)\n>> stat_count from exampleCount jc right outer join examplestatus js on\n>> jc.examplestatuscode=js.examplestatuscode;\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Left Join (cost=79453.94..79455.10 rows=9 width=12) (actual\n>> time=89.660..89.669 rows=9 loops=1)\n>> Hash Cond: ((js.examplestatuscode)::text =\n>> (jc.examplestatuscode)::text)\n>> CTE examplecount\n>> -> HashAggregate (cost=79453.77..79453.81 rows=4 width=12) (actual\n>> time=89.638..89.640 rows=5 loops=1)\n>> Group Key: j.examplestatuscode\n>> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n>> rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)\n>> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>> (examplestartdatetime IS NULL))\n>> Filter: (((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>> ANY ('{005,006,007,005}'::text[])))\n>> Rows Removed by Filter: 3\n>> Heap Blocks: exact=18307\n>> -> BitmapOr (cost=1547.81..1547.81 rows=40538 width=0)\n>> (actual time=15.483..15.483 rows=0 loops=1)\n>> -> Bitmap Index Scan on example_list9_idx\n>> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478\n>> rows=62851 loops=1)\n>> Index Cond: (((countrycode)::text =\n>> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n>> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n>> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n>> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n>> time zone))\n>> -> Bitmap Index Scan on example_list10_idx\n>> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n>> Index Cond: (examplestartdatetime IS NULL)\n>> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n>> (actual time=0.003..0.005 rows=9 loops=1)\n>> -> Hash (cost=0.08..0.08 rows=4 width=24) (actual\n>> time=89.650..89.651 rows=5 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n>> width=24) (actual time=89.641..89.647 rows=5 loops=1)\n>> Planning Time: 0.470 ms\n>> Execution Time: 89.737 ms\n>>\n>> ------------------------exampleSelect-----------------------------------\n>>\n>>\n>> lmp_examples=> explain analyze select j.id 
from example j where 1=1 and\n>> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n>> j.examplestatuscode in ('101') and j.internalexamplecode in\n>> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n>> 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)\n>> ORDER BY createddate DESC limit 10;\n>>\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=71286.65..71286.68 rows=10 width=12) (actual\n>> time=47.351..47.359 rows=10 loops=1)\n>> -> Sort (cost=71286.65..71335.31 rows=19462 width=12) (actual\n>> time=47.349..47.352 rows=10 loops=1)\n>> Sort Key: createddate DESC\n>> Sort Method: top-N heapsort Memory: 25kB\n>> -> Bitmap Heap Scan on example j (cost=1176.77..70866.09\n>> rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)\n>> Recheck Cond: (((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n>> '101'::text) AND ((internalexamplecode)::text = ANY\n>> ('{005,006,007,005}'::text[])))\n>> Filter: (((examplestartdatetime >= '2020-05-18\n>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>> (examplestartdatetime IS NULL))\n>> Rows Removed by Filter: 38724\n>> Heap Blocks: exact=20923\n>> -> Bitmap Index Scan on example_list1_idx\n>> (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939\n>> rows=41254 loops=1)\n>> Index Cond: (((countrycode)::text = 'AD'::text) AND\n>> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n>> '101'::text) AND ((internalexamplecode)::text = ANY\n>> ('{005,006,007,005}'::text[])))\n>> Planning Time: 0.398 ms\n>> Execution Time: 47.416 ms\n>>\n>> Regards,\n>> Aditya.\n>>\n>\n\nWe faced a similar issue, adding RDS proxy in front of RDS Postgres can help.In our situation, there were a lot of connects/disconnects from Lambda functions although concurrency of Lambda was 100 only.And adding connection pooler(RDS proxy) helped us to reduce the CPU load from 100% to 30%Happy to help :)Prince Pathria\nSystems Engineer | Certified Kubernetes Administrator | AWS Certified Solutions Architect\nEvive\n+91 9478670472\ngoevive.comOn Mon, Sep 28, 2020 at 9:21 PM aditya desai <[email protected]> wrote:Hi,We have an application where one of the APIs calling queries(attached) is spiking the CPU to 100% during load testing.However, queries are making use of indexes(Bitmap Index and Bitmap Heap scan though). When run separately on DB queries hardly take less than 200 ms. Is CPU spiking due to Bitmap Heap Scan?These queries are being called thousands of times. Application team says they have handled connection pooling from the Application side. So there is no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\" 64 GB RAM 16 vCPU\".  The Application dev team has primary keys and foreign keys on tables so they are unable to partition the tables as well due to limitations of postgres partitioning. Columns in WHERE clauses are not constant in all queries to decide partition keys.1. Does DB need more CPU considering this kind of load? 2. Can the query be tuned further? It is already using indexes(Bitmap though).3. 
Will connection pooling resolve the CPU Spike issues?Also pasting Query and plans below.----------------------exampleCount 1. Without internalexamplecode-----------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=79353.80..79353.89 rows=9 width=12) (actual time=88.847..88.850 rows=9 loops=1)   Group Key: js.examplestatuscode   CTE examplecount     ->  HashAggregate  (cost=79352.42..79352.46 rows=4 width=4) (actual time=88.803..88.805 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.707..15.707 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                       ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Hash Left Join  (cost=0.13..1.29 rows=9 width=4) (actual time=88.831..88.840 rows=9 loops=1)         Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)         ->  
Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.004..0.007 rows=9 loops=1)         ->  Hash  (cost=0.08..0.08 rows=4 width=16) (actual time=88.817..88.817 rows=5 loops=1)               Buckets: 1024  Batches: 1  Memory Usage: 9kB               ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=16) (actual time=88.807..88.812 rows=5 loops=1) Planning Time: 0.979 ms Execution Time: 89.036 ms(23 rows)----------------exampleCount 2. With internalexamplecode---------------------------------lmp_examples=> explain analyze with exampleCount as ( select examplestatuscode,count(1) stat_count from example j where 1=1 and j.countrycode = 'AD'   and j.facilitycode in ('ABCD') and j.internalexamplecode in ('005','006','007','005') and ((j.examplestartdatetime  between '2020-05-18 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )  group by j.examplestatuscode)lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0) stat_count from exampleCount jc right outer join examplestatus js on jc.examplestatuscode=js.examplestatuscode;                                                                                                                                                                                 QUERY PLAN                                                                                                                                                     ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Left Join  (cost=79453.94..79455.10 rows=9 width=12) (actual time=89.660..89.669 rows=9 loops=1)   Hash Cond: ((js.examplestatuscode)::text = (jc.examplestatuscode)::text)   CTE examplecount     ->  HashAggregate  (cost=79453.77..79453.81 rows=4 width=12) (actual time=89.638..89.640 rows=5 loops=1)           Group Key: j.examplestatuscode           ->  Bitmap Heap Scan on example j  (cost=1547.81..79251.08 rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)                 Recheck Cond: ((((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))                 Filter: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))                 Rows Removed by Filter: 3                 Heap Blocks: exact=18307                 ->  BitmapOr  (cost=1547.81..1547.81 rows=40538 width=0) (actual time=15.483..15.483 rows=0 loops=1)                       ->  Bitmap Index Scan on example_list9_idx  (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478 rows=62851 loops=1)                             Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone))                 
      ->  Bitmap Index Scan on example_list10_idx  (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)                             Index Cond: (examplestartdatetime IS NULL)   ->  Seq Scan on examplestatus js  (cost=0.00..1.09 rows=9 width=4) (actual time=0.003..0.005 rows=9 loops=1)   ->  Hash  (cost=0.08..0.08 rows=4 width=24) (actual time=89.650..89.651 rows=5 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         ->  CTE Scan on examplecount jc  (cost=0.00..0.08 rows=4 width=24) (actual time=89.641..89.647 rows=5 loops=1) Planning Time: 0.470 ms Execution Time: 89.737 ms------------------------exampleSelect-----------------------------------lmp_examples=> explain analyze select j.id from example j where 1=1  and j.countrycode = 'AD'  and j.facilitycode in ('ABCD') and j.examplestatuscode in ('101') and j.internalexamplecode in ('005','006','007','005')  and ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)  ORDER BY createddate DESC limit 10;                                                                                                          QUERY PLAN                                                    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=71286.65..71286.68 rows=10 width=12) (actual time=47.351..47.359 rows=10 loops=1)   ->  Sort  (cost=71286.65..71335.31 rows=19462 width=12) (actual time=47.349..47.352 rows=10 loops=1)         Sort Key: createddate DESC         Sort Method: top-N heapsort  Memory: 25kB         ->  Bitmap Heap Scan on example j  (cost=1176.77..70866.09 rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)               Recheck Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])))               Filter: (((examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without time zone)) OR (examplestartdatetime IS NULL))               Rows Removed by Filter: 38724               Heap Blocks: exact=20923               ->  Bitmap Index Scan on example_list1_idx  (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939 rows=41254 loops=1)                     Index Cond: (((countrycode)::text = 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text = '101'::text) AND ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[]))) Planning Time: 0.398 ms Execution Time: 47.416 msRegards,Aditya.", "msg_date": "Mon, 28 Sep 2020 21:39:38 +0530", "msg_from": "Prince Pathria <[email protected]>", "msg_from_op": false, "msg_subject": "Re: AWS RDS PostgreSQL CPU Spiking to 100%" }, { "msg_contents": "Thanks, I'll check it out.\n\nOn Mon, Sep 28, 2020 at 9:40 PM Prince Pathria <[email protected]>\nwrote:\n\n> We faced a similar issue, adding RDS proxy in front of RDS Postgres can\n> help.\n> In our situation, there were a lot of connects/disconnects from Lambda\n> functions although concurrency of Lambda was 100 only.\n> And adding connection pooler(RDS proxy) helped us to reduce the CPU load\n> from 100% to 30%\n>\n> Happy to help :)\n> Prince Pathria Systems Engineer | Certified Kubernetes Administrator 
|\n> AWS Certified Solutions Architect Evive +91 9478670472 goevive.com\n>\n>\n> On Mon, Sep 28, 2020 at 9:21 PM aditya desai <[email protected]> wrote:\n>\n>>\n>>> Hi,\n>>> We have an application where one of the APIs calling queries(attached)\n>>> is spiking the CPU to 100% during load testing.\n>>> However, queries are making use of indexes(Bitmap Index and Bitmap Heap\n>>> scan though). When run separately on DB queries hardly take less than 200\n>>> ms. Is CPU spiking due to Bitmap Heap Scan?\n>>> These queries are being called thousands of times. Application team says\n>>> they have handled connection pooling from the Application side. So there is\n>>> no connection pooling here from DB side. Current db instance size is \"db.m4.4xlarge\"\n>>> 64 GB RAM 16 vCPU\".\n>>> The Application dev team has primary keys and foreign keys on tables so\n>>> they are unable to partition the tables as well due to limitations of\n>>> postgres partitioning. Columns in WHERE clauses are not constant in all\n>>> queries to decide partition keys.\n>>>\n>>> 1. Does DB need more CPU considering this kind of load?\n>>> 2. Can the query be tuned further? It is already using indexes(Bitmap\n>>> though).\n>>> 3. Will connection pooling resolve the CPU Spike issues?\n>>>\n>>> Also pasting Query and plans below.\n>>>\n>>> ----------------------exampleCount 1. Without\n>>> internalexamplecode-----------------------\n>>>\n>>> lmp_examples=> explain analyze with exampleCount as ( select\n>>> examplestatuscode from example j where 1=1 and j.countrycode = 'AD' and\n>>> j.facilitycode in ('ABCD') and j.internalexamplecode in\n>>> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n>>> 00:00:00' and '2020-08-19 00:00:00' ) or j.examplestartdatetime IS NULL )\n>>> group by j.examplestatuscode)\n>>> lmp_examples-> select js.examplestatuscode,COALESCE(count(*),0)\n>>> stat_count from exampleCount jc right outer join examplestatus js on\n>>> jc.examplestatuscode=js.examplestatuscode group by js.examplestatuscode ;\n>>>\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> HashAggregate (cost=79353.80..79353.89 rows=9 width=12) (actual\n>>> time=88.847..88.850 rows=9 loops=1)\n>>> Group Key: js.examplestatuscode\n>>> CTE examplecount\n>>> -> HashAggregate (cost=79352.42..79352.46 rows=4 width=4) (actual\n>>> time=88.803..88.805 rows=5 loops=1)\n>>> Group Key: j.examplestatuscode\n>>> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n>>> rows=40538 width=4) (actual time=18.424..69.658 rows=62851 loops=1)\n>>> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>>> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n>>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>>> (examplestartdatetime IS NULL))\n>>> Filter: (((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>>> ANY ('{005,006,007,005}'::text[])))\n>>> Rows Removed by Filter: 3\n>>> Heap Blocks: exact=18307\n>>> -> BitmapOr 
(cost=1547.81..1547.81 rows=40538\n>>> width=0) (actual time=15.707..15.707 rows=0 loops=1)\n>>> -> Bitmap Index Scan on example_list9_idx\n>>> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.702..15.702\n>>> rows=62851 loops=1)\n>>> Index Cond: (((countrycode)::text =\n>>> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n>>> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n>>> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n>>> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n>>> time zone))\n>>> -> Bitmap Index Scan on example_list10_idx\n>>> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n>>> Index Cond: (examplestartdatetime IS NULL)\n>>> -> Hash Left Join (cost=0.13..1.29 rows=9 width=4) (actual\n>>> time=88.831..88.840 rows=9 loops=1)\n>>> Hash Cond: ((js.examplestatuscode)::text =\n>>> (jc.examplestatuscode)::text)\n>>> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9\n>>> width=4) (actual time=0.004..0.007 rows=9 loops=1)\n>>> -> Hash (cost=0.08..0.08 rows=4 width=16) (actual\n>>> time=88.817..88.817 rows=5 loops=1)\n>>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>>> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n>>> width=16) (actual time=88.807..88.812 rows=5 loops=1)\n>>> Planning Time: 0.979 ms\n>>> Execution Time: 89.036 ms\n>>> (23 rows)\n>>>\n>>>\n>>> ----------------exampleCount 2. With\n>>> internalexamplecode---------------------------------\n>>>\n>>>\n>>> lmp_examples=> explain analyze with exampleCount as ( select\n>>> examplestatuscode,count(1) stat_count from example j where 1=1 and\n>>> j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n>>> j.internalexamplecode in ('005','006','007','005') and\n>>> ((j.examplestartdatetime between '2020-05-18 00:00:00' and '2020-08-19\n>>> 00:00:00' ) or j.examplestartdatetime IS NULL ) group by\n>>> j.examplestatuscode)\n>>> lmp_examples-> select js.examplestatuscode,COALESCE(stat_count,0)\n>>> stat_count from exampleCount jc right outer join examplestatus js on\n>>> jc.examplestatuscode=js.examplestatuscode;\n>>>\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Hash Left Join (cost=79453.94..79455.10 rows=9 width=12) (actual\n>>> time=89.660..89.669 rows=9 loops=1)\n>>> Hash Cond: ((js.examplestatuscode)::text =\n>>> (jc.examplestatuscode)::text)\n>>> CTE examplecount\n>>> -> HashAggregate (cost=79453.77..79453.81 rows=4 width=12)\n>>> (actual time=89.638..89.640 rows=5 loops=1)\n>>> Group Key: j.examplestatuscode\n>>> -> Bitmap Heap Scan on example j (cost=1547.81..79251.08\n>>> rows=40538 width=4) (actual time=18.193..69.710 rows=62851 loops=1)\n>>> Recheck Cond: ((((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND ((internalexamplecode)::text =\n>>> ANY ('{005,006,007,005}'::text[])) AND (examplestartdatetime >= '2020-05-18\n>>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>>> (examplestartdatetime IS NULL))\n>>> Filter: (((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND 
((internalexamplecode)::text =\n>>> ANY ('{005,006,007,005}'::text[])))\n>>> Rows Removed by Filter: 3\n>>> Heap Blocks: exact=18307\n>>> -> BitmapOr (cost=1547.81..1547.81 rows=40538\n>>> width=0) (actual time=15.483..15.483 rows=0 loops=1)\n>>> -> Bitmap Index Scan on example_list9_idx\n>>> (cost=0.00..1523.10 rows=40538 width=0) (actual time=15.477..15.478\n>>> rows=62851 loops=1)\n>>> Index Cond: (((countrycode)::text =\n>>> 'AD'::text) AND ((facilitycode)::text = 'ABCD'::text) AND\n>>> ((internalexamplecode)::text = ANY ('{005,006,007,005}'::text[])) AND\n>>> (examplestartdatetime >= '2020-05-18 00:00:00'::timestamp without time\n>>> zone) AND (examplestartdatetime <= '2020-08-19 00:00:00'::timestamp without\n>>> time zone))\n>>> -> Bitmap Index Scan on example_list10_idx\n>>> (cost=0.00..4.44 rows=1 width=0) (actual time=0.004..0.004 rows=3 loops=1)\n>>> Index Cond: (examplestartdatetime IS NULL)\n>>> -> Seq Scan on examplestatus js (cost=0.00..1.09 rows=9 width=4)\n>>> (actual time=0.003..0.005 rows=9 loops=1)\n>>> -> Hash (cost=0.08..0.08 rows=4 width=24) (actual\n>>> time=89.650..89.651 rows=5 loops=1)\n>>> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>>> -> CTE Scan on examplecount jc (cost=0.00..0.08 rows=4\n>>> width=24) (actual time=89.641..89.647 rows=5 loops=1)\n>>> Planning Time: 0.470 ms\n>>> Execution Time: 89.737 ms\n>>>\n>>> ------------------------exampleSelect-----------------------------------\n>>>\n>>>\n>>> lmp_examples=> explain analyze select j.id from example j where 1=1\n>>> and j.countrycode = 'AD' and j.facilitycode in ('ABCD') and\n>>> j.examplestatuscode in ('101') and j.internalexamplecode in\n>>> ('005','006','007','005') and ((j.examplestartdatetime between '2020-05-18\n>>> 00:00:00' and '2020-08-19 00:00:00') or j.examplestartdatetime IS NULL)\n>>> ORDER BY createddate DESC limit 10;\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=71286.65..71286.68 rows=10 width=12) (actual\n>>> time=47.351..47.359 rows=10 loops=1)\n>>> -> Sort (cost=71286.65..71335.31 rows=19462 width=12) (actual\n>>> time=47.349..47.352 rows=10 loops=1)\n>>> Sort Key: createddate DESC\n>>> Sort Method: top-N heapsort Memory: 25kB\n>>> -> Bitmap Heap Scan on example j (cost=1176.77..70866.09\n>>> rows=19462 width=12) (actual time=15.133..46.555 rows=2530 loops=1)\n>>> Recheck Cond: (((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n>>> '101'::text) AND ((internalexamplecode)::text = ANY\n>>> ('{005,006,007,005}'::text[])))\n>>> Filter: (((examplestartdatetime >= '2020-05-18\n>>> 00:00:00'::timestamp without time zone) AND (examplestartdatetime <=\n>>> '2020-08-19 00:00:00'::timestamp without time zone)) OR\n>>> (examplestartdatetime IS NULL))\n>>> Rows Removed by Filter: 38724\n>>> Heap Blocks: exact=20923\n>>> -> Bitmap Index Scan on example_list1_idx\n>>> (cost=0.00..1171.90 rows=33211 width=0) (actual time=9.938..9.939\n>>> rows=41254 loops=1)\n>>> Index Cond: (((countrycode)::text = 'AD'::text) AND\n>>> ((facilitycode)::text = 'ABCD'::text) AND ((examplestatuscode)::text =\n>>> '101'::text) AND ((internalexamplecode)::text = ANY\n>>> ('{005,006,007,005}'::text[])))\n>>> Planning Time: 0.398 ms\n>>> Execution Time: 47.416 ms\n>>>\n>>> Regards,\n>>> Aditya.\n>>>\n>>\n\nThanks, I'll check it out. 
", "msg_date": "Wed, 30 Sep 2020 12:43:50 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: AWS RDS PostgreSQL CPU Spiking to 100%" } ]
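A minimal way to confirm whether connect/disconnect churn (rather than the queries themselves) is what drives the CPU, before putting pgbouncer or RDS Proxy in front of the database, is to sample pg_stat_activity during the load test. This is only a sketch; the one-minute window is an illustrative choice, not a recommendation:

-- How many backends exist and what they are doing.
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;

-- Backends created in the last minute; a large number under steady load
-- means the application is reconnecting constantly, which is exactly the
-- pattern a connection pooler removes.
SELECT count(*) AS backends_started_last_minute
FROM pg_stat_activity
WHERE backend_start > now() - interval '1 minute';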
[ { "msg_contents": "Hi,\n\nGood Morning!\n\nPostgres Version : 11.6 (AWS Native Postgres/AWS Aurora tried on both flavours).\n\nWhen i'm joining two tables the primary index is not being used. While is use in clause with values then the index is being used. I have reindexed all the tables, run the auto vaccum as well.\n\n\npgwfc01q=> select count(*) from chr_simple_val;\n count\n-------\n 13158\n(1 row)\n\npgwfc01q=> select count(*) from chr_emp_position;\n count\n-------\n 228\n(1 row)\n\n\nThe primary key for the table chr_Simple_val contains OID. Still not using the index.\n\nI'm sharing the explain plan over here..\n\npgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from chr_emp_position cep inner join chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID;\n QUERY P\nLAN\n--------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------\n Hash Join (cost=49299.91..51848.83 rows=651 width=42) (actual time=3512.692..3797.583 rows=228 loops=1)\n Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n -> Seq Scan on chr_emp_position cep (cost=0.00..2437.77 rows=436 width=11) (actual time=44.713..329.435 rows=22\n8 loops=1)\n Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))\n Rows Removed by Filter: 3695\n -> Hash (cost=49176.40..49176.40 rows=9881 width=31) (actual time=3467.907..3467.908 rows=13158 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 1031kB\n -> Seq Scan on chr_simple_val ctc (cost=0.00..49176.40 rows=9881 width=31) (actual time=2.191..3460.929 r\nows=13158 loops=1)\n Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_ty_static(\nvpd_key) AND f_sel_policy_prod_locale((ctc.*)::character varying, prod_locale_code))\n Rows Removed by Filter: 75771\n Planning Time: 0.297 ms\n Execution Time: 3797.768 ms\n(12 rows)\n\n\nThank you..\n\nRegards,\nRamesh G\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nGood Morning!\n\n\n\n\nPostgres Version :  11.6  (AWS Native Postgres/AWS Aurora  tried on both flavours).\n\n\n\n\nWhen i'm joining two tables the primary index is not being used.  While is use  in clause with values then the index is being used.  I have reindexed all the tables,  run the auto vaccum as well. \n\n\n\n\n\npgwfc01q=> select count(*) from chr_simple_val;\n count\n-------\n 13158\n(1 row)\n\npgwfc01q=> select count(*) from chr_emp_position;\n count\n-------\n   228\n(1 row)\n\n\n\n\n\n\nThe primary key for the table chr_Simple_val  contains OID.   Still not using the index.\n\n\n\n\nI'm sharing the explain plan over here..  
\n\n\n\n\npgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code\n from chr_emp_position cep inner join chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID;\n                                                                                             \n                QUERY P\nLAN\n--------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=49299.91..51848.83 rows=651 width=42) (actual time=3512.692..3797.583 rows=228\n loops=1)\n   Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n   ->  Seq Scan on chr_emp_position cep  (cost=0.00..2437.77 rows=436 width=11) (actual time=44.713..329.435\n rows=22\n8 loops=1)\n         Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[]))\n AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character\n varying, prod_locale_code))\n         Rows Removed by Filter: 3695\n   ->  Hash  (cost=49176.40..49176.40 rows=9881 width=31) (actual time=3467.907..3467.908 rows=13158\n loops=1)\n         Buckets: 16384  Batches: 1  Memory Usage: 1031kB\n         ->  Seq Scan on chr_simple_val ctc  (cost=0.00..49176.40 rows=9881 width=31) (actual\n time=2.191..3460.929 r\nows=13158 loops=1)\n               Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[]))\n AND f_sel_policy_ty_static(\nvpd_key) AND f_sel_policy_prod_locale((ctc.*)::character\n varying, prod_locale_code))\n               Rows Removed by Filter: 75771\n Planning Time: 0.297 ms\n Execution Time: 3797.768 ms\n(12 rows)\n\n\n\n\n\n\nThank you.. \n\n\n\n\nRegards,\n\nRamesh G", "msg_date": "Sun, 13 Sep 2020 14:58:15 +0000", "msg_from": "\"Gopisetty, Ramesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Issue (Not using Index when joining two tables)." }, { "msg_contents": "On Sun, Sep 13, 2020 at 02:58:15PM +0000, Gopisetty, Ramesh wrote:\n>Hi,\n>\n>Good Morning!\n>\n>Postgres Version : 11.6 (AWS Native Postgres/AWS Aurora tried on both flavours).\n>\n>When i'm joining two tables the primary index is not being used. While is use in clause with values then the index is being used. I have reindexed all the tables, run the auto vaccum as well.\n>\n>\n>pgwfc01q=> select count(*) from chr_simple_val;\n> count\n>-------\n> 13158\n>(1 row)\n>\n>pgwfc01q=> select count(*) from chr_emp_position;\n> count\n>-------\n> 228\n>(1 row)\n>\n>\n>The primary key for the table chr_Simple_val contains OID. 
Still not using the index.\n>\n>I'm sharing the explain plan over here..\n>\n>pgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from chr_emp_position cep inner join chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID;\n> QUERY P\n>LAN\n>--------------------------------------------------------------------------------------------------------------------\n>----------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=49299.91..51848.83 rows=651 width=42) (actual time=3512.692..3797.583 rows=228 loops=1)\n> Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n> -> Seq Scan on chr_emp_position cep (cost=0.00..2437.77 rows=436 width=11) (actual time=44.713..329.435 rows=22\n>8 loops=1)\n> Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CH\n>R_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))\n> Rows Removed by Filter: 3695\n> -> Hash (cost=49176.40..49176.40 rows=9881 width=31) (actual time=3467.907..3467.908 rows=13158 loops=1)\n> Buckets: 16384 Batches: 1 Memory Usage: 1031kB\n> -> Seq Scan on chr_simple_val ctc (cost=0.00..49176.40 rows=9881 width=31) (actual time=2.191..3460.929 r\n>ows=13158 loops=1)\n> Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_ty_static(\n>vpd_key) AND f_sel_policy_prod_locale((ctc.*)::character varying, prod_locale_code))\n> Rows Removed by Filter: 75771\n> Planning Time: 0.297 ms\n> Execution Time: 3797.768 ms\n>(12 rows)\n>\n\nMost of the time (3460ms) is spent in the sequential scan on\nchr_simple_val, and the seqscan on chr_emp_position is taking ~330ms).\nCombined that's 3790ms out of 3797ms, so the join is pretty much\nirrelevant.\n\nEither the seqscans are causing a lot of I/O, or maybe the f_sel_*\nfunctions in the filter are expensive. Judging by how few rows are in\nthe tables (not sure how large the tables are), I'd guess it's the\nlatter ... Hard to say without knowing what the functions do etc.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 13 Sep 2020 18:47:45 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issue (Not using Index when joining two tables)." }, { "msg_contents": "Tomas Vondra <[email protected]> writes:\n> Most of the time (3460ms) is spent in the sequential scan on\n> chr_simple_val, and the seqscan on chr_emp_position is taking ~330ms).\n> Combined that's 3790ms out of 3797ms, so the join is pretty much\n> irrelevant.\n\n> Either the seqscans are causing a lot of I/O, or maybe the f_sel_*\n> functions in the filter are expensive. Judging by how few rows are in\n> the tables (not sure how large the tables are), I'd guess it's the\n> latter ... Hard to say without knowing what the functions do etc.\n\nI think the OP is wishing that the filter functions for the larger table\nwould be postponed till after the join condition is applied. I'm a\nlittle dubious that that's going to save anything meaningful; but maybe\nincreasing the cost attributed to those functions would persuade the\nplanner to try it that way.\n\nFirst though, does forcing a nestloop plan (turn off enable_hashjoin,\nand enable_mergejoin too if needed) produce the shape of plan you\nwant? And if so, is it actually faster? 
Only if those things are\ntrue is it going to be worth messing with costing parameters.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Sep 2020 13:07:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issue (Not using Index when joining two tables)." }, { "msg_contents": "Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[]))\nAND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND\nf_sel_policy_prod_locale((cep.*)::character\nvarying, prod_locale_code))\n\nThis looks like some stuff for row level security perhaps. My understanding\nis limited, but perhaps those restrictions are influencing the planners\naccess or reliance on stats.\n\nAlso, it would seem like you need the entire table since you don't have an\nexplicit where clause. Why would scanning an index and then also visiting\nevery row in the table be faster than just going directly to the table?\n\nFilter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CHR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))This looks like some stuff for row level security perhaps. My understanding is limited, but perhaps those restrictions are influencing the planners access or reliance on stats.Also, it would seem like you need the entire table since you don't have an explicit where clause. Why would scanning an index and then also visiting every row in the table be faster than just going directly to the table?", "msg_date": "Sun, 13 Sep 2020 20:51:00 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issue (Not using Index when joining two tables)." }, { "msg_contents": "@Michael Lewis<mailto:[email protected]>; @Tom Lane<mailto:[email protected]>; @Tomas Vondra<mailto:[email protected]>\n\nHi,\n\nThanks for looking into the problem/issue. Let me give more details about the functions... Yes, we are using row level security.\n\nActually, we have converted an Oracle VPD database (Virtual Private Databases - In short row level security) into postgresql. We have several functions available to filter or to provide the row level security.\n\nf_sel_policy_ty_static; f_sel_policy_all filters the tables where the vpd_key is provided initially.\nf_sel_policy_prod_locale filters the table where the prod_locale_code is provided initially.\n\nBefore running any queries in the database, we will set the context settings/row level security based on the function below..\n\n CALLvpd_filter(vpd_key=>'XXXX',mod_user=>'XXXXX',user_locale=>'en_XX',prod_locale=>'XX');\n\nThis will set the context variables and provide row level security. All the tables in our database consists of vpd_key which is a filter for to run the queries for a given client.\n\nThe tables mentioned below chr_emp_position and chr_simple_val consists of many rows and the functions filter them based on the vpd_key and prod_user_locale_code.\nOnce after providing the row level security we executed the query joining the tables.. And where the index is not being utlitized/ the query runs slower i.e., greater than 8seconds.\n\nThe normal structure of the tables will be like this..\n\nchr_emp_position --- has columns vpd_key,oid, home_Dept_oid, eff_date, start_Date,.....etc., (almost having 200+ columns). 
-- primary key is vpd_key and oid.\nchr_simple_Val --- has columns vpd_key, oid , category, description..et.c, (almost has around 70 columns). (primary key is vpd_key and oid)\n\nThe rows mentioned below are after setting the row level security on those tables ..\n\ni.e, after executing the function\n\n CALL vpd_filter(spv_vpd_key=>'XXXX',spv_mod_usr=>'XXXXX',spv_user_locale=>'en_XX',spv_prod_locale=>'XX');\n\n\npgwfc01q=> select count(*) from chr_simple_val;\n count\n-------\n 13158\n(1 row)\n\npgwfc01q=> select count(*) from chr_emp_position;\n count\n-------\n 228\n(1 row)\n\n\nThe primary key for the table chr_Simple_val contains OID. Still not using the index.\n\nI'm sharing the explain plan over here..\n\npgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from chr_emp_position cep inner join chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID;\n QUERY P\nLAN\n--------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------\n Hash Join (cost=49299.91..51848.83 rows=651 width=42) (actual time=3512.692..3797.583 rows=228 loops=1)\n Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n -> Seq Scan on chr_emp_position cep (cost=0.00..2437.77 rows=436 width=11) (actual time=44.713..329.435 rows=22\n8 loops=1)\n Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))\n Rows Removed by Filter: 3695\n -> Hash (cost=49176.40..49176.40 rows=9881 width=31) (actual time=3467.907..3467.908 rows=13158 loops=1)\n Buckets: 16384 Batches: 1 Memory Usage: 1031kB\n -> Seq Scan on chr_simple_val ctc (cost=0.00..49176.40 rows=9881 width=31) (actual time=2.191..3460.929 r\nows=13158 loops=1)\n Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_ty_static(\nvpd_key) AND f_sel_policy_prod_locale((ctc.*)::character varying, prod_locale_code))\nPlanning Time: 0.297 ms\n Execution Time: 3797.768 ms\n(12 rows)\n\n\nIf i don't set the context and run as a root user the explain plan is as below.. And it executes in milliseconds even without the index having the full table scan.\n\n\n 1. I'm not sure if my filters are time consuming. Most of the queries works except few. We hadn't seen the problem in Oracle. I'm not comparing between Oracle and Postgres here. I see both are two different flavors. but trying to get my query runs less than 8seconds.\n 2. I'm not sure why the index on chr_simple_val is not being used here vpd_key,oid. I'm confident if it uses index, it will/might be faster as it is looking for 2 or 3 home departments based on oid.\n 3. I'm not sure why even having the full scan it worked for the root user.\n 4. I'm not sure why the bitmap heap scan was not followed after setting the row level security. 
How to make the bitmap heap scan on chr_emp_position as i observed here.\n\nfyi.,\n\nRunning as a root user.\n\npgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from xxxx.chr_emp_position cep inner join wfnsch001.chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID and (ctc.vpd_key='COMMON' or ctc.vpd_key=cep.vpd_key) and cep.vpd_key='xxxxxxxxxx';\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------\n-------------------\n Hash Join (cost=5503.95..6742.82 rows=453 width=42) (actual time=131.241..154.201 rows=228 loops=1)\n Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n Join Filter: (((ctc.vpd_key)::text = 'NG_COMMON'::text) OR ((ctc.vpd_key)::text = (cep.vpd_key)::text))\n Rows Removed by Join Filter: 19770\n -> Bitmap Heap Scan on chr_emp_position cep (cost=10.05..362.25 rows=228 width=28) (actual time=0.056..0.253 ro\nws=228 loops=1)\n Recheck Cond: ((vpd_key)::text = 'xxxxxxxxxx'::text)\n Heap Blocks: exact=26\n -> Bitmap Index Scan on uq1_chr_emp_position (cost=0.00..9.99 rows=228 width=0) (actual time=0.041..0.041\n rows=228 loops=1)\n Index Cond: ((vpd_key)::text = 'xxxxxxxxxx'::text)\n -> Hash (cost=3600.29..3600.29 rows=88929 width=48) (actual time=130.826..130.826 rows=88929 loops=1)\n Buckets: 65536 (originally 65536) Batches: 4 (originally 2) Memory Usage: 3585kB\n -> Seq Scan on chr_simple_val ctc (cost=0.00..3600.29 rows=88929 width=48) (actual time=0.005..33.356 row\ns=88929 loops=1)\n Planning Time: 3.977 ms\n Execution Time: 154.535 ms\n(14 rows)\n\npgwfc01q=> select count(*) from wfnsch001.chr_emp_position;\n count\n-------\n 3923\n(1 row)\n\npgwfc01q=> select count(*) from wfnsch001.chr_Simple_Val;\n count\n-------\n 88929\n(1 row)\n\n\n\nI'm not sure if i'm thinking in the right way or not. (As of safety purpose, i have rebuilded indexes, analyzed, did vaccum on those tables). Sorry for the lengthy email and i'm trying to explain my best on this.\n\nThank you.\n\nRegards,\nRamesh G\n\n\n________________________________\nFrom: Michael Lewis <[email protected]>\nSent: Sunday, September 13, 2020 10:51 PM\nTo: Tom Lane <[email protected]>\nCc: Tomas Vondra <[email protected]>; Gopisetty, Ramesh <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Performance Issue (Not using Index when joining two tables).\n\nFilter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))\n\nThis looks like some stuff for row level security perhaps. My understanding is limited, but perhaps those restrictions are influencing the planners access or reliance on stats.\n\nAlso, it would seem like you need the entire table since you don't have an explicit where clause. Why would scanning an index and then also visiting every row in the table be faster than just going directly to the table?\n\n\n\n\n\n\n\n\n@Michael Lewis;\n\n@Tom Lane; \n@Tomas Vondra\n\n\n\n\n\nHi,\n\n\n\n\n\nThanks for looking into the problem/issue.    Let me give more details about the functions...    Yes,  we are using row level security. \n\n\n\n\nActually, we have converted an Oracle VPD database (Virtual Private Databases -  In short row level security)  into postgresql.   
We have several functions available to filter or to provide the row level security.\n\n\n\n\nf_sel_policy_ty_static;  f_sel_policy_all \n filters the tables where the vpd_key is provided initially.\n\n\nf_sel_policy_prod_locale \n filters the table where the prod_locale_code is provided initially.\n\n\n\n\n\nBefore running any queries in the database, we will set the context settings/row level security  based on the function below.. \n\n\n\n\n CALLvpd_filter(vpd_key=>'XXXX',mod_user=>'XXXXX',user_locale=>'en_XX',prod_locale=>'XX');\n\n\n\n\nThis will set the context variables and provide row level security.   All the tables in our database consists of vpd_key which is a filter for to run the queries for a given client. \n\n\n\n\nThe tables mentioned below chr_emp_position and chr_simple_val consists of many rows and the functions filter them based on the vpd_key and prod_user_locale_code.\n\nOnce after providing the row level security we executed the query joining the tables..    And where the index is not being utlitized/ the query runs slower i.e., greater than 8seconds.\n\n\n\n\n\nThe normal structure of the tables will be like this.. \n\n\n\n\nchr_emp_position  --- has columns  vpd_key,oid, home_Dept_oid, eff_date, start_Date,.....etc.,    (almost having 200+ columns).   -- primary key is  vpd_key and oid.\n\nchr_simple_Val   --- has columns   vpd_key, oid , category, description..et.c,     (almost has around 70 columns).    (primary key is  vpd_key and oid)\n\n\n\n\nThe rows mentioned below are after setting the row level security on those tables .. \n\n\n\n\ni.e,  after executing the function \n\n\n\n\n\n CALL vpd_filter(spv_vpd_key=>'XXXX',spv_mod_usr=>'XXXXX',spv_user_locale=>'en_XX',spv_prod_locale=>'XX');\n\n\n\n\npgwfc01q=> select count(*) from chr_simple_val;\n\n\n count\n-------\n 13158\n(1 row)\n\npgwfc01q=> select count(*) from chr_emp_position;\n count\n-------\n   228\n(1 row)\n\n\nThe primary key for the table chr_Simple_val  contains OID.   
Still not using the index.\n\nI'm sharing the explain plan over here..\n\npgwfc01q=> explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from chr_emp_position cep inner join chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID;\n                                                                                                             QUERY P\nLAN\n--------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------\n Hash Join  (cost=49299.91..51848.83 rows=651 width=42) (actual time=3512.692..3797.583 rows=228 loops=1)\n   Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n   ->  Seq Scan on chr_emp_position cep  (cost=0.00..2437.77 rows=436 width=11) (actual time=44.713..329.435 rows=22\n8 loops=1)\n         Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character varying, prod_locale_code))\n         Rows Removed by Filter: 3695\n   ->  Hash  (cost=49176.40..49176.40 rows=9881 width=31) (actual time=3467.907..3467.908\n rows=13158 loops=1)\n         Buckets: 16384  Batches: 1  Memory Usage: 1031kB\n         ->  Seq Scan on chr_simple_val ctc  (cost=0.00..49176.40 rows=9881 width=31) (actual time=2.191..3460.929 r\nows=13158 loops=1)\n               Filter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[])) AND f_sel_policy_ty_static(\nvpd_key) AND f_sel_policy_prod_locale((ctc.*)::character varying, prod_locale_code))\n\n\nPlanning\n Time: 0.297 ms\n Execution Time: 3797.768 ms\n(12 rows)\n\n\n\n\n\n\n\n\nIf\n i don't set the context and run as a root user the explain plan is as below..   And it executes in milliseconds even without the index having the full table scan.  \n\n\n\n\n\n I'm\n not sure if my filters are time consuming.  Most of the queries works except few.  We hadn't seen the problem in Oracle.  I'm not comparing between Oracle and Postgres here.   I see both are two different flavors. but trying to get my query runs less than\n 8seconds. I'm\n not sure why the index on chr_simple_val is not being used here  vpd_key,oid.   I'm confident if it uses index, it will/might  be faster as it is looking for 2 or  3 home departments based on oid. I'm\n not sure why even having the full scan it worked for the root user.   I'm\n not sure why the bitmap heap scan was not followed after setting the row level security.   
How to make the bitmap heap scan on chr_emp_position as i observed here.\n\n\nfyi.,\n\n\n\nRunning\n as a root user.\n\n\n\n\n\npgwfc01q=>\n explain analyze select cep.HOME_DEPT_OID,ctc.oid,ctc.category,ctc.code from xxxx.chr_emp_position cep inner join wfnsch001.chr_Simple_Val ctc on ctc.oid=cep.HOME_DEPT_OID and (ctc.vpd_key='COMMON' or ctc.vpd_key=cep.vpd_key) and cep.vpd_key='xxxxxxxxxx';\n\n\nQUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------\n-------------------\n Hash Join  (cost=5503.95..6742.82 rows=453 width=42) (actual time=131.241..154.201 rows=228 loops=1)\n   Hash Cond: ((cep.home_dept_oid)::text = (ctc.oid)::text)\n   Join Filter: (((ctc.vpd_key)::text = 'NG_COMMON'::text) OR ((ctc.vpd_key)::text = (cep.vpd_key)::text))\n   Rows Removed by Join Filter: 19770\n   ->  Bitmap Heap Scan on chr_emp_position cep  (cost=10.05..362.25 rows=228 width=28) (actual time=0.056..0.253 ro\nws=228 loops=1)\n         Recheck Cond: ((vpd_key)::text = 'xxxxxxxxxx'::text)\n         Heap Blocks: exact=26\n         ->  Bitmap Index Scan on uq1_chr_emp_position  (cost=0.00..9.99 rows=228 width=0) (actual time=0.041..0.041\n rows=228 loops=1)\n               Index Cond: ((vpd_key)::text = 'xxxxxxxxxx'::text)\n   ->  Hash  (cost=3600.29..3600.29 rows=88929 width=48) (actual time=130.826..130.826 rows=88929 loops=1)\n         Buckets: 65536 (originally 65536)  Batches: 4 (originally 2)  Memory Usage: 3585kB\n         ->  Seq Scan on chr_simple_val ctc  (cost=0.00..3600.29 rows=88929 width=48) (actual time=0.005..33.356 row\ns=88929 loops=1)\n Planning Time: 3.977 ms\n Execution Time: 154.535 ms\n(14 rows)\n\n\npgwfc01q=> select count(*) from wfnsch001.chr_emp_position;\n count\n-------\n  3923\n(1 row)\n\n\npgwfc01q=> select count(*) from wfnsch001.chr_Simple_Val;\n count\n-------\n 88929\n(1 row)\n\n \n\n\n\n\nI'm not sure if i'm thinking in the right way or not. (As of safety purpose, i have  rebuilded indexes, analyzed, did vaccum on those tables).   Sorry for the lengthy email and i'm trying to explain my best on this.\n\n\n\n\nThank you.\n\n\n\n\nRegards,\n\nRamesh G\n\n\n\n\n\n\n\n\nFrom: Michael Lewis <[email protected]>\nSent: Sunday, September 13, 2020 10:51 PM\nTo: Tom Lane <[email protected]>\nCc: Tomas Vondra <[email protected]>; Gopisetty, Ramesh <[email protected]>; [email protected] <[email protected]>\nSubject: Re: Performance Issue (Not using Index when joining two tables).\n \n\n\nFilter: (((\"current_user\"())::text <> ANY ('{wfnadmin,skipvpd}'::text[]))\n AND f_sel_policy_all(vpd_key, 'CH\nR_EMP_POSITION'::character varying) AND f_sel_policy_prod_locale((cep.*)::character\n varying, prod_locale_code))\n\n\nThis looks like some stuff for row level security perhaps. My understanding is limited, but perhaps those restrictions are influencing the planners access or reliance on stats.\n\n\nAlso, it would seem like you need the entire table since you don't have an explicit where clause. Why would scanning an index and then also visiting every row in the table be faster than just going directly to the table?", "msg_date": "Mon, 14 Sep 2020 03:18:26 +0000", "msg_from": "\"Gopisetty, Ramesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Issue (Not using Index when joining two tables)." }, { "msg_contents": "\"Gopisetty, Ramesh\" <[email protected]> writes:\n> Thanks for looking into the problem/issue. Let me give more details about the functions... 
Yes, we are using row level security.\n\nHm. If those expensive filter functions are being injected by RLS on the\ntarget tables (rather than by something like an intermediate view), then\nthe planner is constrained to ensure that they execute before any query\nconditions that it doesn't know to be \"leakproof\". So unless your join\noperator is leakproof, the shape of plan that you're hoping for will not\nbe allowed. Since you haven't mentioned anything about data types, it's\nhard to know whether that's the issue. (The hash condition seems to be\ntexteq, which is leakproof, but there are also casts involved which\nmight not be.)\n\nThe two queries you provided explain plans for are not the same, so\ncomparing their plans is a fairly pointless activity. *Of course*\nthe query runs faster when you restrict it to fetch fewer rows. The\noriginal query has no restriction clause that corresponds to the\nclauses being used for index conditions in the second query, so it's\nhardly a surprise that you do not get that plan, RLS or no RLS.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Sep 2020 13:40:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Issue (Not using Index when joining two tables)." } ]
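A sketch of the checks the reply above points at, assuming the filter functions follow the f_sel_policy_* naming seen in the plans (the two-argument signature in the ALTER FUNCTION line is a guess and must match the real definition):

-- Are the RLS filter functions marked leakproof, and what cost does the planner assume for them?
SELECT proname, proleakproof, procost
FROM pg_proc
WHERE proname LIKE 'f_sel_policy%';

-- Does forcing a nestloop produce the desired plan shape, and is it actually faster?
SET enable_hashjoin = off;
SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT cep.home_dept_oid, ctc.oid, ctc.category, ctc.code
FROM chr_emp_position cep
JOIN chr_simple_val ctc ON ctc.oid = cep.home_dept_oid;
RESET enable_hashjoin;
RESET enable_mergejoin;

-- Only if that plan turns out better: telling the planner the filter functions
-- are expensive may persuade it to postpone them where that is legal.
ALTER FUNCTION f_sel_policy_all(character varying, character varying) COST 10000;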
[ { "msg_contents": "Howdy folks,\n\nRecently i've run into a problem where autoanalyze is causing a query\nplan to flip over to using an index which is about 10x slower, and the\nproblem is fixed by running an alayze manually. some relevant info:\n\nUPDATE sleeping_intents SET\nraptor_after='2020-09-14T19:21:03.581106'::timestamp,\nstatus='requires_capture',\nupdated_at='2020-09-14T16:21:03.581104+00:00'::timestamptz WHERE\nsleeping_intents.id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid AND\nsleeping_intents.status = 'finna' RETURNING *;\n\nThe plan generated after autoanalyze is:\n\nUpdate on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual\ntime=57.945..57.945 rows=0 loops=1)\n Buffers: shared hit=43942\n -> Index Scan using\nsleeping_intents_status_created_at_raptor_after_idx on\nsleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual\ntime=57.943..57.943 rows=0 loops=1)\n Index Cond: (status = 'init'::text)\n Filter: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n Rows Removed by Filter: 1262\n Buffers: shared hit=43942\n Planning time: 0.145 ms\n Execution time: 57.981 ms\n\nafter i run analyze manually, the query plan is changed to this:\n\nUpdate on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual\ntime=0.023..0.023 rows=0 loops=1)\n Buffers: shared hit=7\n -> Index Scan using sleeping_intents_pkey on sleeping_intents\n(cost=0.57..8.59 rows=1 width=272) (actual time=0.022..0.022 rows=0\nloops=1)\n Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n Filter: (status = 'init'::text)\n Rows Removed by Filter: 1\n Buffers: shared hit=7\n Planning time: 0.092 ms\n Execution time: 0.066 ms\n\nNote that in the second query, it switches back to using the primary\nkey index, which does seem like the logically better choice, even\nthough it shows a higher estimated cost than the \"bad\" case\n(understanding the cost must change somewhere in the process, but\nthere no way to see it afaict).\n\nIn trying to determine why it switches, I dug up some likely useful info:\nIndex definitions:\n (20 GB) \"sleeping_intents_pkey\" PRIMARY KEY, btree (id)\n (37 GB) \"sleeping_intents_status_created_at_raptor_after_idx\" btree\n(status, created_at DESC, raptor_after DESC)\n\nBasic info on the table:\n> select relid::regclass, n_live_tup,n_mod_since_analyze,analyze_count,autoanalyze_count from pg_stat_user_tables where relname='sleeping_intents';\n relid | n_live_tup | n_mod_since_analyze | analyze_count |\nautoanalyze_count\n-----------------+------------+---------------------+---------------+-------------------\n sleeping_intents | 491171179 | 1939347 | 4 |\n 80\n\n(that num mods is in the last ~5 hours, the table is fairly active,\nalthough on a relatively small portion of the data)\n\nStatistics after manual analyze:\n tablename | attname | null_frac | avg_width |\nn_distinct | correlation | most_common_freqs\n-----------------+---------------+-----------+-----------+------------+-------------+--------------------------------------------------------\n sleeping_intents | id | 0 | 16 | -1\n| -0.00133045 | [null]\n sleeping_intents | status | 0 | 9 | 6\n| 0.848468 | {0.918343,0.0543667,0.0267567,0.000513333,1e-05,1e-05}\n sleeping_intents | created_at | 0 | 8 | -1\n| 0.993599 | [null]\n sleeping_intents | raptor_after | 0.0663433 | 8 | -0.933657\n| 0.99392 | [null]\n\nIn a previous go around with this table, I also increased the\nstatistics target for the id column to 1000, vs 100 which is the\ndatabase default.\n\nOriginally I was mostly interested in 
trying to understand why it\nwould choose something other than the non-pk index, which sort of\nfeels like a bug; what could be faster than seeking an individual\nentry in a pk index? There are cases where it might make sense, but\nthis doesn't seem like one (even accounting for the infrequency of the\nstatus we are looking for, which is 1e-05, the disparity in index size\nshould push it back to the pk imho, unless I am not thinking through\ncorrelation enough?).\n\nHowever, it also seems very odd that this problem occurs at all. In\nthe last couple of times this has happened, the manual analyze has\nbeen run within ~30-45 minutes of the auto-analyze, and while the data\nis changing, it isn't changing that rapidly that this should make a\nsignificant difference, but I don't see any other reason that\nautoanalyze would produce a different result than manual analyze.\n\nAll that said, any insight on the above two items would be great, but\nthe most immediate concern would be around suggestions for preventing\nthis from happening again?\n\nThanks in advance,\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Mon, 14 Sep 2020 19:11:12 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "autoanalyze creates bad plan, manual analyze fixes it?" }, { "msg_contents": "On Mon, Sep 14, 2020 at 07:11:12PM -0400, Robert Treat wrote:\n> Howdy folks,\n> \n> Recently i've run into a problem where autoanalyze is causing a query\n> plan to flip over to using an index which is about 10x slower, and the\n> problem is fixed by running an alayze manually. some relevant info:\n\nI think it's because 1) the costs and scan rowcounts are similar ; and, 2) the\nstats are probably near some threshold which causes the plan to change. I'm\nguessing if you run a manual ANALYZE 100 times, you'll sometimes get the bad\nplan. 
Maybe depending on the data visible at the time analyze is invoked.\n\n> UPDATE sleeping_intents SET\n> raptor_after='2020-09-14T19:21:03.581106'::timestamp,\n> status='requires_capture',\n> updated_at='2020-09-14T16:21:03.581104+00:00'::timestamptz WHERE\n> sleeping_intents.id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid AND\n> sleeping_intents.status = 'finna' RETURNING *;\n\nDo you mean status='init' ??\n\n> The plan generated after autoanalyze is:\n> \n> Update on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual time=57.945..57.945 rows=0 loops=1)\n> Buffers: shared hit=43942\n> -> Index Scan using sleeping_intents_status_created_at_raptor_after_idx on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual time=57.943..57.943 rows=0 loops=1)\n> Index Cond: (status = 'init'::text)\n> Filter: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> Rows Removed by Filter: 1262\n> Buffers: shared hit=43942\n> Planning time: 0.145 ms\n> Execution time: 57.981 ms\n> \n> after i run analyze manually, the query plan is changed to this:\n> \n> Update on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual time=0.023..0.023 rows=0 loops=1)\n> Buffers: shared hit=7\n> -> Index Scan using sleeping_intents_pkey on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual time=0.022..0.022 rows=0 loops=1)\n> Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> Filter: (status = 'init'::text)\n> Rows Removed by Filter: 1\n> Buffers: shared hit=7\n> Planning time: 0.092 ms\n> Execution time: 0.066 ms\n> \n> Note that in the second query, it switches back to using the primary\n> key index, which does seem like the logically better choice, even\n> though it shows a higher estimated cost than the \"bad\" case\n> (understanding the cost must change somewhere in the process, but\n> there no way to see it afaict).\n\nIf you SET enable_indexscan=off you can try to get an bitmap index scan, which\nwill reveal how much of the cost is attributed to the index component and how\nmuch to the heap. That might help to refine costs, which may help.\n\n> Statistics after manual analyze:\n> tablename | attname | null_frac | avg_width |\n> n_distinct | correlation | most_common_freqs\n> -----------------+---------------+-----------+-----------+------------+-------------+--------------------------------------------------------\n> sleeping_intents | id | 0 | 16 | -1\n> | -0.00133045 | [null]\n> sleeping_intents | status | 0 | 9 | 6\n> | 0.848468 | {0.918343,0.0543667,0.0267567,0.000513333,1e-05,1e-05}\n> sleeping_intents | created_at | 0 | 8 | -1\n> | 0.993599 | [null]\n> sleeping_intents | raptor_after | 0.0663433 | 8 | -0.933657\n> | 0.99392 | [null]\n> \n> In a previous go around with this table, I also increased the\n> statistics target for the id column to 1000, vs 100 which is the\n> database default.\n\nWhat about status ?\nI wonder if sometimes the sample doesn't include *any* rows for the 1e-5\nstatuses. So the planner would estimate the rowcount based on ndistinct and\nthe other frequencies. But if you rerun analyze, then it thinks it'll get one\nrow based on the sampled frequency of status. \n\nWhat postgres version, and what non-default settings ?\nMaybe you can run explain(settings,...).\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 14 Sep 2020 18:41:32 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autoanalyze creates bad plan, manual analyze fixes it?" }, { "msg_contents": "út 15. 9. 
2020 v 1:11 odesílatel Robert Treat <[email protected]> napsal:\n\n> Howdy folks,\n>\n> Recently i've run into a problem where autoanalyze is causing a query\n> plan to flip over to using an index which is about 10x slower, and the\n> problem is fixed by running an alayze manually. some relevant info:\n>\n> UPDATE sleeping_intents SET\n> raptor_after='2020-09-14T19:21:03.581106'::timestamp,\n> status='requires_capture',\n> updated_at='2020-09-14T16:21:03.581104+00:00'::timestamptz WHERE\n> sleeping_intents.id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid AND\n> sleeping_intents.status = 'finna' RETURNING *;\n>\n> The plan generated after autoanalyze is:\n>\n> Update on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual\n> time=57.945..57.945 rows=0 loops=1)\n> Buffers: shared hit=43942\n> -> Index Scan using\n> sleeping_intents_status_created_at_raptor_after_idx on\n> sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual\n> time=57.943..57.943 rows=0 loops=1)\n> Index Cond: (status = 'init'::text)\n> Filter: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> Rows Removed by Filter: 1262\n> Buffers: shared hit=43942\n> Planning time: 0.145 ms\n> Execution time: 57.981 ms\n>\n\nThis looks pretty strange - why for 1262 rows you need to read 43942 pages?\n\nCan you reindex this index. Maybe it is bloated.\n\nRegards\n\nPavel\n\n\n\n>\n> after i run analyze manually, the query plan is changed to this:\n>\n> Update on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual\n> time=0.023..0.023 rows=0 loops=1)\n> Buffers: shared hit=7\n> -> Index Scan using sleeping_intents_pkey on sleeping_intents\n> (cost=0.57..8.59 rows=1 width=272) (actual time=0.022..0.022 rows=0\n> loops=1)\n> Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> Filter: (status = 'init'::text)\n> Rows Removed by Filter: 1\n> Buffers: shared hit=7\n> Planning time: 0.092 ms\n> Execution time: 0.066 ms\n>\n> Note that in the second query, it switches back to using the primary\n> key index, which does seem like the logically better choice, even\n> though it shows a higher estimated cost than the \"bad\" case\n> (understanding the cost must change somewhere in the process, but\n> there no way to see it afaict).\n>\n> In trying to determine why it switches, I dug up some likely useful info:\n> Index definitions:\n> (20 GB) \"sleeping_intents_pkey\" PRIMARY KEY, btree (id)\n> (37 GB) \"sleeping_intents_status_created_at_raptor_after_idx\" btree\n> (status, created_at DESC, raptor_after DESC)\n>\n> Basic info on the table:\n> > select relid::regclass,\n> n_live_tup,n_mod_since_analyze,analyze_count,autoanalyze_count from\n> pg_stat_user_tables where relname='sleeping_intents';\n> relid | n_live_tup | n_mod_since_analyze | analyze_count |\n> autoanalyze_count\n>\n> -----------------+------------+---------------------+---------------+-------------------\n> sleeping_intents | 491171179 | 1939347 | 4 |\n> 80\n>\n> (that num mods is in the last ~5 hours, the table is fairly active,\n> although on a relatively small portion of the data)\n>\n> Statistics after manual analyze:\n> tablename | attname | null_frac | avg_width |\n> n_distinct | correlation | most_common_freqs\n>\n> -----------------+---------------+-----------+-----------+------------+-------------+--------------------------------------------------------\n> sleeping_intents | id | 0 | 16 | -1\n> | -0.00133045 | [null]\n> sleeping_intents | status | 0 | 9 | 6\n> | 0.848468 | {0.918343,0.0543667,0.0267567,0.000513333,1e-05,1e-05}\n> 
sleeping_intents | created_at | 0 | 8 | -1\n> | 0.993599 | [null]\n> sleeping_intents | raptor_after | 0.0663433 | 8 | -0.933657\n> | 0.99392 | [null]\n>\n> In a previous go around with this table, I also increased the\n> statistics target for the id column to 1000, vs 100 which is the\n> database default.\n>\n> Originally I was mostly interested in trying to understand why it\n> would choose something other than the non-pk index, which sort of\n> feels like a bug; what could be faster than seeking an individual\n> entry in a pk index? There are cases where it might make sense, but\n> this doesn't seem like one (even accounting for the infrequency of the\n> status we are looking for, which is 1e-05, the disparity in index size\n> should push it back to the pk imho, unless I am not thinking through\n> correlation enough?).\n>\n> However, it also seems very odd that this problem occurs at all. In\n> the last couple of times this has happened, the manual analyze has\n> been run within ~30-45 minutes of the auto-analyze, and while the data\n> is changing, it isn't changing that rapidly that this should make a\n> significant difference, but I don't see any other reason that\n> autoanalyze would produce a different result than manual analyze.\n>\n> All that said, any insight on the above two items would be great, but\n> the most immediate concern would be around suggestions for preventing\n> this from happening again?\n>\n> Thanks in advance,\n>\n> Robert Treat\n> https://xzilla.net\n>\n>\n>\n\nút 15. 9. 2020 v 1:11 odesílatel Robert Treat <[email protected]> napsal:Howdy folks,\n\nRecently i've run into a problem where autoanalyze is causing a query\nplan to flip over to using an index which is about 10x slower, and the\nproblem is fixed by running an alayze manually. some relevant info:\n\nUPDATE sleeping_intents SET\nraptor_after='2020-09-14T19:21:03.581106'::timestamp,\nstatus='requires_capture',\nupdated_at='2020-09-14T16:21:03.581104+00:00'::timestamptz WHERE\nsleeping_intents.id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid AND\nsleeping_intents.status = 'finna' RETURNING *;\n\nThe plan generated after autoanalyze is:\n\nUpdate on sleeping_intents  (cost=0.70..7.11 rows=1 width=272) (actual\ntime=57.945..57.945 rows=0 loops=1)\n   Buffers: shared hit=43942\n   ->  Index Scan using\nsleeping_intents_status_created_at_raptor_after_idx on\nsleeping_intents  (cost=0.70..7.11 rows=1 width=272) (actual\ntime=57.943..57.943 rows=0 loops=1)\n         Index Cond: (status = 'init'::text)\n         Filter: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n         Rows Removed by Filter: 1262\n         Buffers: shared hit=43942\n Planning time: 0.145 ms\n Execution time: 57.981 msThis looks pretty strange - why for 1262 rows you need to read 43942 pages?Can you reindex this index. 
Maybe it is bloated.RegardsPavel \n\nafter i run analyze manually, the query plan is changed to this:\n\nUpdate on sleeping_intents  (cost=0.57..8.59 rows=1 width=272) (actual\ntime=0.023..0.023 rows=0 loops=1)\n   Buffers: shared hit=7\n   ->  Index Scan using sleeping_intents_pkey on sleeping_intents\n(cost=0.57..8.59 rows=1 width=272) (actual time=0.022..0.022 rows=0\nloops=1)\n         Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n         Filter: (status = 'init'::text)\n         Rows Removed by Filter: 1\n         Buffers: shared hit=7\n Planning time: 0.092 ms\n Execution time: 0.066 ms\n\nNote that in the second query, it switches back to using the primary\nkey index, which does seem like the logically better choice, even\nthough it shows a higher estimated cost than the \"bad\" case\n(understanding the cost must change somewhere in the process, but\nthere no way to see it afaict).\n\nIn trying to determine why it switches, I dug up some likely useful info:\nIndex definitions:\n (20 GB) \"sleeping_intents_pkey\" PRIMARY KEY, btree (id)\n (37 GB) \"sleeping_intents_status_created_at_raptor_after_idx\" btree\n(status, created_at DESC, raptor_after DESC)\n\nBasic info on the table:\n> select relid::regclass, n_live_tup,n_mod_since_analyze,analyze_count,autoanalyze_count from pg_stat_user_tables where relname='sleeping_intents';\n      relid      | n_live_tup | n_mod_since_analyze | analyze_count |\nautoanalyze_count\n-----------------+------------+---------------------+---------------+-------------------\n sleeping_intents |  491171179 |             1939347 |             4 |\n               80\n\n(that num mods is in the last ~5 hours, the table is fairly active,\nalthough on a relatively small portion of the data)\n\nStatistics after manual analyze:\n       tablename    |    attname    | null_frac | avg_width |\nn_distinct | correlation |                   most_common_freqs\n-----------------+---------------+-----------+-----------+------------+-------------+--------------------------------------------------------\n sleeping_intents | id            |         0 |        16 |         -1\n| -0.00133045 | [null]\n sleeping_intents | status        |         0 |         9 |          6\n|    0.848468 | {0.918343,0.0543667,0.0267567,0.000513333,1e-05,1e-05}\n sleeping_intents | created_at    |         0 |         8 |         -1\n|    0.993599 | [null]\n sleeping_intents | raptor_after | 0.0663433 |         8 |  -0.933657\n|     0.99392 | [null]\n\nIn a previous go around with this table, I also increased the\nstatistics target for the id column to 1000, vs 100 which is the\ndatabase default.\n\nOriginally I was mostly interested in trying to understand why it\nwould choose something other than the non-pk index, which sort of\nfeels like a bug; what could be faster than seeking an individual\nentry in a pk index? There are cases where it might make sense, but\nthis doesn't seem like one (even accounting for the infrequency of the\nstatus we are looking for, which is 1e-05, the disparity in index size\nshould push it back to the pk imho, unless I am not thinking through\ncorrelation enough?).\n\nHowever, it also seems very odd that this problem occurs at all. 
In\nthe last couple of times this has happened, the manual analyze has\nbeen run within ~30-45 minutes of the auto-analyze, and while the data\nis changing, it isn't changing that rapidly that this should make a\nsignificant difference, but I don't see any other reason that\nautoanalyze would produce a different result than manual analyze.\n\nAll that said, any insight on the above two items would be great, but\nthe most immediate concern would be around suggestions for preventing\nthis from happening again?\n\nThanks in advance,\n\nRobert Treat\nhttps://xzilla.net", "msg_date": "Tue, 15 Sep 2020 06:53:54 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autoanalyze creates bad plan, manual analyze fixes it?" }, { "msg_contents": "On Mon, Sep 14, 2020 at 7:41 PM Justin Pryzby <[email protected]> wrote:\n>\n> On Mon, Sep 14, 2020 at 07:11:12PM -0400, Robert Treat wrote:\n> > Howdy folks,\n> >\n> > Recently i've run into a problem where autoanalyze is causing a query\n> > plan to flip over to using an index which is about 10x slower, and the\n> > problem is fixed by running an alayze manually. some relevant info:\n>\n> I think it's because 1) the costs and scan rowcounts are similar ; and, 2) the\n> stats are probably near some threshold which causes the plan to change. I'm\n> guessing if you run a manual ANALYZE 100 times, you'll sometimes get the bad\n> plan. Maybe depending on the data visible at the time analyze is invoked.\n>\n\nI've been thinking to try to capture statistics info in the bad case,\nI wonder if I could reproduce the situation that way.\n\n> > UPDATE sleeping_intents SET\n> > raptor_after='2020-09-14T19:21:03.581106'::timestamp,\n> > status='requires_capture',\n> > updated_at='2020-09-14T16:21:03.581104+00:00'::timestamptz WHERE\n> > sleeping_intents.id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid AND\n> > sleeping_intents.status = 'finna' RETURNING *;\n>\n> Do you mean status='init' ??\nYes, sorry, was playing around with different status's and copy/paste error.\n>\n> > The plan generated after autoanalyze is:\n> >\n> > Update on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual time=57.945..57.945 rows=0 loops=1)\n> > Buffers: shared hit=43942\n> > -> Index Scan using sleeping_intents_status_created_at_raptor_after_idx on sleeping_intents (cost=0.70..7.11 rows=1 width=272) (actual time=57.943..57.943 rows=0 loops=1)\n> > Index Cond: (status = 'init'::text)\n> > Filter: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> > Rows Removed by Filter: 1262\n> > Buffers: shared hit=43942\n> > Planning time: 0.145 ms\n> > Execution time: 57.981 ms\n> >\n> > after i run analyze manually, the query plan is changed to this:\n> >\n> > Update on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual time=0.023..0.023 rows=0 loops=1)\n> > Buffers: shared hit=7\n> > -> Index Scan using sleeping_intents_pkey on sleeping_intents (cost=0.57..8.59 rows=1 width=272) (actual time=0.022..0.022 rows=0 loops=1)\n> > Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n> > Filter: (status = 'init'::text)\n> > Rows Removed by Filter: 1\n> > Buffers: shared hit=7\n> > Planning time: 0.092 ms\n> > Execution time: 0.066 ms\n> >\n> > Note that in the second query, it switches back to using the primary\n> > key index, which does seem like the logically better choice, even\n> > though it shows a higher estimated cost than the \"bad\" case\n> > (understanding the cost must change somewhere in the process, but\n> > there no 
way to see it afaict).\n>\n> If you SET enable_indexscan=off you can try to get an bitmap index scan, which\n> will reveal how much of the cost is attributed to the index component and how\n> much to the heap. That might help to refine costs, which may help.\n>\n\nI'm not quite sure what your getting at, but took a look and got this\nsurprising plan:\n\nUpdate on sleeping_intents (cost=4.58..8.60 rows=1 width=272) (actual\ntime=0.025..0.025 rows=0 loops=1)\n Buffers: shared hit=6\n -> Bitmap Heap Scan on sleeping_intents (cost=4.58..8.60 rows=1\nwidth=272) (actual time=0.025..0.025 rows=0 loops=1)\n Recheck Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n Filter: (status = 'init'::text)\n Rows Removed by Filter: 1\n Heap Blocks: exact=2\n Buffers: shared hit=6\n -> Bitmap Index Scan on sleeping_intents_pkey\n(cost=0.00..4.58 rows=1 width=0) (actual time=0.017..0.017 rows=3\nloops=1)\n Index Cond: (id = 'r2d2dcc0-8a44-4d19-c3p0-28522233b836'::uuid)\n Buffers: shared hit=4\n Planning time: 1.170 ms\n Execution time: 0.063 ms\n\nThe one thing about this is that these are 0 update runs, because as\nnoted the data is always changing. I do think it's instructive to see\nthe plans, but in this case it feels a bit unfair.\n\n> > Statistics after manual analyze:\n> > tablename | attname | null_frac | avg_width |\n> > n_distinct | correlation | most_common_freqs\n> > -----------------+---------------+-----------+-----------+------------+-------------+--------------------------------------------------------\n> > sleeping_intents | id | 0 | 16 | -1\n> > | -0.00133045 | [null]\n> > sleeping_intents | status | 0 | 9 | 6\n> > | 0.848468 | {0.918343,0.0543667,0.0267567,0.000513333,1e-05,1e-05}\n> > sleeping_intents | created_at | 0 | 8 | -1\n> > | 0.993599 | [null]\n> > sleeping_intents | raptor_after | 0.0663433 | 8 | -0.933657\n> > | 0.99392 | [null]\n> >\n> > In a previous go around with this table, I also increased the\n> > statistics target for the id column to 1000, vs 100 which is the\n> > database default.\n>\n> What about status ?\n> I wonder if sometimes the sample doesn't include *any* rows for the 1e-5\n> statuses. So the planner would estimate the rowcount based on ndistinct and\n> the other frequencies. But if you rerun analyze, then it thinks it'll get one\n> row based on the sampled frequency of status.\n>\n\nThis is on my list to try (there aren't many other options it seems);\nI guess if the theory is that we are *that* close to the selectivity\nedge that any random analyze might push it one way or the other,\ngiving it more data could help make that less volatile. (otoh, this\nassumes the problem is that the times it is bad are because it doesn't\nsee something it should, and not that it does, in which case giving it\nmore info will push it more towards the bad.\n\n> What postgres version, and what non-default settings ?\n> Maybe you can run explain(settings,...).\n>\n\nSorry, can't, this is 10.11, but I don't think there are any relevant\nchanges outside of what I've mentioned.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n", "msg_date": "Tue, 15 Sep 2020 01:21:08 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autoanalyze creates bad plan, manual analyze fixes it?" } ]
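One mitigation discussed in the autoanalyze thread above can be written out concretely: raise the per-column statistics target on the skewed status column so that a routine ANALYZE is more likely to sample the rare status values, then inspect what the sampler actually recorded. This is only a sketch; the table and column names (sleeping_intents, status) come from the thread itself, and the target of 1000 simply mirrors what was already done for the id column rather than being a confirmed fix.

-- raise the sample size for the skewed column and re-collect stats
ALTER TABLE sleeping_intents ALTER COLUMN status SET STATISTICS 1000;
ANALYZE VERBOSE sleeping_intents;

-- check whether the rare statuses now appear in the MCV list, and at what frequency
SELECT most_common_vals, most_common_freqs, n_distinct
FROM pg_stats
WHERE tablename = 'sleeping_intents'
  AND attname = 'status';

If the bad plan reappears, comparing a pg_stats snapshot taken right after an autoanalyze with one taken after a manual ANALYZE would show whether the sampled frequency of the rare status is what flips the plan, which is the capture-the-statistics idea mentioned in the thread.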
[ { "msg_contents": "Hi,\nI'm running one query, and I created two types of index one is composite and the other one with single column one and query planner showing almost the same cost for both index bitmap scan, I'm not sure which is appropriate to keep in production tables.\n\nexplain analyze SELECT BAN, SUBSCRIBER_NO, ACTV_CODE, ACTV_RSN_CODE, EFFECTIVE_DATE, TRX_SEQ_NO, LOAD_DTTM, rnk AS RNK  FROM ( SELECT CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE, CT.EFFECTIVE_DATE, CT.TRX_SEQ_NO, CT.LOAD_DTTM, row_number() over (partition by CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE order by CT.TRX_SEQ_NO DESC, CT.LOAD_DTTM DESC) rnk FROM SAM_T.L_CSM_TRANSACTIONS CT WHERE CT.ACTV_CODE in ( 'NAC', 'CAN', 'RSP', 'RCL') AND LOAD_DTTM::DATE >= CURRENT_DATE - 7 ) S WHERE RNK = 1 1st Index with single column: \nCREATE INDEX l_csm_transactions_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (load_dttm ASC NULLS LAST)\n\n /*\"Subquery Scan on s  (cost=32454.79..33555.15 rows=129 width=61) (actual time=56.473..56.473 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  ->  WindowAgg  (cost=32454.79..33231.52 rows=25891 width=61) (actual time=56.472..56.472 rows=0 loops=1)\"\"        ->  Sort  (cost=32454.79..32519.51 rows=25891 width=53) (actual time=56.470..56.470 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1271.13..30556.96 rows=25891 width=53) (actual time=56.462..56.462 rows=0 loops=1)\"\"                    Recheck Cond: ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[]))\"\"                    Filter: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"                    Rows Removed by Filter: 79137\"\"                    Heap Blocks: exact=23976\"\"                    ->  Bitmap Index Scan on l_csm_transactions_actv_code_idx1  (cost=0.00..1264.66 rows=77673 width=0) (actual time=6.002..6.002 rows=79137 loops=1)\"\"Planning Time: 0.270 ms\"\"Execution Time: 56.639 ms\"*/\n2nd one with composite and partial index:\nCREATE INDEX l_csm_transactions_actv_code_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (actv_code COLLATE pg_catalog.\"default\" ASC NULLS LAST, (load_dttm::date) DESC NULLS FIRST)    WHERE actv_code::text = ANY (ARRAY['NAC'::character varying, 'CAN'::character varying, 'RSP'::character varying, 'RCL'::character varying]::text[]);\n\n/*\"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.256..2.256 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.255..2.255 rows=0 loops=1)\"\"        ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.254..2.254 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.247..2.247 rows=0 loops=1)\"\"                    Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\"\"                    ->  Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.244..2.245 rows=0 loops=1)\"\"              
            Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"Planning Time: 0.438 ms\"\"Execution Time: 2.303 ms\"*/\n\n\nPlease suggest me the best choice.\nAppritiated the responce. \n\nThanks,Rj\n\n\nHi,I'm running one query, and I created two types of index one is composite and the other one with single column one and query planner showing almost the same cost for both index bitmap scan, I'm not sure which is appropriate to keep in production tables.explain analyze SELECT BAN, SUBSCRIBER_NO, ACTV_CODE, ACTV_RSN_CODE, EFFECTIVE_DATE, TRX_SEQ_NO, LOAD_DTTM, rnk AS RNK  FROM ( SELECT CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE, CT.EFFECTIVE_DATE, CT.TRX_SEQ_NO, CT.LOAD_DTTM, row_number() over (partition by CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE order by CT.TRX_SEQ_NO DESC, CT.LOAD_DTTM DESC) rnk FROM SAM_T.L_CSM_TRANSACTIONS CT WHERE CT.ACTV_CODE in ( 'NAC', 'CAN', 'RSP', 'RCL') AND LOAD_DTTM::DATE >= CURRENT_DATE - 7 ) S WHERE RNK = 1 1st Index with single column: CREATE INDEX l_csm_transactions_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (load_dttm ASC NULLS LAST) /*\"Subquery Scan on s  (cost=32454.79..33555.15 rows=129 width=61) (actual time=56.473..56.473 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  ->  WindowAgg  (cost=32454.79..33231.52 rows=25891 width=61) (actual time=56.472..56.472 rows=0 loops=1)\"\"        ->  Sort  (cost=32454.79..32519.51 rows=25891 width=53) (actual time=56.470..56.470 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1271.13..30556.96 rows=25891 width=53) (actual time=56.462..56.462 rows=0 loops=1)\"\"                    Recheck Cond: ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[]))\"\"                    Filter: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"                    Rows Removed by Filter: 79137\"\"                    Heap Blocks: exact=23976\"\"                    ->  Bitmap Index Scan on l_csm_transactions_actv_code_idx1  (cost=0.00..1264.66 rows=77673 width=0) (actual time=6.002..6.002 rows=79137 loops=1)\"\"Planning Time: 0.270 ms\"\"Execution Time: 56.639 ms\"*/2nd one with composite and partial index:CREATE INDEX l_csm_transactions_actv_code_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (actv_code COLLATE pg_catalog.\"default\" ASC NULLS LAST, (load_dttm::date) DESC NULLS FIRST)    WHERE actv_code::text = ANY (ARRAY['NAC'::character varying, 'CAN'::character varying, 'RSP'::character varying, 'RCL'::character varying]::text[]);/*\"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.256..2.256 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.255..2.255 rows=0 loops=1)\"\"        ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.254..2.254 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.247..2.247 rows=0 loops=1)\"\"                    Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\"\"                   
 ->  Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.244..2.245 rows=0 loops=1)\"\"                          Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"Planning Time: 0.438 ms\"\"Execution Time: 2.303 ms\"*/Please suggest me the best choice.Appritiated the responce. Thanks,Rj", "msg_date": "Tue, 15 Sep 2020 22:33:24 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Single column vs composite partial index" }, { "msg_contents": "On Tue, Sep 15, 2020 at 10:33:24PM +0000, Nagaraj Raj wrote:\n> Hi,\n> I'm running one query, and I created two types of index one is composite and the other one with single column one and query planner showing almost the same cost for both index bitmap scan, I'm not sure which is appropriate to keep in production tables.\n\nYou're asking whether to keep one index or the other ?\nIt depends on *all* the queries you'll run, not just this one.\nThe most general thing to do would be to make multiple, single column indexes,\nand let the planner figure out which is best (it might bitmap-AND or -OR them\ntogether).\n\nHowever, for this query, you can see the 2nd query is actually faster (2ms vs\n56ms) - the cost is an estimate based on a model.\n\nThe actual performance might change based on thing like maintenance like\nreindex, cluster, vacuum, hardware, and DB state (like cached blocks).\nAnd postgres version.\n\nThe rowcount estimates are bad. Maybe you need to ANALYZE the table (or adjust\nthe autoanalyze thresholds), or evaluate if there's a correlation between\ncolumns. Bad rowcount estimates beget bad plans and poor performance.\n\nAlso: you could use explain(ANALYZE,BUFFERS).\nI think the fast plan would be possible with a tiny BRIN index on load_dttm.\n(Possibly combined indexes on actv_code or others).\nIf you also have a btree index on time, then you can CLUSTER on it (and\nanalyze) and it might improve that plan further (but would affect other\nqueries, too).\n\n> explain analyze SELECT BAN, SUBSCRIBER_NO, ACTV_CODE, ACTV_RSN_CODE, EFFECTIVE_DATE, TRX_SEQ_NO, LOAD_DTTM, rnk AS RNK� FROM ( SELECT CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE, CT.EFFECTIVE_DATE, CT.TRX_SEQ_NO, CT.LOAD_DTTM, row_number() over (partition by CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE order by CT.TRX_SEQ_NO DESC, CT.LOAD_DTTM DESC) rnk FROM SAM_T.L_CSM_TRANSACTIONS CT WHERE CT.ACTV_CODE in ( 'NAC', 'CAN', 'RSP', 'RCL') AND LOAD_DTTM::DATE >= CURRENT_DATE - 7 ) S WHERE RNK = 1\n\n> 1st Index with single column: \n> CREATE INDEX l_csm_transactions_load_dttm_idx1� � ON sam_t.l_csm_transactions USING btree� � (load_dttm ASC NULLS LAST)\n\n> /*\"Subquery Scan on s� (cost=32454.79..33555.15 rows=129 width=61) (actual time=56.473..56.473 rows=0 loops=1)\n> � Filter: (s.rnk = 1)\n> � ->� WindowAgg� (cost=32454.79..33231.52 rows=25891 width=61) (actual time=56.472..56.472 rows=0 loops=1)\n> � � � � ->� Sort� (cost=32454.79..32519.51 rows=25891 width=53) (actual time=56.470..56.470 rows=0 loops=1)\n> � � � � � � � Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\n> � � � � � � � Sort Method: quicksort� Memory: 25kB\n> � � � � � � � ->� Bitmap Heap Scan on l_csm_transactions ct� (cost=1271.13..30556.96 rows=25891 width=53) (actual time=56.462..56.462 rows=0 loops=1)\n> � � � � � � � � � � Recheck Cond: ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[]))\n> � � � � � � � � � � 
Filter: ((load_dttm)::date >= (CURRENT_DATE - 7))\n> � � � � � � � � � � Rows Removed by Filter: 79137\n> � � � � � � � � � � Heap Blocks: exact=23976\n> � � � � � � � � � � ->� Bitmap Index Scan on l_csm_transactions_actv_code_idx1� (cost=0.00..1264.66 rows=77673 width=0) (actual time=6.002..6.002 rows=79137 loops=1)\n> Planning Time: 0.270 ms\n> Execution Time: 56.639 ms\"*/\n\n> 2nd one with composite and partial index:\n> CREATE INDEX l_csm_transactions_actv_code_load_dttm_idx1� � ON sam_t.l_csm_transactions USING btree� � (actv_code COLLATE pg_catalog.\"default\" ASC NULLS LAST, (load_dttm::date) DESC NULLS FIRST)� � WHERE actv_code::text = ANY (ARRAY['NAC'::character varying, 'CAN'::character varying, 'RSP'::character varying, 'RCL'::character varying]::text[]);\n> \n> /*\"Subquery Scan on s� (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.256..2.256 rows=0 loops=1)\n> � Filter: (s.rnk = 1)\n> � ->� WindowAgg� (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.255..2.255 rows=0 loops=1)\n> � � � � ->� Sort� (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.254..2.254 rows=0 loops=1)\n> � � � � � � � Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\n> � � � � � � � Sort Method: quicksort� Memory: 25kB\n> � � � � � � � ->� Bitmap Heap Scan on l_csm_transactions ct� (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.247..2.247 rows=0 loops=1)\n> � � � � � � � � � � Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\n> � � � � � � � � � � ->� Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1� (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.244..2.245 rows=0 loops=1)\n> � � � � � � � � � � � � � Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\n> Planning Time: 0.438 ms\n> Execution Time: 2.303 ms\"*/\n\n\n", "msg_date": "Tue, 15 Sep 2020 23:18:35 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Single column vs composite partial index" }, { "msg_contents": "> You're asking whether to keep one index or the other?\nMy ask is which index can be used for the mentioned query in production for better IO\n> It depends on *all* the queries you'll run, not just this one.\nI'm more concerned about this specific query, this has been using in one block stored procedure, so it will be run more often on the table. 
\nexplain(ANALYZE, BUFFERS) output: \n\n\"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.615..2.615 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  Buffers: shared hit=218\"\"  ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.614..2.615 rows=0 loops=1)\"\"        Buffers: shared hit=218\"\"        ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.613..2.613 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              Buffers: shared hit=218\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.605..2.605 rows=0 loops=1)\"\"                    Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\"\"                    Buffers: shared hit=218\"\"                    ->  Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.602..2.602 rows=0 loops=1)\"\"                          Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"                          Buffers: shared hit=218\"\"Planning Time: 0.374 ms\"\"Execution Time: 2.661 ms\"\n\n\n>The actual performance might change based on thing like maintenance like\n>reindex, cluster, vacuum, hardware, and DB state (like cached blocks).\nNote: Stats are up to date\n> And Postgres version.\n\nPostgreSQL 11.7 running on RedHat \n\nThanks,Rj\n On Tuesday, September 15, 2020, 09:18:55 PM PDT, Justin Pryzby <[email protected]> wrote: \n \n On Tue, Sep 15, 2020 at 10:33:24PM +0000, Nagaraj Raj wrote:\n> Hi,\n> I'm running one query, and I created two types of index one is composite and the other one with single column one and query planner showing almost the same cost for both index bitmap scan, I'm not sure which is appropriate to keep in production tables.\n\nYou're asking whether to keep one index or the other ?\nIt depends on *all* the queries you'll run, not just this one.\nThe most general thing to do would be to make multiple, single column indexes,\nand let the planner figure out which is best (it might bitmap-AND or -OR them\ntogether).\n\nHowever, for this query, you can see the 2nd query is actually faster (2ms vs\n56ms) - the cost is an estimate based on a model.\n\nThe actual performance might change based on thing like maintenance like\nreindex, cluster, vacuum, hardware, and DB state (like cached blocks).\nAnd postgres version.\n\nThe rowcount estimates are bad.  Maybe you need to ANALYZE the table (or adjust\nthe autoanalyze thresholds), or evaluate if there's a correlation between\ncolumns.  
Bad rowcount estimates beget bad plans and poor performance.\n\nAlso: you could use explain(ANALYZE,BUFFERS).\nI think the fast plan would be possible with a tiny BRIN index on load_dttm.\n(Possibly combined indexes on actv_code or others).\nIf you also have a btree index on time, then you can CLUSTER on it (and\nanalyze) and it might improve that plan further (but would affect other\nqueries, too).\n\n> explain analyze SELECT BAN, SUBSCRIBER_NO, ACTV_CODE, ACTV_RSN_CODE, EFFECTIVE_DATE, TRX_SEQ_NO, LOAD_DTTM, rnk AS RNK  FROM ( SELECT CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE, CT.EFFECTIVE_DATE, CT.TRX_SEQ_NO, CT.LOAD_DTTM, row_number() over (partition by CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE order by CT.TRX_SEQ_NO DESC, CT.LOAD_DTTM DESC) rnk FROM SAM_T.L_CSM_TRANSACTIONS CT WHERE CT.ACTV_CODE in ( 'NAC', 'CAN', 'RSP', 'RCL') AND LOAD_DTTM::DATE >= CURRENT_DATE - 7 ) S WHERE RNK = 1\n\n> 1st Index with single column: \n> CREATE INDEX l_csm_transactions_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (load_dttm ASC NULLS LAST)\n\n>  /*\"Subquery Scan on s  (cost=32454.79..33555.15 rows=129 width=61) (actual time=56.473..56.473 rows=0 loops=1)\n>    Filter: (s.rnk = 1)\n>    ->  WindowAgg  (cost=32454.79..33231.52 rows=25891 width=61) (actual time=56.472..56.472 rows=0 loops=1)\n>          ->  Sort  (cost=32454.79..32519.51 rows=25891 width=53) (actual time=56.470..56.470 rows=0 loops=1)\n>                Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\n>                Sort Method: quicksort  Memory: 25kB\n>                ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1271.13..30556.96 rows=25891 width=53) (actual time=56.462..56.462 rows=0 loops=1)\n>                      Recheck Cond: ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[]))\n>                      Filter: ((load_dttm)::date >= (CURRENT_DATE - 7))\n>                      Rows Removed by Filter: 79137\n>                      Heap Blocks: exact=23976\n>                      ->  Bitmap Index Scan on l_csm_transactions_actv_code_idx1  (cost=0.00..1264.66 rows=77673 width=0) (actual time=6.002..6.002 rows=79137 loops=1)\n>  Planning Time: 0.270 ms\n>  Execution Time: 56.639 ms\"*/\n\n> 2nd one with composite and partial index:\n> CREATE INDEX l_csm_transactions_actv_code_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (actv_code COLLATE pg_catalog.\"default\" ASC NULLS LAST, (load_dttm::date) DESC NULLS FIRST)    WHERE actv_code::text = ANY (ARRAY['NAC'::character varying, 'CAN'::character varying, 'RSP'::character varying, 'RCL'::character varying]::text[]);\n> \n> /*\"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.256..2.256 rows=0 loops=1)\n>    Filter: (s.rnk = 1)\n>    ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.255..2.255 rows=0 loops=1)\n>          ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.254..2.254 rows=0 loops=1)\n>                Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\n>                Sort Method: quicksort  Memory: 25kB\n>                ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.247..2.247 rows=0 loops=1)\n>                      Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\n>                      -> 
 Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.244..2.245 rows=0 loops=1)\n>                            Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\n>  Planning Time: 0.438 ms\n>  Execution Time: 2.303 ms\"*/\n\n\n \n\n> You're asking whether to keep one index or the other?My ask is which index can be used for the mentioned query in production for better IO> It depends on *all* the queries you'll run, not just this one.I'm more concerned about this specific query, this has been using in one block stored procedure, so it will be run more often on the table. explain(ANALYZE, BUFFERS) output: \"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.615..2.615 rows=0 loops=1)\"\"  Filter: (s.rnk = 1)\"\"  Buffers: shared hit=218\"\"  ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.614..2.615 rows=0 loops=1)\"\"        Buffers: shared hit=218\"\"        ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.613..2.613 rows=0 loops=1)\"\"              Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC\"\"              Sort Method: quicksort  Memory: 25kB\"\"              Buffers: shared hit=218\"\"              ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.605..2.605 rows=0 loops=1)\"\"                    Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))\"\"                    Buffers: shared hit=218\"\"                    ->  Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  (cost=0.00..1442.85 rows=25891 width=0) (actual time=2.602..2.602 rows=0 loops=1)\"\"                          Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\"                          Buffers: shared hit=218\"\"Planning Time: 0.374 ms\"\"Execution Time: 2.661 ms\">The actual performance might change based on thing like maintenance like>reindex, cluster, vacuum, hardware, and DB state (like cached blocks).Note: Stats are up to date> And Postgres version.PostgreSQL 11.7 running on RedHat Thanks,Rj\n\n\n\n On Tuesday, September 15, 2020, 09:18:55 PM PDT, Justin Pryzby <[email protected]> wrote:\n \n\n\nOn Tue, Sep 15, 2020 at 10:33:24PM +0000, Nagaraj Raj wrote:> Hi,> I'm running one query, and I created two types of index one is composite and the other one with single column one and query planner showing almost the same cost for both index bitmap scan, I'm not sure which is appropriate to keep in production tables.You're asking whether to keep one index or the other ?It depends on *all* the queries you'll run, not just this one.The most general thing to do would be to make multiple, single column indexes,and let the planner figure out which is best (it might bitmap-AND or -OR themtogether).However, for this query, you can see the 2nd query is actually faster (2ms vs56ms) - the cost is an estimate based on a model.The actual performance might change based on thing like maintenance likereindex, cluster, vacuum, hardware, and DB state (like cached blocks).And postgres version.The rowcount estimates are bad.  Maybe you need to ANALYZE the table (or adjustthe autoanalyze thresholds), or evaluate if there's a correlation betweencolumns.  
Bad rowcount estimates beget bad plans and poor performance.Also: you could use explain(ANALYZE,BUFFERS).I think the fast plan would be possible with a tiny BRIN index on load_dttm.(Possibly combined indexes on actv_code or others).If you also have a btree index on time, then you can CLUSTER on it (andanalyze) and it might improve that plan further (but would affect otherqueries, too).> explain analyze SELECT BAN, SUBSCRIBER_NO, ACTV_CODE, ACTV_RSN_CODE, EFFECTIVE_DATE, TRX_SEQ_NO, LOAD_DTTM, rnk AS RNK  FROM ( SELECT CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE, CT.EFFECTIVE_DATE, CT.TRX_SEQ_NO, CT.LOAD_DTTM, row_number() over (partition by CT.BAN, CT.SUBSCRIBER_NO, CT.ACTV_CODE, CT.ACTV_RSN_CODE order by CT.TRX_SEQ_NO DESC, CT.LOAD_DTTM DESC) rnk FROM SAM_T.L_CSM_TRANSACTIONS CT WHERE CT.ACTV_CODE in ( 'NAC', 'CAN', 'RSP', 'RCL') AND LOAD_DTTM::DATE >= CURRENT_DATE - 7 ) S WHERE RNK = 1> 1st Index with single column: > CREATE INDEX l_csm_transactions_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (load_dttm ASC NULLS LAST)>  /*\"Subquery Scan on s  (cost=32454.79..33555.15 rows=129 width=61) (actual time=56.473..56.473 rows=0 loops=1)>    Filter: (s.rnk = 1)>    ->  WindowAgg  (cost=32454.79..33231.52 rows=25891 width=61) (actual time=56.472..56.472 rows=0 loops=1)>          ->  Sort  (cost=32454.79..32519.51 rows=25891 width=53) (actual time=56.470..56.470 rows=0 loops=1)>                Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC>                Sort Method: quicksort  Memory: 25kB>                ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1271.13..30556.96 rows=25891 width=53) (actual time=56.462..56.462 rows=0 loops=1)>                      Recheck Cond: ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[]))>                      Filter: ((load_dttm)::date >= (CURRENT_DATE - 7))>                      Rows Removed by Filter: 79137>                      Heap Blocks: exact=23976>                      ->  Bitmap Index Scan on l_csm_transactions_actv_code_idx1  (cost=0.00..1264.66 rows=77673 width=0) (actual time=6.002..6.002 rows=79137 loops=1)>  Planning Time: 0.270 ms>  Execution Time: 56.639 ms\"*/> 2nd one with composite and partial index:> CREATE INDEX l_csm_transactions_actv_code_load_dttm_idx1    ON sam_t.l_csm_transactions USING btree    (actv_code COLLATE pg_catalog.\"default\" ASC NULLS LAST, (load_dttm::date) DESC NULLS FIRST)    WHERE actv_code::text = ANY (ARRAY['NAC'::character varying, 'CAN'::character varying, 'RSP'::character varying, 'RCL'::character varying]::text[]);> > /*\"Subquery Scan on s  (cost=32023.15..33123.52 rows=129 width=61) (actual time=2.256..2.256 rows=0 loops=1)>    Filter: (s.rnk = 1)>    ->  WindowAgg  (cost=32023.15..32799.88 rows=25891 width=61) (actual time=2.255..2.255 rows=0 loops=1)>          ->  Sort  (cost=32023.15..32087.88 rows=25891 width=53) (actual time=2.254..2.254 rows=0 loops=1)>                Sort Key: ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code, ct.trx_seq_no DESC, ct.load_dttm DESC>                Sort Method: quicksort  Memory: 25kB>                ->  Bitmap Heap Scan on l_csm_transactions ct  (cost=1449.32..30125.32 rows=25891 width=53) (actual time=2.247..2.247 rows=0 loops=1)>                      Recheck Cond: (((load_dttm)::date >= (CURRENT_DATE - 7)) AND ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])))>                      ->  Bitmap Index Scan on l_csm_transactions_actv_code_load_dttm_idx1  
(cost=0.00..1442.85 rows=25891 width=0) (actual time=2.244..2.245 rows=0 loops=1)>                            Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))>  Planning Time: 0.438 ms>  Execution Time: 2.303 ms\"*/", "msg_date": "Wed, 16 Sep 2020 06:59:51 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Single column vs composite partial index" }, { "msg_contents": "Index Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"\n\nThere is no need to cast the load_dttm field to a date in the query. The\nplain index on the field would be usable if you skipped that. In your\nexample, you show creating the single column index but it isn't getting\nused because of the type cast. The second index is both partial, and\nmulti-column. If your data statistics show that ((actv_code)::text = ANY\n('{NAC,CAN,RSP,RCL}'::text[])) only 1% of the time, then it would certainly\nbe helpful to have a partial index if those are the rows you want to find\noften and do so quickly. If the rows with those values for actv_code is\nmore like 75% of the total rows, then there'd be no reason to make it\npartial IMO.\n\nIf you are often/constantly querying for only the last 7-7.999 days of data\nbased on load_dttm, I would put that as the first column of the index since\nthen you would be scanning a contiguous part rather than scanning 3\ndifferent parts of the composite index where actv_code = each of those\nthree values, and then finding the rows that are recent based on the\ntimestamp(tz?) field.\n\nIndex Cond: ((load_dttm)::date >= (CURRENT_DATE - 7))\"There is no need to cast the load_dttm field to a date in the query. The plain index on the field would be usable if you skipped that. In your example, you show creating the single column index but it isn't getting used because of the type cast. The second index is both partial, and multi-column. If your data statistics show that ((actv_code)::text = ANY ('{NAC,CAN,RSP,RCL}'::text[])) only 1% of the time, then it would certainly be helpful to have a partial index if those are the rows you want to find often and do so quickly. If the rows with those values for actv_code is more like 75% of the total rows, then there'd be no reason to make it partial IMO.If you are often/constantly querying for only the last 7-7.999 days of data based on load_dttm, I would put that as the first column of the index since then you would be scanning a contiguous part rather than scanning 3 different parts of the composite index where actv_code = each of those three values, and then finding the rows that are recent based on the timestamp(tz?) field.", "msg_date": "Fri, 18 Sep 2020 14:54:04 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Single column vs composite partial index" } ]
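Two of the suggestions in the index thread above can be sketched in SQL. Michael Lewis points out that the ::date cast on load_dttm is what keeps the plain single-column index from being chosen, and Justin Pryzby mentions a small BRIN index as an option. The sketch assumes load_dttm is a plain timestamp column as in the thread; the BRIN index name is made up for illustration, and which variant wins still depends on the data layout.

-- rewrite the predicate so the raw column is compared to a constant;
-- for a timestamp column this selects the same rows as load_dttm::date >= CURRENT_DATE - 7,
-- and it should let the existing plain btree index on load_dttm be used directly
SELECT ct.ban, ct.subscriber_no, ct.actv_code, ct.actv_rsn_code,
       ct.effective_date, ct.trx_seq_no, ct.load_dttm
FROM sam_t.l_csm_transactions ct
WHERE ct.actv_code IN ('NAC', 'CAN', 'RSP', 'RCL')
  AND ct.load_dttm >= CURRENT_DATE - 7;

-- alternatively, a very small BRIN index can work well when load_dttm correlates
-- with physical row order (e.g. data loaded in time order)
CREATE INDEX l_csm_transactions_load_dttm_brin
    ON sam_t.l_csm_transactions USING brin (load_dttm);

Either way the window-function part of the original query (row_number() ... WHERE rnk = 1) is unaffected; only the scan underneath it changes.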
[ { "msg_contents": "Hi,\n\nI'm seeing a strange behavior when we implement policies (for RLS - Row level security) using functions.\n\ntable test consists of columns testkey,oid,category,type,description...\n\nPolicy\n\ncreate policy policy_sel on test FOR SELECT to ram1 USING ( testkey in (f_sel_policy_test(testkey)) );\n\nGoing to a Sequential scan instead of index scan. Hence, performance issue.\n\npgwfc01q=> explain analyze select * from test;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..25713.12 rows=445 width=712) (actual time=1849.592..1849.592 rows=0 loops=1)\n Filter: ((testkey )::text = (f_sel_policy_test(testkey ))::text)\n Rows Removed by Filter: 88930\n Planning Time: 0.414 ms\n Execution Time: 1849.614 ms\n(5 rows)\n\n\nThe function is\n\nCREATE OR REPLACE FUNCTION vpd_sec_usr.f_sel_policy_test(testkey character varying)\nRETURNS character varying\nLANGUAGE plpgsql\nAS $function$\nDeclare\n v_status character varying;\nBEGIN\n\n if vpd_key = 'COMMON' then\n return '''COMMON''';\n elsif vpd_key = ('COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')) then\n return '''COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')||'''';\n elsif vpd_key = SYS_CONTEXT('ctx_ng_vpd', 'ctx_key_fil') then\n return '''co'','''||SYS_CONTEXT('ctx_ng', 'ctx_testkey_fil')||'''';\n end if;\n return 'false';\n exception when undefined_object then\n return 'failed';\n\nEND;\n$function$\n;\n\n\nIf i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function.\n\ncreate policy policy_sel on test FOR SELECT to ram1 USING ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------\n Bitmap Heap Scan on test (cost=396.66..2966.60 rows=13396 width=712) (actual time=0.693..2.318 rows=13159 loops=1)\n Recheck Cond: ((testkey )::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[]))\n Heap Blocks: exact=373\n -> Bitmap Index Scan on test_pkey (cost=0.00..393.31 rows=13396 width=0) (actual time=0.653..0.653 rows=13159 l\noops=1)\n Index Cond: ((testkey )::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx\n_key_fil'::text))::character varying])::text[]))\n Planning Time: 0.136 ms\n Execution Time: 2.843 ms\n(7 rows)\n\n\nIf i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function. I thought of creating the policy with a lot of business logic in the function. If i have the function then i notice going for full table scan instead of index.\n\nPlease help me if i miss anything in writing a function or how to use functions in the policy.\n\nThank you.\n\n\nRegards,\nRamesh G\n\n\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nI'm seeing a strange behavior when we implement policies (for RLS - Row level security)  using functions. \n\n\n\n\ntable test  consists of columns  testkey,oid,category,type,description... \n\n\n\n\nPolicy\n\n\n\n\ncreate policy  policy_sel on test FOR SELECT to ram1 USING  (  testkey in (f_sel_policy_test(testkey))  );\n\n\n\n\nGoing to a Sequential scan instead of index scan.  
Hence, performance issue.\n\n\n\n\npgwfc01q=> explain analyze select * from test;\n                                                 QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on test  (cost=0.00..25713.12 rows=445 width=712) (actual time=1849.592..1849.592 rows=0 loops=1)\n   Filter: ((testkey )::text = (f_sel_policy_test(testkey ))::text)\n   Rows Removed by Filter: 88930\n Planning Time: 0.414 ms\n Execution Time: 1849.614 ms\n(5 rows)\n\n\n\n\n\nThe function is \n\n\n\n\nCREATE OR REPLACE FUNCTION vpd_sec_usr.f_sel_policy_test(testkey character varying)\nRETURNS character varying\nLANGUAGE plpgsql\nAS $function$\nDeclare\n            v_status character varying;\nBEGIN\n\n\n            if vpd_key = 'COMMON' then\n                        return '''COMMON''';\n            elsif vpd_key = ('COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')) then\n                        return '''COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')||'''';\n            elsif vpd_key = SYS_CONTEXT('ctx_ng_vpd', 'ctx_key_fil') then\n                        return '''co'','''||SYS_CONTEXT('ctx_ng', 'ctx_testkey_fil')||'''';\n\n            end if;\n            return 'false';    \n            exception when undefined_object then\n                        return 'failed';\n            \nEND;\n$function$\n;\n\n\n\n\n\n\n\n\nIf i replace the policy with stright forward without function then it chooses the index.   Not sure how i can implement with the function.\n\n\n\n\ncreate policy  policy_sel on test FOR SELECT to ram1 USING  ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\n\n\n\n\nQUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------\n Bitmap Heap Scan on test  (cost=396.66..2966.60 rows=13396 width=712) (actual time=0.693..2.318 rows=13159 loops=1)\n   Recheck Cond: ((testkey )::text\n = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[]))\n   Heap Blocks: exact=373\n   ->  Bitmap Index Scan on test_pkey  (cost=0.00..393.31 rows=13396 width=0) (actual time=0.653..0.653 rows=13159 l\noops=1)\n         Index Cond: ((testkey )::text\n = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx\n_key_fil'::text))::character varying])::text[]))\n Planning Time: 0.136 ms\n Execution Time: 2.843 ms\n(7 rows)\n\n\n\n\n\nIf i replace the policy with stright forward without function then it chooses the index.   Not sure how i can implement with the function.   I thought of creating the policy with a lot of business logic in the function.  If i have the function then i notice\n going for full table scan instead of index. \n\n\n\n\nPlease help me if i miss anything in writing a function or how to use functions in the policy. 
\n\n\n\nThank you.\n\n\n\n\n\n\nRegards,\n\nRamesh G", "msg_date": "Wed, 16 Sep 2020 03:39:08 +0000", "msg_from": "\"Gopisetty, Ramesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue when we use policies for Row Level Security along\n with functions" }, { "msg_contents": "De: \"Gopisetty, Ramesh\" <[email protected]> \nPara: \"pgsql-performance\" <[email protected]> \nEnviadas: Quarta-feira, 16 de setembro de 2020 0:39:08 \nAssunto: Performance issue when we use policies for Row Level Security along with functions \n\n\n\n\n\nBQ_BEGIN\n\nHi, \n\nI'm seeing a strange behavior when we implement policies (for RLS - Row level security) using functions. \n\ntable test consists of columns testkey,oid,category,type,description... \n\nPolicy \n\ncreate policy policy_sel on test FOR SELECT to ram1 USING ( testkey in (f_sel_policy_test(testkey)) ); \n\nGoing to a Sequential scan instead of index scan. Hence, performance issue. \n\npgwfc01q=> explain analyze select * from test; \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------ \nSeq Scan on test (cost=0.00..25713.12 rows=445 width=712) (actual time=1849.592..1849.592 rows=0 loops=1) \nFilter: (( testkey )::text = (f_sel_policy_test( testkey ))::text) \nRows Removed by Filter: 88930 \nPlanning Time: 0.414 ms \nExecution Time: 1849.614 ms \n(5 rows) \n\n\nThe function is \n\nCREATE OR REPLACE FUNCTION vpd_sec_usr.f_sel_policy_test(testkey character varying) \nRETURNS character varying \nLANGUAGE plpgsql \nAS $function$ \nDeclare \nv_status character varying; \nBEGIN \n\nif vpd_key = 'COMMON' then \nreturn ''' COMMON '''; \nelsif vpd_key = (' COMMON_ ' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')) then \nreturn ''' COMMON_ ' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')||''''; \nelsif vpd_key = SYS_CONTEXT('ctx_ng_vpd', 'ctx_key_fil') then \nreturn '''co'','''||SYS_CONTEXT('ctx_ng', 'ctx_testkey_fil')||''''; \nend if; \nreturn 'false'; \nexception when undefined_object then \nreturn 'failed'; \nEND; \n$function$ \n; \n\n\nIf i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function. \n\ncreate policy policy_sel on test FOR SELECT to ram1 USING ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil'))); \n\nQUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------- \n----------------------------------------------------- \nBitmap Heap Scan on test (cost=396.66..2966.60 rows=13396 width=712) (actual time=0.693..2.318 rows=13159 loops=1) \nRecheck Cond: (( testkey )::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[])) \nHeap Blocks: exact=373 \n-> Bitmap Index Scan on test_pkey (cost=0.00..393.31 rows=13396 width=0) (actual time=0.653..0.653 rows=13159 l \noops=1) \nIndex Cond: (( testkey )::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx \n_key_fil'::text))::character varying])::text[])) \nPlanning Time: 0.136 ms \nExecution Time: 2.843 ms \n(7 rows) \n\n\nIf i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function. I thought of creating the policy with a lot of business logic in the function. If i have the function then i notice going for full table scan instead of index. 
\n\nPlease help me if i miss anything in writing a function or how to use functions in the policy. \n\nThank you. \n\n\nRegards, \nRamesh G \n\n\nBQ_END\n\n\nYou could try seeting the function as immutable. By default it is volatile. \n\n\n\n\nDe: \"Gopisetty, Ramesh\" <[email protected]>Para: \"pgsql-performance\" <[email protected]>Enviadas: Quarta-feira, 16 de setembro de 2020 0:39:08Assunto: Performance issue when we use policies for Row Level Security along with functions\n\n\nHi,\n\n\n\n\nI'm seeing a strange behavior when we implement policies (for RLS - Row level security)  using functions. \n\n\n\n\ntable test  consists of columns  testkey,oid,category,type,description... \n\n\n\n\nPolicy\n\n\n\n\ncreate policy  policy_sel on test FOR SELECT to ram1 USING  (  testkey in (f_sel_policy_test(testkey))  );\n\n\n\n\nGoing to a Sequential scan instead of index scan.  Hence, performance issue.\n\n\n\n\npgwfc01q=> explain analyze select * from test;\n                                                 QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on test  (cost=0.00..25713.12 rows=445 width=712) (actual time=1849.592..1849.592 rows=0 loops=1)\n   Filter: ((testkey )::text = (f_sel_policy_test(testkey ))::text)\n   Rows Removed by Filter: 88930\n Planning Time: 0.414 ms\n Execution Time: 1849.614 ms\n(5 rows)\n\n\n\n\n\nThe function is \n\n\n\n\nCREATE OR REPLACE FUNCTION vpd_sec_usr.f_sel_policy_test(testkey character varying)\nRETURNS character varying\nLANGUAGE plpgsql\nAS $function$\nDeclare\n            v_status character varying;\nBEGIN\n\n\n            if vpd_key = 'COMMON' then\n                        return '''COMMON''';\n            elsif vpd_key = ('COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')) then\n                        return '''COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')||'''';\n            elsif vpd_key = SYS_CONTEXT('ctx_ng_vpd', 'ctx_key_fil') then\n                        return '''co'','''||SYS_CONTEXT('ctx_ng', 'ctx_testkey_fil')||'''';\n\n            end if;\n            return 'false';    \n            exception when undefined_object then\n                        return 'failed';\n            \nEND;\n$function$\n;\n\n\n\n\n\n\n\n\nIf i replace the policy with stright forward without function then it chooses the index.   Not sure how i can implement with the function.\n\n\n\n\ncreate policy  policy_sel on test FOR SELECT to ram1 USING  ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\n\n\n\n\nQUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------\n Bitmap Heap Scan on test  (cost=396.66..2966.60 rows=13396 width=712) (actual time=0.693..2.318 rows=13159 loops=1)\n   Recheck Cond: ((testkey )::text\n = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[]))\n   Heap Blocks: exact=373\n   ->  Bitmap Index Scan on test_pkey  (cost=0.00..393.31 rows=13396 width=0) (actual time=0.653..0.653 rows=13159 l\noops=1)\n         Index Cond: ((testkey )::text\n = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx\n_key_fil'::text))::character varying])::text[]))\n Planning Time: 0.136 ms\n Execution Time: 2.843 ms\n(7 rows)\n\n\n\n\n\nIf i replace the policy with stright forward without function then it chooses the index.   
Not sure how i can implement with the function.   I thought of creating the policy with a lot of business logic in the function.  If i have the function then i notice\n going for full table scan instead of index. \n\n\n\n\nPlease help me if i miss anything in writing a function or how to use functions in the policy. \n\n\n\nThank you.\n\n\n\n\n\n\nRegards,\n\nRamesh G\nYou could try seeting the function as immutable. By default it is volatile.", "msg_date": "Wed, 16 Sep 2020 07:52:47 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance issue when we use policies for Row Level Security\n along with functions" }, { "msg_contents": "\"Gopisetty, Ramesh\" <[email protected]> writes:\n> Policy\n> create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in (f_sel_policy_test(testkey)) );\n> Going to a Sequential scan instead of index scan. Hence, performance issue.\n\n> If i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function.\n> create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\n\" testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')) \"\nis an indexable condition on testkey, because it compares testkey to\na constant (or at least, a value that's fixed for the life of the query).\n\n\" testkey in (f_sel_policy_test(testkey)) \"\nis not an indexable condition on anything, because there are variables\non both sides of the condition. So there's no fixed value that the\nindex can search on.\n\nIf you intend f_sel_policy_test() to be equivalent to the other condition,\nwhy are you passing it an argument it doesn't need?\n\nAs Luis noted, there's also the problem that an indexable condition\ncan't be volatile. I gather that SYS_CONTEXT ends up being a probe\nof some GUC setting, which means that marking the function IMMUTABLE\nwould be a lie, but you ought to be able to mark it STABLE.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Sep 2020 10:17:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue when we use policies for Row Level Security\n along with functions" }, { "msg_contents": "Hi,\n\nThanks for providing the details. But things didn't work out even after changing the functions to STABLE/IMMUTABLE. If i don't use the function it works for RLS. If i use functions it doesn't work.\n\nI tried with both IMMUTABLE and STABLE. Both didn't work. Is there a way to use function in RLS to have the index scan rather than the seq scan. Please help me out if that works or not.\n\nCurrently, we are in the processes of converting oracle to postgres. 
Under oracle we have used functions and there exists a lot of logic in it.\n\nThank you.\n\nFunction\n\ndrop function f_sel_1;\nCREATE OR REPLACE FUNCTION f_sel_1(key character varying)\n RETURNS character varying\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\nDeclare\n v_status boolean;\n key_ctx varchar(4000);\nBEGIN\n\n SELECT INTO key_ctx current_setting('key_header' || '.'||'ctx_key_fil');\n\n if key = key_ctx then\n return key_ctx;\n end if;\n return '';\n exception when undefined_object then\n return '';\n\nEND;\n$function$\n;\n\n\n\ndrop policy policy_sel on test1;\ncreate policy policy_sel on test1 FOR\nSELECT\n to sch USING ( key =\n f_sel_1(key)\n );\n\nexplain analyze select * from test1;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on test1 (cost=0.00..1555.61 rows=25 width=555) (actual time=35.124..35.124 rows=0 loops=1)\n Filter: ((key)::text = (f_sel_1(key))::text)\n Rows Removed by Filter: 4909\n Planning Time: 0.070 ms\n Execution Time: 35.142 ms\n(5 rows)\n\n\n\ndrop policy policy_sel on test1;\ncreate policy policy_sel on test1 FOR\nSELECT\n to sch USING (\n key =\n (\n current_setting('key_header'|| '.' || 'ctx_key_fil')\n )\n );\n\n\nexplain analyze select * from test1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test1 (cost=9.78..270.01 rows=193 width=555) (actual time=0.040..0.069 rows=193 loops=1)\n Recheck Cond: ((key)::text = current_setting('key_header.ctx_key_fil'::text))\n Heap Blocks: exact=13\n -> Bitmap Index Scan on test1_pkey (cost=0.00..9.73 rows=193 width=0) (actual time=0.030..0.030 rows=193 loops=1)\n Index Cond: ((key)::text = current_setting('key_header.ctx_key_fil'::text))\n Planning Time: 0.118 ms\n Execution Time: 0.094 ms\n(7 rows)\n\n\nCREATE TABLE sch.test1 (\n key varchar(50) NOT NULL,\n id varchar(32) NOT NULL,\n begin_date date NOT NULL,\n eff_date_end date NULL,\n code varchar(100) NULL,\n CONSTRAINT test1_pkey PRIMARY KEY (vpd_key, id, begin_date)\n);\n\n\nThank you.\n\nRegards,\nRamesh G\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Wednesday, September 16, 2020 10:17 AM\nTo: Gopisetty, Ramesh <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Performance issue when we use policies for Row Level Security along with functions\n\n\"Gopisetty, Ramesh\" <[email protected]> writes:\n> Policy\n> create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in (f_sel_policy_test(testkey)) );\n> Going to a Sequential scan instead of index scan. Hence, performance issue.\n\n> If i replace the policy with stright forward without function then it chooses the index. Not sure how i can implement with the function.\n> create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\n\" testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')) \"\nis an indexable condition on testkey, because it compares testkey to\na constant (or at least, a value that's fixed for the life of the query).\n\n\" testkey in (f_sel_policy_test(testkey)) \"\nis not an indexable condition on anything, because there are variables\non both sides of the condition. 
So there's no fixed value that the\nindex can search on.\n\nIf you intend f_sel_policy_test() to be equivalent to the other condition,\nwhy are you passing it an argument it doesn't need?\n\nAs Luis noted, there's also the problem that an indexable condition\ncan't be volatile. I gather that SYS_CONTEXT ends up being a probe\nof some GUC setting, which means that marking the function IMMUTABLE\nwould be a lie, but you ought to be able to mark it STABLE.\n\n regards, tom lane\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\nThanks for providing the details.  But things didn't work out even after changing the functions to STABLE/IMMUTABLE.   If i don't use the function it works for RLS.  If i use functions it doesn't work. \n\n\n\n\nI tried with both IMMUTABLE and STABLE.  Both didn't work.    Is there a way to use function in RLS to have the index scan rather than the seq scan.   Please help me out if that works or not.  \n\n\n\n\nCurrently, we are in the processes of converting oracle to postgres.  Under oracle we have used functions and there exists a lot of logic in it. \n\n\n\n\n\nThank you.\n\n\n\n\n\n\n\n\nFunction\n\ndrop function f_sel_1;\nCREATE OR REPLACE FUNCTION f_sel_1(key character varying)\n RETURNS character varying\n LANGUAGE plpgsql\n IMMUTABLE\nAS $function$\nDeclare\n    v_status boolean;\n    key_ctx varchar(4000);\nBEGIN\n\n   SELECT INTO key_ctx current_setting('key_header' || '.'||'ctx_key_fil');\n   \n    if key = key_ctx then\n        return key_ctx;\n    end if;\n    return '';  \n    exception when undefined_object then\n        return '';\n    \nEND;\n$function$\n;\n\n\n\ndrop policy policy_sel on test1;\ncreate policy policy_sel on test1 FOR\nSELECT\n    to sch USING  ( key = \n        f_sel_1(key)\n    );\n\nexplain analyze select * from test1;\n                                              QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Seq Scan on test1  (cost=0.00..1555.61 rows=25 width=555) (actual time=35.124..35.124 rows=0 loops=1)\n   Filter: ((key)::text = (f_sel_1(key))::text)\n   Rows Removed by Filter: 4909\n Planning Time: 0.070 ms\n Execution Time: 35.142 ms\n(5 rows)\n\n\n\ndrop policy policy_sel on test1;\ncreate policy policy_sel on test1 FOR\nSELECT\n    to sch USING  (\n     key = \n        (\n            current_setting('key_header'|| '.' 
|| 'ctx_key_fil')\n        )\n  );\n\n\nexplain analyze select * from test1;\n                                                      QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test1  (cost=9.78..270.01 rows=193 width=555) (actual time=0.040..0.069 rows=193 loops=1)\n   Recheck Cond: ((key)::text = current_setting('key_header.ctx_key_fil'::text))\n   Heap Blocks: exact=13\n   ->  Bitmap Index Scan on test1_pkey  (cost=0.00..9.73 rows=193 width=0) (actual time=0.030..0.030 rows=193 loops=1)\n         Index Cond: ((key)::text = current_setting('key_header.ctx_key_fil'::text))\n Planning Time: 0.118 ms\n Execution Time: 0.094 ms\n(7 rows)\n\n\nCREATE TABLE sch.test1 (\n    key varchar(50) NOT NULL,\n    id varchar(32) NOT NULL,\n    begin_date date NOT NULL,\n    eff_date_end date NULL,\n    code varchar(100) NULL,\n    CONSTRAINT test1_pkey PRIMARY KEY (vpd_key, id, begin_date)\n);\n\n\n\n\n\n\n\nThank you.\n\n\n\n\nRegards,\n\nRamesh G\n\n\nFrom: Tom Lane <[email protected]>\nSent: Wednesday, September 16, 2020 10:17 AM\nTo: Gopisetty, Ramesh <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Performance issue when we use policies for Row Level Security along with functions\n \n\n\n\"Gopisetty, Ramesh\" <[email protected]> writes:\n> Policy\n> create policy  policy_sel on test FOR SELECT to ram1 USING  (  testkey in (f_sel_policy_test(testkey))  );\n> Going to a Sequential scan instead of index scan.  Hence, performance issue.\n\n> If i replace the policy with stright forward without function then it chooses the index.   Not sure how i can implement with the function.\n> create policy  policy_sel on test FOR SELECT to ram1 USING  ( testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')));\n\n\" testkey in ('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil')) \"\nis an indexable condition on testkey, because it compares testkey to\na constant (or at least, a value that's fixed for the life of the query).\n\n\" testkey in (f_sel_policy_test(testkey)) \"\nis not an indexable condition on anything, because there are variables\non both sides of the condition.  So there's no fixed value that the\nindex can search on.\n\nIf you intend f_sel_policy_test() to be equivalent to the other condition,\nwhy are you passing it an argument it doesn't need?\n\nAs Luis noted, there's also the problem that an indexable condition\ncan't be volatile.  I gather that SYS_CONTEXT ends up being a probe\nof some GUC setting, which means that marking the function IMMUTABLE\nwould be a lie, but you ought to be able to mark it STABLE.\n\n                        regards, tom lane", "msg_date": "Mon, 12 Oct 2020 06:46:56 +0000", "msg_from": "\"Gopisetty, Ramesh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue when we use policies for Row Level Security\n along with functions" }, { "msg_contents": "On Sunday, October 11, 2020, Gopisetty, Ramesh <[email protected]>\nwrote:\n\n>\n> to sch USING ( key =\n> f_sel_1(key)\n> );\n>\n\nAs Tom said it doesn’t matter what you classify the function as (stable,\netc) if your function call accepts a column reference as an input and\ncompares its output to another column reference. 
With a column reference\nyou need a row to find a value and if you already have a row the index\nserves no purpose.\n\nDavid J.\n\nOn Sunday, October 11, 2020, Gopisetty, Ramesh <[email protected]> wrote:\n\n\n    to sch USING  ( key = \n        f_sel_1(key)\n    );As Tom said it doesn’t matter what you classify the function as (stable, etc) if your function call accepts a column reference as an input and compares its output to another column reference.  With a column reference you need a row to find a value and if you already have a row the index serves no purpose.David J.", "msg_date": "Mon, 12 Oct 2020 00:26:21 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Performance issue when we use policies for Row Level Security along\n with functions" } ]
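
The upshot of the thread above is that the policy's USING clause has to compare the key column to a value that is fixed for the duration of the query; passing the column into the function is what defeats the index. A minimal sketch of the indexable variant, assuming the thread's test1 table, the sch role, and the key_header.ctx_key_fil setting (the zero-argument wrapper f_key_ctx is invented here, not taken from the thread):

-- Hypothetical zero-argument wrapper: the column no longer appears on
-- the right-hand side, so the comparison value is fixed per query.
CREATE OR REPLACE FUNCTION f_key_ctx() RETURNS text
LANGUAGE sql STABLE
AS $$ SELECT current_setting('key_header.ctx_key_fil', true) $$;

DROP POLICY IF EXISTS policy_sel ON test1;
CREATE POLICY policy_sel ON test1 FOR SELECT TO sch
    USING (key = f_key_ctx());

-- Because f_key_ctx() takes no column argument and is STABLE, the
-- planner can evaluate it once per scan, so the condition on key stays
-- indexable, much like the bitmap index scan shown earlier in the thread.
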
[ { "msg_contents": "Hi,\nWe have Amazon RDS Postgres. Currently we are using .pgpass file and\nrunning psql from different EC2 instances to connect to DB. But the\npassword in this file is not encrypted. What are our options to encrypt the\npassword? Or do passwordless connection from EC2 to database? Lambda\nfunctions have limitations of running only for 15 minutes.\n\nHow can we setup different authentication methods for AWS RDS Postgres as\nwe don't have access pg_hba.conf?\n\nRegards,\nAditya.\n\nHi,We have Amazon RDS Postgres. Currently we are using .pgpass file and running psql from different EC2 instances to connect to DB. But the password in this file is not encrypted. What are our options to encrypt the password? Or do passwordless connection from EC2 to database? Lambda functions have limitations of running only for 15 minutes.How can we setup different authentication methods for AWS RDS Postgres as we don't have access pg_hba.conf?Regards,Aditya.", "msg_date": "Fri, 25 Sep 2020 15:04:56 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "How to encrypt database password in pgpass or unix file to run batch\n jobs through shell script" }, { "msg_contents": "On Fri, Sep 25, 2020 at 03:04:56PM +0530, aditya desai wrote:\n> Hi,\n> We have Amazon RDS Postgres. Currently we are using .pgpass file and running\n> psql from different EC2 instances to connect to DB. But the password in this\n> file is not encrypted. What are our options to encrypt the password? Or do\n> passwordless connection from EC2 to database? Lambda functions have limitations\n> of running only for 15 minutes.\n> \n> How can we setup different authentication methods for AWS RDS Postgres as we\n> don't have access pg_hba.conf?\n\nThere is no encryption facility, though you can used the hashed value\nrather than the literal password. To encrypt, you would need to decrypt\nit and then pass it to libpq, but there is no _pipe_ facility to do\nthat.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 25 Sep 2020 15:51:47 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to encrypt database password in pgpass or unix file to run\n batch jobs through shell script" } ]
[ { "msg_contents": "Is there some way to tell the planner that unless it's guaranteed by a\nconstraint or some such it shouldn't guess that the selectivity of a\nfilter/anti-join is 1 row (e.g. minimum to consider is 2 rows unless it's\nguaranteed to be 1 row) or somehow otherwise make it more conservative\naround the worst case possibilities. I feel like this would cover something\nlike 1/3 of the more problematic planner performance issues I run into. The\nkind where a query suddenly runs 60,000 times slower than it did\npreviously. I can live with some queries being slightly slower if I can\navoid the case where they will all of sudden never complete.\n\nMy current motivating example is this (... abridged) query:\n\npostgresql 11.7 on ubuntu linux\n\n-> **Nested Loop Left Join** (cost=3484616.45..5873755.65 rows=1\nwidth=295)\n Join Filter: (hsc.c_field = c.id)\n ...\n -> *Nested Loop Left Join (cost=1072849.19..3286800.99 rows=1\nwidth=190)*\n -> Hash Anti Join (cost=1072848.62..3286798.53\n***rows=1***[actually 65k] width=189)\n Hash Cond: (c.id = trc.field)\n -> Seq Scan on c (cost=0.00..1096064.73\nrows=14328573 width=189)\n -> Hash (cost=830118.31..830118.31 rows=14794985\nwidth=4)\n -> Seq Scan on trc (cost=0.00..830118.31\nrows=14794985 width=4)\n -> Index Scan using con_pkey on con (cost=0.56..2.46\nrows=1 width=9)\n Index Cond: (c.con_field = id)\n ...\n -> Unique (cost=2411766.83..2479065.82 rows=4794957 width=29)\n -> Sort (cost=2411766.83..2445416.33 rows=13459797 width=29)\n Sort Key: hsc.c_field, xxx\n -> Hash Join (cost=11143.57..599455.83 rows=13459797\nwidth=29)\n ...\n\n*** is where the planner is off in it's row estimation\nc.id is unique for that table, statistics set to 10k and freshly analyzed\ntrc.field is unique for that table, statistics set to 10k and freshly\nanalyzed\nrow estimates for those tables are pretty close to correct (within a couple\nof %)\nthere is no foreign key constraint between those two tables\nc.id and trc.field are both integers with pretty similar distributions over\n1...22 million\n\n** is where it picks a disastrous join plan based on that misstaken-row\nestimate\nthis has to be a close call with doing a merge_join as the other side is\nalready sorted\n\n* this join is ok, since even if it isn't the fastest join here with the\ncorrect row count, given the index it's not much worse\n\n\nI can work around this by breaking up the query (e.g. creating a temporary\ntable of the selected ids, analyzing it then using it in the rest of the\nquery) or by temporarily disabling nestedloop joins (which makes other\nparts of the query slower, but not dramatically so), but is there some\nother reasonable proactive way to avoid it? It was running fine for a year\nbefore blowing up (trigger is I suspect the trc table getting enough larger\nthan the c table, originally it was smaller) and I hit similarish kinds of\nissues every so often.\n\nTim\n\nIs there some way to tell the planner that unless it's guaranteed by a constraint or some such it shouldn't guess that the selectivity of a filter/anti-join is 1 row (e.g. minimum to consider is 2 rows unless it's guaranteed to be 1 row) or somehow otherwise make it more conservative around the worst case possibilities. I feel like this would cover something like 1/3 of the more problematic planner performance issues I run into. The kind where a query suddenly runs 60,000 times slower than it did previously. 
I can live with some queries being slightly slower if I can avoid the case where they will all of sudden never complete.My current motivating example is this (... abridged) query:postgresql 11.7 on ubuntu linux->  **Nested Loop Left Join**  (cost=3484616.45..5873755.65 rows=1 width=295)      Join Filter: (hsc.c_field = c.id)      ...            ->  *Nested Loop Left Join  (cost=1072849.19..3286800.99 rows=1 width=190)*                  ->  Hash Anti Join  (cost=1072848.62..3286798.53 ***rows=1***[actually 65k] width=189)                        Hash Cond: (c.id = trc.field)                        ->  Seq Scan on c  (cost=0.00..1096064.73 rows=14328573 width=189)                        ->  Hash  (cost=830118.31..830118.31 rows=14794985 width=4)                              ->  Seq Scan on trc  (cost=0.00..830118.31 rows=14794985 width=4)                  ->  Index Scan using con_pkey on con  (cost=0.56..2.46 rows=1 width=9)                        Index Cond: (c.con_field = id)            ...      ->  Unique  (cost=2411766.83..2479065.82 rows=4794957 width=29)            ->  Sort  (cost=2411766.83..2445416.33 rows=13459797 width=29)                  Sort Key: hsc.c_field, xxx                  ->  Hash Join  (cost=11143.57..599455.83 rows=13459797 width=29)                  ...*** is where the planner is off in it's row estimationc.id is unique for that table, statistics set to 10k and freshly analyzedtrc.field is unique for that table, statistics set to 10k and freshly analyzedrow estimates for those tables are pretty close to correct (within a couple of %)there is no foreign key constraint between those two tablesc.id and trc.field are both integers with pretty similar distributions over 1...22 million** is where it picks a disastrous join plan based on that misstaken-row estimatethis has to be a close call with doing a merge_join as the other side is already sorted* this join is ok, since even if it isn't the fastest join here with the correct row count, given the index it's not much worseI can work around this by breaking up the query (e.g. creating a temporary table of the selected ids, analyzing it then using it in the rest of the query) or by temporarily disabling nestedloop joins (which makes other parts of the query slower, but not dramatically so), but is there some other reasonable proactive way to avoid it?  It was running fine for a year before blowing up (trigger is I suspect the trc table getting enough larger than the c table, originally it was smaller) and I hit similarish kinds of issues every so often.Tim", "msg_date": "Mon, 28 Sep 2020 16:12:37 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Is it possible to specify minimum number of rows planner should\n consider?" }, { "msg_contents": "Timothy Garnett <[email protected]> writes:\n> Is there some way to tell the planner that unless it's guaranteed by a\n> constraint or some such it shouldn't guess that the selectivity of a\n> filter/anti-join is 1 row (e.g. 
minimum to consider is 2 rows unless it's\n> guaranteed to be 1 row) or somehow otherwise make it more conservative\n> around the worst case possibilities.\n\nThere's been some discussion in that area, but it's a hard problem\nto solve in general, and especially so if you'd like to not break\na ton of queries that work nicely today.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Sep 2020 17:06:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to specify minimum number of rows planner should\n consider?" }, { "msg_contents": "Here is a commit that accomplishes this with a configuration parameter.\n\nhttps://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1a\n\nOn Mon, Sep 28, 2020 at 2:07 PM Tom Lane <[email protected]> wrote:\n\n> Timothy Garnett <[email protected]> writes:\n> > Is there some way to tell the planner that unless it's guaranteed by a\n> > constraint or some such it shouldn't guess that the selectivity of a\n> > filter/anti-join is 1 row (e.g. minimum to consider is 2 rows unless it's\n> > guaranteed to be 1 row) or somehow otherwise make it more conservative\n> > around the worst case possibilities.\n>\n> There's been some discussion in that area, but it's a hard problem\n> to solve in general, and especially so if you'd like to not break\n> a ton of queries that work nicely today.\n>\n> regards, tom lane\n>\n>\n>\n\nHere is a commit that accomplishes this with a configuration parameter.https://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1aOn Mon, Sep 28, 2020 at 2:07 PM Tom Lane <[email protected]> wrote:Timothy Garnett <[email protected]> writes:\n> Is there some way to tell the planner that unless it's guaranteed by a\n> constraint or some such it shouldn't guess that the selectivity of a\n> filter/anti-join is 1 row (e.g. minimum to consider is 2 rows unless it's\n> guaranteed to be 1 row) or somehow otherwise make it more conservative\n> around the worst case possibilities.\n\nThere's been some discussion in that area, but it's a hard problem\nto solve in general, and especially so if you'd like to not break\na ton of queries that work nicely today.\n\n                        regards, tom lane", "msg_date": "Mon, 28 Sep 2020 14:45:38 -0700", "msg_from": "Matthew Bellew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it possible to specify minimum number of rows planner should\n consider?" }, { "msg_contents": "That's a really straightforward patch that looks pretty safe, I may play\naround with that a bit.\n\nThanks,\nTim\n\nOn Mon, Sep 28, 2020 at 5:45 PM Matthew Bellew <[email protected]> wrote:\n\n> Here is a commit that accomplishes this with a configuration parameter.\n>\n>\n> https://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1a\n>\n> On Mon, Sep 28, 2020 at 2:07 PM Tom Lane <[email protected]> wrote:\n>\n>> Timothy Garnett <[email protected]> writes:\n>> > Is there some way to tell the planner that unless it's guaranteed by a\n>> > constraint or some such it shouldn't guess that the selectivity of a\n>> > filter/anti-join is 1 row (e.g. 
minimum to consider is 2 rows unless\n>> it's\n>> > guaranteed to be 1 row) or somehow otherwise make it more conservative\n>> > around the worst case possibilities.\n>>\n>> There's been some discussion in that area, but it's a hard problem\n>> to solve in general, and especially so if you'd like to not break\n>> a ton of queries that work nicely today.\n>>\n>> regards, tom lane\n>>\n>>\n>>\n\nThat's a really straightforward patch that looks pretty safe, I may play around with that a bit.Thanks,TimOn Mon, Sep 28, 2020 at 5:45 PM Matthew Bellew <[email protected]> wrote:Here is a commit that accomplishes this with a configuration parameter.https://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1aOn Mon, Sep 28, 2020 at 2:07 PM Tom Lane <[email protected]> wrote:Timothy Garnett <[email protected]> writes:\n> Is there some way to tell the planner that unless it's guaranteed by a\n> constraint or some such it shouldn't guess that the selectivity of a\n> filter/anti-join is 1 row (e.g. minimum to consider is 2 rows unless it's\n> guaranteed to be 1 row) or somehow otherwise make it more conservative\n> around the worst case possibilities.\n\nThere's been some discussion in that area, but it's a hard problem\nto solve in general, and especially so if you'd like to not break\na ton of queries that work nicely today.\n\n                        regards, tom lane", "msg_date": "Mon, 28 Sep 2020 20:22:22 -0400", "msg_from": "Timothy Garnett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it possible to specify minimum number of rows planner should\n consider?" } ]
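
One concrete form of the workaround the original poster mentions (materialise the anti-join result, analyse it, then run the rest of the query against it) might look like the sketch below; table and column names are taken from the abridged plan (c, trc, id, field) and the temp-table name is made up:

-- Materialise the ids kept by the anti-join so the planner sees a real
-- row count (~65k here) instead of the 1-row guess.
CREATE TEMPORARY TABLE c_not_in_trc AS
    SELECT c.id
    FROM c
    WHERE NOT EXISTS (SELECT 1 FROM trc WHERE trc.field = c.id);

ANALYZE c_not_in_trc;

-- Then use c_not_in_trc in place of the anti-join in the original query,
-- so the later joins are planned with a realistic estimate rather than
-- the rows=1 guess that triggered the nested-loop plan.
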
[ { "msg_contents": "Hi,\nWe have AWS RDS and we are trying to connect to DB remotely from EC2\ninstance.as client connection using psql. We are trying to set up IAM\nroles. We did all the necessary settings but got below error. Could you\nplease advise?\n\nPassword for user lmp_cloud_dev:\n\npsql: FATAL: PAM authentication failed for user \"testuser\"\n\nFATAL: pg_hba.conf rejects connection for host \"192.168.1.xxx\", user\n\"testuser\", database \"testdb\", SSL off\n\n\nRegards,\n\nAditya.\n\nHi,We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. Could you please advise?Password for user lmp_cloud_dev:\npsql: FATAL:  PAM authentication failed for user\n\"testuser\"\nFATAL:  pg_hba.conf rejects connection for host\n\"192.168.1.xxx\", user \"testuser\", database\n\"testdb\", SSL offRegards,Aditya.", "msg_date": "Wed, 30 Sep 2020 12:49:43 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "SSL connection getting rejected on AWS RDS" }, { "msg_contents": "> On 30 Sep 2020, at 5:19 pm, aditya desai <[email protected]> wrote:\n> \n> Hi,\n> We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as <http://instance.as/> client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. Could you please advise?\n> \n> Password for user lmp_cloud_dev:\n> psql: FATAL: PAM authentication failed for user \"testuser\"\n> FATAL: pg_hba.conf rejects connection for host \"192.168.1.xxx\", user \"testuser\", database \"testdb\", SSL off\n> \n> Regards,\n> Aditya.\n> \n\nHi Aditya,\n\nSee the below example of me connecting to RDS from an EC2 instance:\n\nYou need to change the $RDSHOST value\nyou need to replace my “app_user” to your “testuser” and database “postgres” to your “testdb”\n\n[ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-southeast-2.rds.amazonaws.com\"\n\n[ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds generate-db-auth-token \\\n--hostname $RDSHOST \\\n--port 5432 \\\n--username app_user)”\n\n[ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432 sslmode=require dbname=postgres user= app_user\"\n\npsql (11.5, server 12.3)\nWARNING: psql major version 11, server major version 12.\nSome psql features might not work.\nSSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)\nType \"help\" for help.\npostgres=>\n\nThanks,\nHannah\nOn 30 Sep 2020, at 5:19 pm, aditya desai <[email protected]> wrote:Hi,We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. 
Could you please advise?Password for user lmp_cloud_dev:psql: FATAL:  PAM authentication failed for user\n\"testuser\"FATAL:  pg_hba.conf rejects connection for host\n\"192.168.1.xxx\", user \"testuser\", database\n\"testdb\", SSL offRegards,Aditya.\nHi Aditya,See the below example of me connecting to RDS from an EC2 instance:You need to change the $RDSHOST valueyou need to replace my “app_user” to your “testuser” and database “postgres” to your “testdb”[ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-southeast-2.rds.amazonaws.com\"[ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds generate-db-auth-token \\--hostname $RDSHOST \\--port 5432 \\--username app_user)”[ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432 sslmode=require dbname=postgres user= app_user\"psql (11.5, server 12.3)WARNING: psql major version 11, server major version 12.Some psql features might not work.SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)Type \"help\" for help.postgres=>Thanks,Hannah", "msg_date": "Wed, 30 Sep 2020 21:17:37 +1000", "msg_from": "Hannah Huang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL connection getting rejected on AWS RDS" }, { "msg_contents": "Hi Hannah,\nThank you very much!! this is really helpful. Do we need to pass\n'sslrootcert\" as mentioned in the doc below? I see that you have not used\nit in your command.\n\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.html\n\nAlso do we have to grant the role below to the user?\n\ngrant rds_iam to app_user;\n\n\nIf you have any document/Steps to set this up from scratch,could you please\nforward? That would be really helpful.\n\nRegards,\nAditya.\n\n\nOn Wed, Sep 30, 2020 at 4:47 PM Hannah Huang <[email protected]>\nwrote:\n\n>\n>\n> On 30 Sep 2020, at 5:19 pm, aditya desai <[email protected]> wrote:\n>\n> Hi,\n> We have AWS RDS and we are trying to connect to DB remotely from EC2\n> instance.as client connection using psql. We are trying to set up IAM\n> roles. We did all the necessary settings but got below error. Could you\n> please advise?\n>\n> Password for user lmp_cloud_dev:\n>\n> psql: FATAL: PAM authentication failed for user \"testuser\"\n>\n> FATAL: pg_hba.conf rejects connection for host \"192.168.1.xxx\", user\n> \"testuser\", database \"testdb\", SSL off\n>\n>\n> Regards,\n>\n> Aditya.\n>\n>\n> Hi Aditya,\n>\n> See the below example of me connecting to RDS from an EC2 instance:\n>\n> You need to change the $RDSHOST value\n> you need to replace my “app_user” to your “testuser” and database\n> “postgres” to your “testdb”\n>\n> [ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-\n> southeast-2.rds.amazonaws.com\"\n>\n> [ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds\n> generate-db-auth-token \\\n> --hostname $RDSHOST \\\n> --port 5432 \\\n> --username app_user)”\n>\n> [ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432\n> sslmode=require dbname=postgres user= app_user\"\n>\n> psql (11.5, server 12.3)\n> WARNING: psql major version 11, server major version 12.\n> Some psql features might not work.\n> SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384,\n> bits: 256, compression: off)\n> Type \"help\" for help.\n> postgres=>\n>\n> Thanks,\n> Hannah\n>\n\nHi Hannah,Thank you very much!! this is really helpful. Do we need to pass 'sslrootcert\" as mentioned in the doc below? 
I see that you have not used it in  your command. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.htmlAlso do we have to grant the role below to the user?grant rds_iam to app_user;If you have any document/Steps to set this up from scratch,could you please forward? That would be really helpful.Regards,Aditya.On Wed, Sep 30, 2020 at 4:47 PM Hannah Huang <[email protected]> wrote:On 30 Sep 2020, at 5:19 pm, aditya desai <[email protected]> wrote:Hi,We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. Could you please advise?Password for user lmp_cloud_dev:psql: FATAL:  PAM authentication failed for user\n\"testuser\"FATAL:  pg_hba.conf rejects connection for host\n\"192.168.1.xxx\", user \"testuser\", database\n\"testdb\", SSL offRegards,Aditya.\nHi Aditya,See the below example of me connecting to RDS from an EC2 instance:You need to change the $RDSHOST valueyou need to replace my “app_user” to your “testuser” and database “postgres” to your “testdb”[ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-southeast-2.rds.amazonaws.com\"[ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds generate-db-auth-token \\--hostname $RDSHOST \\--port 5432 \\--username app_user)”[ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432 sslmode=require dbname=postgres user= app_user\"psql (11.5, server 12.3)WARNING: psql major version 11, server major version 12.Some psql features might not work.SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)Type \"help\" for help.postgres=>Thanks,Hannah", "msg_date": "Wed, 30 Sep 2020 21:20:03 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSL connection getting rejected on AWS RDS" }, { "msg_contents": "Hi Aditya,\n\nYes, you need to grant the role to the user inside PostgreSQL database.\n\nPlease checkout this article: https://suyahuang.wordpress.com/2020/10/01/hands-on-lab-access-rds-postgresql-from-ec2-instance-without-password-how-to-configure-iam-db-authentication/\n\nLet me know if you have any problem following through.\n\nThanks,\nHannah\n\n> On 1 Oct 2020, at 1:50 am, aditya desai <[email protected]> wrote:\n> \n> Hi Hannah,\n> Thank you very much!! this is really helpful. Do we need to pass 'sslrootcert\" as mentioned in the doc below? I see that you have not used it in your command. \n> \n> https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.html <https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.html>\n> \n> Also do we have to grant the role below to the user?\n> \n> grant rds_iam to app_user;\n> \n> \n> If you have any document/Steps to set this up from scratch,could you please forward? That would be really helpful.\n> \n> Regards,\n> Aditya.\n> \n> \n> On Wed, Sep 30, 2020 at 4:47 PM Hannah Huang <[email protected] <mailto:[email protected]>> wrote:\n> \n> \n>> On 30 Sep 2020, at 5:19 pm, aditya desai <[email protected] <mailto:[email protected]>> wrote:\n>> \n>> Hi,\n>> We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as <http://instance.as/> client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. 
Could you please advise?\n>> \n>> Password for user lmp_cloud_dev:\n>> psql: FATAL: PAM authentication failed for user \"testuser\"\n>> FATAL: pg_hba.conf rejects connection for host \"192.168.1.xxx\", user \"testuser\", database \"testdb\", SSL off\n>> \n>> Regards,\n>> Aditya.\n>> \n> \n> Hi Aditya,\n> \n> See the below example of me connecting to RDS from an EC2 instance:\n> \n> You need to change the $RDSHOST value\n> you need to replace my “app_user” to your “testuser” and database “postgres” to your “testdb”\n> \n> [ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-southeast-2.rds.amazonaws.com <http://southeast-2.rds.amazonaws.com/>\"\n> \n> [ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds generate-db-auth-token \\\n> --hostname $RDSHOST \\\n> --port 5432 \\\n> --username app_user)”\n> \n> [ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432 sslmode=require dbname=postgres user= app_user\"\n> \n> psql (11.5, server 12.3)\n> WARNING: psql major version 11, server major version 12.\n> Some psql features might not work.\n> SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)\n> Type \"help\" for help.\n> postgres=>\n> \n> Thanks,\n> Hannah\n\n\nHi Aditya,Yes, you need to grant the role to the user inside PostgreSQL database.Please checkout this article: https://suyahuang.wordpress.com/2020/10/01/hands-on-lab-access-rds-postgresql-from-ec2-instance-without-password-how-to-configure-iam-db-authentication/Let me know if you have any problem following through.Thanks,HannahOn 1 Oct 2020, at 1:50 am, aditya desai <[email protected]> wrote:Hi Hannah,Thank you very much!! this is really helpful. Do we need to pass 'sslrootcert\" as mentioned in the doc below? I see that you have not used it in  your command. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.AWSCLI.PostgreSQL.htmlAlso do we have to grant the role below to the user?grant rds_iam to app_user;If you have any document/Steps to set this up from scratch,could you please forward? That would be really helpful.Regards,Aditya.On Wed, Sep 30, 2020 at 4:47 PM Hannah Huang <[email protected]> wrote:On 30 Sep 2020, at 5:19 pm, aditya desai <[email protected]> wrote:Hi,We have AWS RDS and we are trying to connect to DB remotely from EC2 instance.as client connection using psql. We are trying to set up IAM roles. We did all the necessary settings but got below error. 
Could you please advise?Password for user lmp_cloud_dev:psql: FATAL:  PAM authentication failed for user\n\"testuser\"FATAL:  pg_hba.conf rejects connection for host\n\"192.168.1.xxx\", user \"testuser\", database\n\"testdb\", SSL offRegards,Aditya.\nHi Aditya,See the below example of me connecting to RDS from an EC2 instance:You need to change the $RDSHOST valueyou need to replace my “app_user” to your “testuser” and database “postgres” to your “testdb”[ec2-user@ip-172-31-13-121 ~]$ export RDSHOST=\"mypg.cfvvs1nh3f7i.ap-southeast-2.rds.amazonaws.com\"[ec2-user@ip-172-31-13-121 ~]$ export PGPASSWORD=\"$(aws rds generate-db-auth-token \\--hostname $RDSHOST \\--port 5432 \\--username app_user)”[ec2-user@ip-172-31-13-121 ~]$ psql \"host=$RDSHOST port=5432 sslmode=require dbname=postgres user= app_user\"psql (11.5, server 12.3)WARNING: psql major version 11, server major version 12.Some psql features might not work.SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)Type \"help\" for help.postgres=>Thanks,Hannah", "msg_date": "Thu, 1 Oct 2020 14:51:24 +1000", "msg_from": "Hannah Huang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSL connection getting rejected on AWS RDS" } ]
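
For reference, the in-database half of the IAM setup the thread touches on ("grant rds_iam to app_user") is plain SQL; the user and database names below follow the thread's examples and would need adjusting:

-- Run as the RDS master (rds_superuser) account.
CREATE USER app_user WITH LOGIN;
GRANT rds_iam TO app_user;
GRANT CONNECT ON DATABASE postgres TO app_user;

-- The client then uses the output of "aws rds generate-db-auth-token"
-- as the password and connects with sslmode=require, or verify-full
-- with the RDS CA bundle supplied via sslrootcert, as shown above.
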
[ { "msg_contents": "Hi Team,\n\nCan someone please guide me how to improve/reduce these wait events.\n\nPostgres Version:9.5\n\nLOG: process 3718 still waiting for ExclusiveLock on extension of relation\n266775 of database 196511 after 1000.057 ms\n\n*Detail:* Process holding the lock: 6423. Wait queue: 3718, 4600, 2670,\n4046.\n*Context:* SQL statement \"INSERT INTO\ncms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item\n(display_name,ancestor_ids,content_size,case_node_id,case_model_id,case_instance_id,properties,mime_type,linked_ancestor_ids,linked_parent_folder_ids,payload_id,category,owner_id,version_no,latest,lock_time,lock_owner_id,version_label,chronicle_id,acl_id,trait_names,tags,parent_folder_id,updated_by,created_by,update_time,create_time,description,type,name,etag,id)\nVALUES\n(new.display_name,new.ancestor_ids,new.content_size,new.case_node_id,new.case_model_id,new.case_instance_id,json,new.mime_type,new.linked_ancestor_ids,new.linked_parent_folder_ids,new.payload_id,new.category,new.owner_id,new.version_no,new.latest,new.lock_time,new.lock_owner_id,new.version_label,new.chronicle_id,new.acl_id,new.trait_names,new.tags,new.parent_folder_id,new.updated_by,new.created_by,new.update_time,new.create_time,new.description,new.type,\nnew.name,new.etag,new.id)\"\n\nThanks & Regards,\nAvinash.\n\nHi Team,Can someone please guide me how to improve/reduce these wait events.Postgres Version:9.5LOG: process 3718 still waiting for ExclusiveLock on extension of relation 266775 of database 196511 after 1000.057 msDetail: Process holding the lock: 6423. Wait queue: 3718, 4600, 2670, 4046.Context: SQL statement \"INSERT INTO cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item (display_name,ancestor_ids,content_size,case_node_id,case_model_id,case_instance_id,properties,mime_type,linked_ancestor_ids,linked_parent_folder_ids,payload_id,category,owner_id,version_no,latest,lock_time,lock_owner_id,version_label,chronicle_id,acl_id,trait_names,tags,parent_folder_id,updated_by,created_by,update_time,create_time,description,type,name,etag,id) VALUES (new.display_name,new.ancestor_ids,new.content_size,new.case_node_id,new.case_model_id,new.case_instance_id,json,new.mime_type,new.linked_ancestor_ids,new.linked_parent_folder_ids,new.payload_id,new.category,new.owner_id,new.version_no,new.latest,new.lock_time,new.lock_owner_id,new.version_label,new.chronicle_id,new.acl_id,new.trait_names,new.tags,new.parent_folder_id,new.updated_by,new.created_by,new.update_time,new.create_time,new.description,new.type,new.name,new.etag,new.id)\" Thanks & Regards,Avinash.", "msg_date": "Mon, 5 Oct 2020 10:32:55 +0530", "msg_from": "avinash varma <[email protected]>", "msg_from_op": true, "msg_subject": "Too many waits on extension of relation" }, { "msg_contents": "What is relation 266775 of database 196511? Is\nit cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some system catalog\ntable?\n\nWhen I search google for \"ExclusiveLock on extension of relation\" I find\none thread about shared_buffers being very high but not big enough to fit\nthe entire data in the cluster. How much ram, what is shared buffers and\nwhat is the total size of the database(s) on that Postgres instance?\n\n>\n\nWhat is relation 266775 of database 196511? Is it cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some system catalog table?When I search google for \"ExclusiveLock on extension of relation\" I find one thread about shared_buffers being very high but not big enough to fit the entire data in the cluster. 
How much ram, what is shared buffers and what is the total size of the database(s) on that Postgres instance?", "msg_date": "Mon, 5 Oct 2020 10:44:10 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many waits on extension of relation" }, { "msg_contents": "Hi Michael,\n\nThanks for the response.\n\nYes, CMS_ITEM is the relname for id 266775 .\n\nOverall DB's Size is 17GB and the size of shared_buffers is 1GB whereas\nthe RAM size is around 32G.\n\n\n[image: image.png]\n\nThanks,\nAvinash", "msg_date": "Mon, 5 Oct 2020 22:36:12 +0530", "msg_from": "avinash varma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too many waits on extension of relation" }, { "msg_contents": "We are also getting similar warning messages in the log file, for Insert\noperation as it is blocking concurrent inserts on the same table. As per\nthe online documents, I have come across, suggest is because the Postgres\nprocess takes time to search for the relevant buffer in the shared_buffer\narea if shared_buffer is too big.\n\nIn the highly transactional system, there may not be enough free buffers to\nallocate for incoming transactions. In our case allocated shared buffer is\n24GB and has RAM 120GB, not sure whether we can call it too big but while\nquerying pg_buffercache has always given indication that 12-13GB\nshared_buffers would be appropriate in our case. I have used the below URL\nto evaluate the shared buffer sizing.\n\nhttps://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/\n\n\n\nBest Regards,\n\n*Sushant Pawar *\n\n\nOn Mon, Oct 5, 2020 at 10:14 PM Michael Lewis <[email protected]> wrote:\n\n> What is relation 266775 of database 196511? Is\n> it cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some system catalog\n> table?\n>\n> When I search google for \"ExclusiveLock on extension of relation\" I find\n> one thread about shared_buffers being very high but not big enough to fit\n> the entire data in the cluster. How much ram, what is shared buffers and\n> what is the total size of the database(s) on that Postgres instance?\n>\n>>\n\nWe are also getting similar warning messages \n\nin the log file, for Insert operation as it is blocking concurrent inserts on the same table. As per the online documents, I have come across, suggest is because the Postgres process takes time to search for the relevant buffer in the shared_buffer area if shared_buffer is too big.In the highly transactional system, there may not be enough free buffers to allocate for incoming transactions.  In our case allocated shared buffer is 24GB and has RAM 120GB, not sure whether we can call it too big but while querying pg_buffercache  has always given indication that 12-13GB shared_buffers would be appropriate in our case. I have used the below URL to evaluate the shared buffer sizing.https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/Best Regards,Sushant Pawar On Mon, Oct 5, 2020 at 10:14 PM Michael Lewis <[email protected]> wrote:What is relation 266775 of database 196511? Is it cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some system catalog table?When I search google for \"ExclusiveLock on extension of relation\" I find one thread about shared_buffers being very high but not big enough to fit the entire data in the cluster. 
How much ram, what is shared buffers and what is the total size of the database(s) on that Postgres instance?", "msg_date": "Mon, 5 Oct 2020 23:08:48 +0530", "msg_from": "Sushant Pawar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many waits on extension of relation" }, { "msg_contents": "Are you having locks where the type = extend?\n\nIf so, this is a symptom of slow insertsdue to many concurrent \nconnections trying to insert into the same table at the same time. Each \ninsert request may result in an extend lock (8k extension), which blocks \nother writers. What normally happens is that these extend locks happen \nso fast that you hardly ever see them in the*pg_locks*table, except in \nthe case where many concurrent connections are trying to do inserts into \nthe same table.\n\nRegards,\nMichael Vitale\n\nSushant Pawar wrote on 10/5/2020 1:38 PM:\n> We are also getting similar warning messages in the log file, for \n> Insert operation as it is blocking concurrent inserts on the same \n> table. As per the online documents, I have come across, suggest \n> is because the Postgres process takes time to search for the relevant \n> buffer in the shared_buffer area if shared_buffer is too big.\n>\n> In the highly transactional system, there may not be enough free \n> buffers to allocate for incoming transactions.  In our case allocated \n> shared buffer is 24GB and has RAM 120GB, not sure whether we can call \n> it too big but while querying pg_buffercache  has always given \n> indication that 12-13GB shared_buffers would be appropriate in our \n> case. I have used the below URL to evaluate the shared buffer sizing.\n>\n> https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/\n>\n>\n>\n> Best Regards,\n>\n> *Sushant Pawar *\n>\n>\n>\n> On Mon, Oct 5, 2020 at 10:14 PM Michael Lewis <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> What is relation 266775 of database 196511? Is\n> it cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some\n> system catalog table?\n>\n> When I search google for \"ExclusiveLock on extension of relation\"\n> I find one thread about shared_buffers being very high but not big\n> enough to fit the entire data in the cluster. How much ram, what\n> is shared buffers and what is the total size of the database(s) on\n> that Postgres instance?\n>\n\n\n\n\nAre you having locks where the type = extend?\n\nIf so, this is a symptom of slow inserts due to many concurrent connections trying to insert into the \nsame table at the same time. Each insert request may result in an extend\n lock (8k extension), which blocks other writers. What normally happens \nis that these extend locks happen so fast that you hardly ever see them \nin the pg_locks table, except in the \ncase where many concurrent connections are trying to do inserts into the\n same table.\n\nRegards,\nMichael Vitale \n\nSushant Pawar wrote on 10/5/2020 1:38 PM:\n\n\nWe are also getting similar warning messages \n\nin the log file, for Insert operation as it is blocking concurrent \ninserts on the same table. As per the online documents, I have come \nacross, suggest is because the Postgres process takes time to search for\n the relevant buffer in the shared_buffer area if shared_buffer is too \nbig.In the highly transactional system, there may \nnot be enough free buffers to allocate for incoming transactions.  
In \nour case allocated shared buffer is 24GB and has RAM 120GB, not sure \nwhether we can call it too big but while querying pg_buffercache  has \nalways given indication that 12-13GB shared_buffers would be appropriate\n in our case. I have used the below URL to evaluate the shared buffer \nsizing.https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/Best Regards,Sushant\n Pawar \n\nOn Mon, Oct\n 5, 2020 at 10:14 PM Michael Lewis <[email protected]>\n wrote:What is relation 266775 of database 196511? Is \nit cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item or some system \ncatalog table?When I search google for \"ExclusiveLock on \nextension of relation\" I find one thread about shared_buffers being very\n high but not big enough to fit the entire data in the cluster. How much\n ram, what is shared buffers and what is the total size of the \ndatabase(s) on that Postgres instance?", "msg_date": "Mon, 5 Oct 2020 14:03:05 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many waits on extension of relation" }, { "msg_contents": "On Mon, 2020-10-05 at 10:32 +0530, avinash varma wrote:\n> Can someone please guide me how to improve/reduce these wait events.\n> \n> Postgres Version:9.5\n> \n> LOG: process 3718 still waiting for ExclusiveLock on extension of relation 266775 of database 196511 after 1000.057 ms\n> Detail: Process holding the lock: 6423. Wait queue: 3718, 4600, 2670, 4046.\n> Context: SQL statement \"INSERT INTO cms_c207c1e2_0ce7_422c_aafb_77d43f61e563.cms_item [...]\n\nProcess 6423 is holding a lock on the table into which you'd like to INSERT\nthat blocks several other sessions.\n\nMake sure that the transaction in this database session ends, e.g. by\n\n SELECT pg_cancel_backend(6423);\n\nEither there is a session that did not close its transaction (coding bug),\nor a database statement ran inordinately long.\n\nYours,\nLaurenz Albe\n-- \n+43-670-6056265\nCYBERTEC PostgreSQL International GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Tue, 06 Oct 2020 05:37:10 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many waits on extension of relation" }, { "msg_contents": "Hi Michael,\n\nYes, All the locks are of type= extend.\nIs there a way where we can improve the performance of concurrent inserts\non the same table.\n\nThanks,\nAvinash\n\nHi Michael,Yes, All the locks are of type= extend. Is there a way where we can improve the performance of concurrent inserts on the same table.Thanks,Avinash", "msg_date": "Tue, 6 Oct 2020 09:29:08 +0530", "msg_from": "avinash varma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too many waits on extension of relation" } ]
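
To confirm that the blocking really is on relation extension locks, pg_locks can be queried directly; this is a generic catalog query, not specific to the poster's schema (on 9.5 there are no wait_event columns, so only the lock rows and the current query are shown):

-- Sessions holding or waiting for relation extension locks.
SELECT l.pid,
       l.granted,
       l.relation::regclass AS relation,
       a.query
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.pid = l.pid
WHERE l.locktype = 'extend';
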
[ { "msg_contents": "Hi there!\r\n\r\nIs there any way to use indexes with XMLTABLE-query to query XML type data?\r\nI've only managed to use text[] indexes with plain xpath queries. Is there any similar workaround for XMLTABLE type queries?\r\n\r\nCheers!\r\n\r\n * Anssi Kanninen, Helsinki, Finland\r\n\n\n\n\n\n\n\n\n\n\n\nHi there!\n \nIs there any way to use indexes with XMLTABLE-query to query XML type data?\nI've only managed to use text[] indexes with plain xpath queries. Is there any similar workaround for XMLTABLE type queries?\n \nCheers!\n\nAnssi Kanninen, Helsinki, Finland", "msg_date": "Fri, 9 Oct 2020 05:39:49 +0000", "msg_from": "Kanninen Anssi EXT <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing an XMLTABLE query?" }, { "msg_contents": "Hi\n\npá 9. 10. 2020 v 7:40 odesílatel Kanninen Anssi EXT <\[email protected]> napsal:\n\n> Hi there!\n>\n>\n>\n> Is there any way to use indexes with XMLTABLE-query to query XML type data?\n>\n> I've only managed to use text[] indexes with plain xpath queries. Is there\n> any similar workaround for XMLTABLE type queries?\n>\n\nXMLTABLE returning set of records, and for these functions there are not\nfunctional indexes\n\nRegards\n\nPavel\n\n\n\n>\n> Cheers!\n>\n> - Anssi Kanninen, Helsinki, Finland\n>\n>\n\nHipá 9. 10. 2020 v 7:40 odesílatel Kanninen Anssi EXT <[email protected]> napsal:\n\n\n\n\nHi there!\n \nIs there any way to use indexes with XMLTABLE-query to query XML type data?\nI've only managed to use text[] indexes with plain xpath queries. Is there any similar workaround for XMLTABLE type queries?XMLTABLE returning set of records, and for these functions there are not functional indexesRegardsPavel\n \nCheers!\n\nAnssi Kanninen, Helsinki, Finland", "msg_date": "Fri, 9 Oct 2020 07:59:04 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing an XMLTABLE query?" } ]
[ { "msg_contents": "Hi all,\n\nWe've been struggling with a slow query! -- and it's been exploding as rows\nhave been added to relevant tables. It seems like a fairly common workflow,\nso we think we're overlooking the potential for an index or rewriting the\nquery.\n\nI've linked a document compiling the information as per the Postgresql\nrecommendation for Slow Query Questions. Here's the link:\nhttps://docs.google.com/document/d/10qO5jkQNVtKw2Af1gcKAKiNw7tYFNQruzOQrUYXd4hk/edit?usp=sharing\n(we've enabled commenting)\n\nHere's a high-level summary of the issue:\n______\n\nWe’re trying to show a list of active conversations. Each conversation\n(named a spool in the database) has multiple threads, kind of like Slack\nchannels. And the messages are stored in each thread. We want to return the\n30 most recent conversations with recency determined as the most recent\nmessage in any thread of the conversation you are a participant of (you may\nnot be a participant of certain threads in a conversation so it’s important\nthose don’t leak sensitive data).\n\nWe found that as the number of threads increases, the query slowed down\ndramatically. We think the issue has to do with the fact that there is no\neasy way to go from a thread you are a participant to its most recent\nmessage, however, it is possible the issue is elsewhere. We’ve provided the\nfull query and a simplified query of where we think the issue is, along\nwith the EXPLAIN ANALYZE BUFFERS result. We figure this is not exactly an\nuncommon use case, so it’s likely that we are overlooking the potential for\nsome missing indices or a better way to write the query. We appreciate the\nhelp and any advice!\n\n______\n\nWe'd really appreciate any help and advice!\n\nBest,\nParth\n\nHi all,We've been struggling with a slow query! -- and it's been exploding as rows have been added to relevant tables. It seems like a fairly common workflow, so we think we're overlooking the potential for an index or rewriting the query.I've linked a document compiling the information as per the Postgresql recommendation for Slow Query Questions. Here's the link: https://docs.google.com/document/d/10qO5jkQNVtKw2Af1gcKAKiNw7tYFNQruzOQrUYXd4hk/edit?usp=sharing (we've enabled commenting)Here's a high-level summary of the issue:______We’re trying to show a list of active conversations. Each conversation (named a spool in the database) has multiple threads, kind of like Slack channels. And the messages are stored in each thread. We want to return the 30 most recent conversations with recency determined as the most recent message in any thread of the conversation you are a participant of (you may not be a participant of certain threads in a conversation so it’s important those don’t leak sensitive data).We found that as the number of threads increases, the query slowed down dramatically. We think the issue has to do with the fact that there is no easy way to go from a thread you are a participant to its most recent message, however, it is possible the issue is elsewhere. We’ve provided the full query and a simplified query of where we think the issue is, along with the EXPLAIN ANALYZE BUFFERS result. We figure this is not exactly an uncommon use case, so it’s likely that we are overlooking the potential for some missing indices or a better way to write the query. 
We appreciate the help and any advice!______We'd really appreciate any help and advice!Best,Parth", "msg_date": "Wed, 14 Oct 2020 13:30:41 -0400", "msg_from": "Parth Shah <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Query" }, { "msg_contents": "Is there no index on thread.spool? What about notification.user? How about\nmessage.time (without thread as a leading column). Those would all seem\nvery significant. Your row counts are very low to have a query perform so\nbadly. Work_mem could probably be increased above 4MB, but it isn't hurting\nthis query in particular.\n\nMy primary concern is that the query is rather chaotic at a glance. It\nwould be great to re-write and remove the unneeded keywords, double quotes,\ntotally worthless parentheses, etc. Something like the below may help you\nsee the crux of the query and what could be done and understand how many\nrows might be coming out of those subqueries. I re-ordered some joins and\nthere might be syntax errors, but give it a shot once you've added the\nindexes suggested above.\n\nSELECT\n\nspool.id,\n\nhandle.handle,\n\nspool.name,\n\nthread.id,\n\ncase.closed,\n\nnotification.read,\n\nnotification2.time,\n\nmessage.message,\n\nmessage.time,\n\nmessage.author,\n\nthread.name,\n\nlocation.geo\n\nFROM\n\nspool\n\nJOIN handle ON handle.id = spool.id\n\nJOIN thread ON thread.spool = spool.id\n\nJOIN message ON message.thread = thread.id\n\nLEFT JOIN location ON location.id = spool.location\n\nLEFT JOIN case ON case.id = spool.id\n\nLEFT JOIN notification ON notification.user =\n'b16690e4-a3c5-4868-945e-c2458c27a525'\n\nAND\n\nnotification.id = (\n\nSELECT\n\nnotification3.id\n\nFROM\n\nnotification AS notification3\n\nJOIN notification_thread ON notification_thread.id = notification3.id\n\nJOIN thread AS thread2 ON thread2.id = notification_thread.thread\n\nWHERE\n\nthread2.spool = spool.id\n\nAND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n\nAND notification3.time <= '2020-09-30 16:32:38.054558'\n\nORDER BY\n\nnotification3.time DESC\n\nLIMIT 1\n\n)\n\nLEFT JOIN notification AS notification2 ON notification2.user =\n'b16690e4-a3c5-4868-945e-c2458c27a525'\n\nAND notification2.id = (\n\nSELECT\n\nnotification3.id\n\nFROM\n\nnotification AS notification3\n\nJOIN notification_thread ON notification_thread.id = notification3.id\n\nJOIN thread AS thread2 ON thread2.id = notification_thread.thread\n\nWHERE\n\nthread2.spool = spool.id\n\nAND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n\nAND notification3.time > '2020-09-30 16:32:38.054558'\n\nORDER BY\n\nnotification3.time DESC\n\nLIMIT 1\n\n)\n\nWHERE\n\nmessage.time = (\n\nSELECT\n\nMAX ( message2.time )\n\nFROM\n\nmessage AS message2\n\nJOIN thread AS thread2 ON thread2.id = message2.thread\n\nJOIN participant ON participant.thread = thread2.id\n\nJOIN identity ON identity.id = participant.identity\n\nLEFT JOIN relation ON relation.to = identity.id\n\nAND relation.from = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n\nAND relation.manages = TRUE\n\nWHERE\n\nNOT message2.draft\n\nAND ( identity.id = 'b16690e4-a3c5-4868-945e-c2458c27a525' OR NOT\nrelation.to IS NULL )\n\nAND thread2.spool = spool.id\n\nLIMIT 1\n\n)\n\nAND notification.id IS NOT NULL\n\nORDER BY\n\nmessage.time DESC\n\nLIMIT 31;\n\nIs there no index on thread.spool? What about notification.user? How about message.time (without thread as a leading column). Those would all seem very significant. Your row counts are very low to have a query perform so badly. 
Work_mem could probably be increased above 4MB, but it isn't hurting this query in particular.My primary concern is that the query is rather chaotic at a glance. It would be great to re-write and remove the unneeded keywords, double quotes, totally worthless parentheses, etc. Something like the below may help you see the crux of the query and what could be done and understand how many rows might be coming out of those subqueries. I re-ordered some joins and there might be syntax errors, but give it a shot once you've added the indexes suggested above.\nSELECT\n spool.id,\n handle.handle,\n spool.name,\n thread.id,\n case.closed,\n notification.read,\n notification2.time,\n message.message,\n message.time,\n message.author,\n thread.name,\n location.geo \nFROM\n spool\n JOIN handle ON handle.id = spool.id\n JOIN thread ON thread.spool = spool.id\n JOIN message ON message.thread = thread.id\n LEFT JOIN location ON location.id = spool.location\n LEFT JOIN case ON case.id = spool.id \n LEFT JOIN notification ON notification.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n AND \n notification.id = (\n SELECT\n notification3.id \n FROM\n notification AS notification3\n JOIN notification_thread ON notification_thread.id = notification3.id\n JOIN thread AS thread2 ON thread2.id = notification_thread.thread \n WHERE\n thread2.spool = spool.id\n AND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n AND notification3.time <= '2020-09-30 16:32:38.054558'\n ORDER BY\n notification3.time DESC \n LIMIT 1 \n )\n LEFT JOIN notification AS notification2 ON notification2.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n AND notification2.id = (\n SELECT\n notification3.id \n FROM\n notification AS notification3\n JOIN notification_thread ON notification_thread.id = notification3.id\n JOIN thread AS thread2 ON thread2.id = notification_thread.thread \n WHERE\n thread2.spool = spool.id\n AND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n AND notification3.time > '2020-09-30 16:32:38.054558'\n ORDER BY\n notification3.time DESC \n LIMIT 1 \n ) \nWHERE\n message.time = (\n SELECT \n MAX ( message2.time ) \n FROM\n message AS message2\n JOIN thread AS thread2 ON thread2.id = message2.thread\n JOIN participant ON participant.thread = thread2.id\n JOIN identity ON identity.id = participant.identity\n LEFT JOIN relation ON relation.to = identity.id\n AND relation.from = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n AND relation.manages = TRUE\n WHERE\n NOT message2.draft \n AND ( identity.id = 'b16690e4-a3c5-4868-945e-c2458c27a525' OR NOT relation.to IS NULL )\n AND thread2.spool = spool.id\n LIMIT 1\n ) \n AND notification.id IS NOT NULL\nORDER BY\n message.time DESC \nLIMIT 31;", "msg_date": "Wed, 14 Oct 2020 13:17:51 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Hi all,\n\nThanks, Michael (and Martin other thread)! We added those indexes you\nsuggested, and went ahead and added indexes for all our foreign keys. We\nalso added one combination index on notification (user, time). It led to a\nsmall constant factor speed up (2x) but is still taking a 13+ seconds. :(\nStill seems aggressively bad.\n\nI've attached the updated, cleaned up query and explain analyze result (the\nextra chaos was due to the fact that we're using\nhttps://hackage.haskell.org/package/esqueleto-3.2.3/docs/Database-Esqueleto.html\nto\ngenerate the SQL). 
Maybe we're missing some multi-column indexes?\n\nBest,\nParth\n\nOn Wed, Oct 14, 2020 at 3:18 PM Michael Lewis <[email protected]> wrote:\n\n> Is there no index on thread.spool? What about notification.user? How about\n> message.time (without thread as a leading column). Those would all seem\n> very significant. Your row counts are very low to have a query perform so\n> badly. Work_mem could probably be increased above 4MB, but it isn't hurting\n> this query in particular.\n>\n> My primary concern is that the query is rather chaotic at a glance. It\n> would be great to re-write and remove the unneeded keywords, double quotes,\n> totally worthless parentheses, etc. Something like the below may help you\n> see the crux of the query and what could be done and understand how many\n> rows might be coming out of those subqueries. I re-ordered some joins and\n> there might be syntax errors, but give it a shot once you've added the\n> indexes suggested above.\n>\n> SELECT\n>\n> spool.id,\n>\n> handle.handle,\n>\n> spool.name,\n>\n> thread.id,\n>\n> case.closed,\n>\n> notification.read,\n>\n> notification2.time,\n>\n> message.message,\n>\n> message.time,\n>\n> message.author,\n>\n> thread.name,\n>\n> location.geo\n>\n> FROM\n>\n> spool\n>\n> JOIN handle ON handle.id = spool.id\n>\n> JOIN thread ON thread.spool = spool.id\n>\n> JOIN message ON message.thread = thread.id\n>\n> LEFT JOIN location ON location.id = spool.location\n>\n> LEFT JOIN case ON case.id = spool.id\n>\n> LEFT JOIN notification ON notification.user =\n> 'b16690e4-a3c5-4868-945e-c2458c27a525'\n>\n> AND\n>\n> notification.id = (\n>\n> SELECT\n>\n> notification3.id\n>\n> FROM\n>\n> notification AS notification3\n>\n> JOIN notification_thread ON notification_thread.id = notification3.id\n>\n> JOIN thread AS thread2 ON thread2.id = notification_thread.thread\n>\n> WHERE\n>\n> thread2.spool = spool.id\n>\n> AND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n>\n> AND notification3.time <= '2020-09-30 16:32:38.054558'\n>\n> ORDER BY\n>\n> notification3.time DESC\n>\n> LIMIT 1\n>\n> )\n>\n> LEFT JOIN notification AS notification2 ON notification2.user =\n> 'b16690e4-a3c5-4868-945e-c2458c27a525'\n>\n> AND notification2.id = (\n>\n> SELECT\n>\n> notification3.id\n>\n> FROM\n>\n> notification AS notification3\n>\n> JOIN notification_thread ON notification_thread.id = notification3.id\n>\n> JOIN thread AS thread2 ON thread2.id = notification_thread.thread\n>\n> WHERE\n>\n> thread2.spool = spool.id\n>\n> AND notification3.user = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n>\n> AND notification3.time > '2020-09-30 16:32:38.054558'\n>\n> ORDER BY\n>\n> notification3.time DESC\n>\n> LIMIT 1\n>\n> )\n>\n> WHERE\n>\n> message.time = (\n>\n> SELECT\n>\n> MAX ( message2.time )\n>\n> FROM\n>\n> message AS message2\n>\n> JOIN thread AS thread2 ON thread2.id = message2.thread\n>\n> JOIN participant ON participant.thread = thread2.id\n>\n> JOIN identity ON identity.id = participant.identity\n>\n> LEFT JOIN relation ON relation.to = identity.id\n>\n> AND relation.from = 'b16690e4-a3c5-4868-945e-c2458c27a525'\n>\n> AND relation.manages = TRUE\n>\n> WHERE\n>\n> NOT message2.draft\n>\n> AND ( identity.id = 'b16690e4-a3c5-4868-945e-c2458c27a525' OR NOT\n> relation.to IS NULL )\n>\n> AND thread2.spool = spool.id\n>\n> LIMIT 1\n>\n> )\n>\n> AND notification.id IS NOT NULL\n>\n> ORDER BY\n>\n> message.time DESC\n>\n> LIMIT 31;\n>", "msg_date": "Wed, 14 Oct 2020 21:11:29 -0400", "msg_from": "Parth Shah <[email protected]>", "msg_from_op": true, 
"msg_subject": "Re: Slow Query" }, { "msg_contents": "Based on the execution plan, it looks like the part that takes 13 seconds\nof the total 14.4 seconds is just calculating the max time used in the\nwhere clause. Anytime I see an OR involved in a plan gone off the rails, I\nalways always check if re-writing the query some other way may be faster.\nHow's the plan for something like this?\n\n\nWHERE message.time = greatest( *sub1.time*, *sub2.time* )\n\n/* sub1.time */\n(\nselect\nMAX ( message2.time )\nFROM\nmessage AS message2\nJOIN thread AS thread2 ON thread2.id = message2.thread\nJOIN participant ON participant.thread = thread2.id\nWHERE\nNOT message2.draft\nAND participant.identity = 'b16690e4-a3c5-4868-945e-c2458c27a525'\nAND thread2.spool = spool.id\n)\n\n/* sub2.time */\n(\nselect\nMAX ( message2.time )\nFROM\nmessage AS message2\nJOIN thread AS thread2 ON thread2.id = message2.thread\nJOIN participant ON participant.thread = thread2.id\nJOIN relation ON relation.to = participant.identity\nAND relation.from = 'b16690e4-a3c5-4868-945e-c2458c27a525'\nAND relation.manages = TRUE\nWHERE\nNOT message2.draft\nAND thread2.spool = spool.id\n)\n\n>\n\nBased on the execution plan, it looks like the part that takes 13 seconds of the total 14.4 seconds is just calculating the max time used in the where clause. Anytime I see an OR involved in a plan gone off the rails, I always always check if re-writing the query some other way may be faster. How's the plan for something like this?WHERE message.time = greatest( sub1.time, sub2.time )/* sub1.time */(selectMAX ( message2.time )FROMmessage AS message2JOIN thread AS thread2 ON thread2.id = message2.threadJOIN participant ON participant.thread = thread2.idWHERENOT message2.draftAND participant.identity = 'b16690e4-a3c5-4868-945e-c2458c27a525'AND thread2.spool = spool.id)/* sub2.time */(selectMAX ( message2.time )FROMmessage AS message2JOIN thread AS thread2 ON thread2.id = message2.threadJOIN participant ON participant.thread = thread2.idJOIN relation ON relation.to = participant.identityAND relation.from = 'b16690e4-a3c5-4868-945e-c2458c27a525'AND relation.manages = TRUEWHERENOT message2.draftAND thread2.spool = spool.id)", "msg_date": "Wed, 14 Oct 2020 22:51:47 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" } ]
[ { "msg_contents": "We have a large Django application running against a Postgresql database.\n\nThe test suite for the application runs in GitLab-CI using Docker\nCompose to run the unit tests inside the application container against\na Postgresql database running in another container.\n\nWhen Django runs the unit tests for the application, it starts by\ncreating a new test database and then it runs the database migrations\nto create all the tables and the necessary reference data.Then for\neach test it opens a transaction, runs the test and then then rolls\nback the transaction. This ensures that the database is \"clean\" before\neach test run and reduces the risk that data created by one test will\ncause a different test to fail. Consequently this means that almost\nall the tables in the test database have zero or very few rows in\nthem. It also means that the statistics for the tables in the test\ndatabase are pretty meaningless. The statistics, if they exist, will\nprobably say there are zero rows, and the query will actually be\ndealing with 0 - 10 rows that are visible in the transaction, but\nwhich will be rolled back.\n\nWhen we run the test suite using Postgresql 10.7 in a Docker container\nwe consistently get:\n\nRan 1166 tests in 1291.855s\n\nWhen we first started running the same test suite against Postgresql\n12.4 we got:\n\nRan 1166 tests in 8502.030s\n\nI think that this reduction in performance is caused by the lack of\naccurate statistics because we had a similar problem (a large\nreduction in performance) in a load test that we used which we cured\nby running ANALYZE after creating the test data and before running the\nload test. The load test is using the same Django application code,\nbut creates a \"large amount\" of test data (in the 100s to 1000s of\nrows per table - it is looking for N+1 query problems rather than\nabsolute performance\".\n\nWe have since managed to get the performance of the test run using\n12.4 back to approximately the normal range by customizing the\nPostgresql parameters. `seq_page_cost=0.1` and `random_page_cost=0.11`\nseem to be key, but we are also setting `shared_buffers`, etc. and all\nthe other typical parameters. With Postgresql 10.7 we weren't setting\nanything and performance was fine using just the defaults, given the\ntiny data volumes.\n\nHowever, even though we have similar performance for 12.4 for most\ntest runs, it remains very variable. 
About 30% of the time we get\nsomething like:\n\nRan 1166 tests in 5362.690s.\n\nWe also see similar performance reductions and inconsistent results\nwith 11.9, so whatever change is causing the problem was likely\nintroduced in 11 rather than in 12.\n\nI think we have narrowed down the problem to a single, very complex,\nmaterialized view using CTEs; the unit tests create the test data and\nthen refresh the materialized view before executing the actual test\ncode.\n\nDatabase logging using autoexplain shows things like:\n\ndb_1 | 2020-10-14 10:27:59.692 UTC [255] LOG: duration:\n4134.625 ms plan:\ndb_1 | Query Text: REFRESH MATERIALIZED VIEW\nprice_marketpricefacts_materialized\ndb_1 | Merge Join\n(cost=14141048331504.30..9635143213364288.00 rows=116618175994107184\nwidth=3302) (actual time=4134.245..4134.403 rows=36 loops=1)\n\nFor comparison, the equivalent query on 10.7 has:\n\ndb_1 | 2020-10-15 03:28:58.382 UTC [163] LOG: duration:\n10.500 ms plan:\ndb_1 | Query Text: REFRESH MATERIALIZED VIEW\nprice_marketpricefacts_materialized\ndb_1 | Hash Left Join (cost=467650.55..508612.80\nrows=199494 width=3302) (actual time=10.281..10.341 rows=40 loops=1)\n\nThe staggering cost implies that the statistics are badly wrong, but\ngiven how few rows are in the result (36, and it's not an aggregate) I\nwould expect the query to be fast regardless of what the plan is. In\n10.7 the materialized view refreshes in 150 ms or\n\nI also don't understand why the performance would be so inconsistent\nacross test runs for 12.4 but not for 10.7. It is as though sometimes\nit gets a good plan and sometimes it doesn't.\n\nI can get performance almost identical to 10.7 by altering the unit\ntests so that in each test that refreshes the materialized view prior\nto executing the query, we execute `ANALYZE;` prior to refreshing the\nview.\n\nIs it worth us trying to debug the plan for situations with low row\ncounts and poor statistics? Or is this use case not really covered:\nthe general advice is obviously to make sure that statistics are up to\ndate before troubleshooting performance problems. On the other hand,\nit is not easy for us to make sure that we run analyze inside the\ntransaction in each unit test; it also seems a bit wasteful.\n\nOpinions and advice gratefully received.\n\nRoger\n\n\n", "msg_date": "Thu, 15 Oct 2020 01:21:50 -0400", "msg_from": "Roger Hunwicks <[email protected]>", "msg_from_op": true, "msg_subject": "Poor Performance running Django unit tests after upgrading from 10.6" }, { "msg_contents": "On Thu, 2020-10-15 at 01:21 -0400, Roger Hunwicks wrote:\n> We have a large Django application running against a Postgresql database.\n> \n> When we run the test suite using Postgresql 10.7 in a Docker container\n> we consistently get:\n> \n> Ran 1166 tests in 1291.855s\n> \n> When we first started running the same test suite against Postgresql\n> 12.4 we got:\n> \n> Ran 1166 tests in 8502.030s\n> \n> I think that this reduction in performance is caused by the lack of\n> accurate statistics [...]\n> \n> We have since managed to get the performance of the test run using\n> 12.4 back to approximately the normal range by customizing the\n> Postgresql parameters. `seq_page_cost=0.1` and `random_page_cost=0.11`\n> seem to be key, but we are also setting `shared_buffers`, etc. and all\n> the other typical parameters. 
With Postgresql 10.7 we weren't setting\n> anything and performance was fine using just the defaults, given the\n> tiny data volumes.\n> \n> However, even though we have similar performance for 12.4 for most\n> test runs, it remains very variable. About 30% of the time we get\n> something like:\n> \n> I think we have narrowed down the problem to a single, very complex,\n> materialized view using CTEs; the unit tests create the test data and\n> then refresh the materialized view before executing the actual test\n> code.\n> \n> Database logging using autoexplain shows things like:\n> \n> db_1 | 2020-10-14 10:27:59.692 UTC [255] LOG: duration:\n> 4134.625 ms plan:\n> db_1 | Query Text: REFRESH MATERIALIZED VIEW\n> price_marketpricefacts_materialized\n> db_1 | Merge Join\n> (cost=14141048331504.30..9635143213364288.00 rows=116618175994107184\n> width=3302) (actual time=4134.245..4134.403 rows=36 loops=1)\n> \n> For comparison, the equivalent query on 10.7 has:\n> \n> db_1 | 2020-10-15 03:28:58.382 UTC [163] LOG: duration:\n> 10.500 ms plan:\n> db_1 | Query Text: REFRESH MATERIALIZED VIEW\n> price_marketpricefacts_materialized\n> db_1 | Hash Left Join (cost=467650.55..508612.80\n> rows=199494 width=3302) (actual time=10.281..10.341 rows=40 loops=1)\n> \n> I can get performance almost identical to 10.7 by altering the unit\n> tests so that in each test that refreshes the materialized view prior\n> to executing the query, we execute `ANALYZE;` prior to refreshing the\n> view.\n> \n> Is it worth us trying to debug the plan for situations with low row\n> counts and poor statistics? Or is this use case not really covered:\n> the general advice is obviously to make sure that statistics are up to\n> date before troubleshooting performance problems. On the other hand,\n> it is not easy for us to make sure that we run analyze inside the\n> transaction in each unit test; it also seems a bit wasteful.\n> \n> Opinions and advice gratefully received.\n\nYes, the query plan for the query that defines the materialized view\nis the interesting data point. Run an EXPLAIN (ANALYZE, BUFFERS) on\nthat query.\n\nIf your statistics are off because the data have just been imported a\nsecond ago, run an explicit ANALYZE on the affected tables after import.\n\nIf your statistics are off because they are not calculated often enough,\nconsider lowering \"autovacuum_analyze_scale_factor\".\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Thu, 15 Oct 2020 08:56:47 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance running Django unit tests after upgrading from\n 10.6" }, { "msg_contents": "\nOn 10/15/20 1:21 AM, Roger Hunwicks wrote:\n>\n> I think we have narrowed down the problem to a single, very complex,\n> materialized view using CTEs; the unit tests create the test data and\n> then refresh the materialized view before executing the actual test\n> code.\n>\n\n\nHave you checked to see if the CTE query is affected by the change to\nhow CTEs are run in release 12?\n\n\nThe release notes say:\n\n Allow common table expressions (CTEs) to be inlined into the outer\n query (Andreas Karlsson, Andrew Gierth, David Fetter, Tom Lane)\n\n Specifically, CTEs are automatically inlined if they have no\n side-effects, are not recursive, and are referenced only once in the\n query. 
Inlining can be prevented by specifying MATERIALIZED, or\n forced for multiply-referenced CTEs by specifying NOT MATERIALIZED.\n Previously, CTEs were never inlined and were always evaluated before\n the rest of the query.\n\nSo if you haven't already, start by putting MATERIALIZED before each CTE\nclause:\n\n with foo as MATERIALIZED (select ...),\n\n bar as MATERIALIZED  (select ...),\n\n ...\n\nand see if that changes anything.\n\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 15 Oct 2020 06:59:39 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance running Django unit tests after upgrading from\n 10.6" }, { "msg_contents": "Roger Hunwicks <[email protected]> writes:\n> ...\n> However, even though we have similar performance for 12.4 for most\n> test runs, it remains very variable.\n> ...\n> I think we have narrowed down the problem to a single, very complex,\n> materialized view using CTEs; the unit tests create the test data and\n> then refresh the materialized view before executing the actual test\n> code.\n\nIn addition to others' nearby comments, I'd suggest that running all this\nunder auto_explain would be informative. You evidently are not getting a\nstable plan for your troublesome query, so you need to see what the range\nof plans is, not just probe it once with a manual EXPLAIN.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Oct 2020 09:56:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance running Django unit tests after upgrading from\n 10.6" } ]
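A hedged sketch pulling together the suggestions made in the Django thread above: auto_explain to capture the unstable plans, an explicit ANALYZE after the test data is created, and a lower autovacuum_analyze_scale_factor. The materialized view name comes from the thread; the numeric values and the placeholder table name are illustrative assumptions only:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 100;  -- milliseconds; illustrative threshold
SET auto_explain.log_analyze = on;        -- include actual row counts and timings in logged plans

-- after creating test data, refresh statistics before touching the view
ANALYZE;
REFRESH MATERIALIZED VIEW price_marketpricefacts_materialized;

-- make autoanalyze fire sooner on a table feeding the view (placeholder name)
ALTER TABLE some_source_table SET (autovacuum_analyze_scale_factor = 0.02);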
[ { "msg_contents": "Hi,\nBelow query always shows up on top in the CPU matrix. Also despite having\nindexes it does sequential scans(probably because WHERE condition satisfies\nalmost all of the data from table). This query runs on the default landing\npage in application and needs to fetch records in less that 100 ms without\nconsuming too much CPU.\n\n Any opinions? Table is very huge and due to referential identity and\nbusiness requirements we could not implement partitioning as well.\n\nThere is index on (countrycode,facilitycode,jobstartdatetime)\n\nexplain (analyze,buffers) with JobCount as ( select jobstatuscode,count(1)\nstat_count from job j where 1=1 and j.countrycode = 'TH' and\nj.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand ((j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n00:00:00' ) or j.jobstartdatetime IS NULL ) group by j.jobstatuscode)\n select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc\nright outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------\n Hash Right Join (cost=98845.93..98846.10 rows=10 width=12) (actual\ntime=1314.809..1314.849 rows=10 loops=1)\n Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n Buffers: shared hit=21314 read=3231\n I/O Timings: read=19.867\n CTE jobcount\n -> Finalize GroupAggregate (cost=98842.93..98844.71 rows=7 width=12)\n(actual time=1314.780..1314.802 rows=6 loops=1)\n Group Key: j.jobstatuscode\n Buffers: shared hit=21313 read=3231\n I/O Timings: read=19.867\n -> Gather Merge (cost=98842.93..98844.57 rows=14 width=12)\n(actual time=1314.766..1314.857 rows=18 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=60102 read=11834\n I/O Timings: read=59.194\n -> Sort (cost=97842.91..97842.93 rows=7 width=12)\n(actual time=1305.044..1305.047 rows=6 loops=3)\n Sort Key: j.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort Memory: 25kB\n Worker 1: Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=60102 read=11834\n I/O Timings: read=59.194\n -> Partial HashAggregate (cost=97842.74..97842.81\nrows=7 width=12) (actual time=1305.010..1305.013 rows=6 loops=3)\n Group Key: j.jobstatuscode\n Buffers: shared hit=60086 read=11834\n I/O Timings: read=59.194\n -> Parallel Seq Scan on job j\n(cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434\nrows=163200 loops=3)\n Filter: (((countrycode)::text =\n'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp\nwithout time zone) AND (jobst\nartdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR\n(jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Rows Removed by Filter: 449035\n Buffers: shared hit=60086 read=11834\n I/O Timings: read=59.194\n -> CTE Scan on jobcount jc (cost=0.00..0.14 rows=7 width=24) (actual\ntime=1314.784..1314.811 rows=6 loops=1)\n Buffers: shared hit=21313 read=3231\n I/O Timings: read=19.867\n 
   -> Hash (cost=1.10..1.10 rows=10 width=4) (actual time=0.014..0.015\nrows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on jobstatus js (cost=0.00..1.10 rows=10 width=4)\n(actual time=0.005..0.008 rows=10 loops=1)\n Buffers: shared hit=1\n Planning Time: 0.949 ms\n Execution Time: 1314.993 ms\n(40 rows)\n\nRegards,\nAditya.", "msg_date": "Thu, 15 Oct 2020 20:34:54 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "What version by the way? Do you get a faster execution if you disable\nsequential scan? Or set parallel workers per gather to 0? Your estimates\nlook decent as do cache hits, so other than caching data or upgrading\nhardware, not sure what else there is to be done.\n\nAlthough... you are hitting 70k blocks to read only 612k rows? Are these\njob records very wide perhaps, or do you need to do some vacuuming? Perhaps\nautovacuum is not keeping up and you could use some repacking or vacuum\nfull if/when you can afford downtime. If you create a temp table copy of\nthe job table, how does the size compare to the live table?", "msg_date": "Thu, 15 Oct 2020 10:26:35 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "On Thu, 2020-10-15 at 20:34 +0530, aditya desai wrote:\n> Below query always shows up on top in the CPU matrix. Also despite having indexes it does sequential scans\n> (probably because WHERE condition satisfies almost all of the data from table). This query\n> runs on the default landing page in application and needs to fetch records in less that 100 ms\n> without consuming too much CPU.\n> \n> Any opinions? 
Table is very huge and due to referential identity and business requirements we could not\n> implement partitioning as well.\n> \n> There is index on (countrycode,facilitycode,jobstartdatetime)\n> \n> explain (analyze,buffers) with JobCount as ( select jobstatuscode,count(1) stat_count from job j\n> where 1=1 and j.countrycode = 'TH'\n> and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and ((j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30 00:00:00' ) or j.jobstartdatetime IS NULL ) group by j.jobstatuscode)\n> select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n> \n> QUERY PLAN\n> \n> Hash Right Join (cost=98845.93..98846.10 rows=10 width=12) (actual time=1314.809..1314.849 rows=10 loops=1)\n> -> Parallel Seq Scan on job j (cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434 rows=163200 loops=3)\n> Filter: (((countrycode)::text = 'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp without time zone) AND (jobst\n> artdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR (jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n> ,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Rows Removed by Filter: 449035\n> Buffers: shared hit=60086 read=11834\n> I/O Timings: read=59.194\n> \n\nYou should rewrite the subquery as a UNION to avoid the OR:\n\n ... WHERE j.countrycode = 'TH'\n and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30 00:00:00'\n\nand\n\n ... WHERE j.countrycode = 'TH'\n and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n and j.jobstartdatetime IS NULL\n\nThese indexes could speed up the resulting query:\n\n CREATE INDEX ON job (countrycode, facilitycode);\n CREATE INDEX ON job (countrycode, jobstartdatetime);\n CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdaytime IS NULL;\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Fri, 16 Oct 2020 10:36:10 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Reply to the group, not just me please. Btw, when you do reply to the\ngroup, it is best practice on these lists to reply in-line and not just\nreply on top with all prior messages quoted.\n\nOn Sun, Oct 18, 2020 at 3:23 AM aditya desai <[email protected]> wrote:\n\n> I tried vacuum full and execution time came down to half.\n>\nGreat to hear.\n\n\n> However, it still consumes CPU. Setting parallel workers per gather to 0\n> did not help much.\n>\nYou didn't answer all of my questions, particularly about disabling\nsequential scan. If you still have the default random_page_cost of 4, it\nmight be that 1.5 allows better estimates for cost on index (random) vs\nsequential scan of a table.\n\nLaurenz is a brilliant guy. I would implement the indexes he suggests if\nyou don't have them already and report back. 
If the indexes don't get used,\ntry set enable_seqscan = false; before the query and if it is way faster,\nthen reduce random_page_cost to maybe 1-2 depending how your overall cache\nhit ratio is across the system.\n\n\n> Auto vacuuming is catching up just fine. No issues in that area.\n>\nIf the time came down by half after 'vacuum full', I would question that\nstatement.\n\n\n> Temp table size is less that original tables without indexes.\n>\nSignificantly less would indicate the regular table still being bloated I\nthink. Maybe someone else will suggest otherwise.\n\n\n> Does this mean we need to upgrade the hardware? Also by caching data , do\n> you mean caching at application side(microservices side) ? Or on postgres\n> side? I tried pg_prewarm, it did not help much.\n>\nI can't say about hardware. Until you have exhausted options like configs\nand indexing, spending more money forever onwards seems premature. I meant\npre-aggregated data, wherever it makes sense to do that. I wouldn't expect\npg_prewarm to do a ton since you already show high cache hits.\n\n\n> It is actually the CPU consumption which is the issue. Query is fast\n> otherwise.\n>\nSure, but that is a symptom of reading and processing a lot of data.", "msg_date": "Mon, 19 Oct 2020 10:20:12 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Hi Michael,\nWill follow standard practice going forward. 
I would implement the indexes he suggests if you don't have them already and report back. If the indexes don't get used, try set enable_seqscan = false; before the query and if it is way faster, then reduce random_page_cost to maybe 1-2 depending how your overall cache hit ratio is across the system. Auto vacuuming is catching up just fine. No issues in that area.If the time came down by half after 'vacuum full', I would question that statement.  Temp table size is less that original tables without indexes.Significantly less would indicate the regular table still being bloated I think. Maybe someone else will suggest otherwise. Does this mean we need to upgrade the hardware? Also by caching data , do you mean caching at application side(microservices side) ? Or on postgres side? I tried pg_prewarm, it did not help much.I can't say about hardware. Until you have exhausted options like configs and indexing, spending more money forever onwards seems premature. I meant pre-aggregated data, wherever it makes sense to do that. I wouldn't expect pg_prewarm to do a ton since you already show high cache hits. It is actually the CPU consumption which is the issue. Query is fast otherwise.Sure, but that is a symptom of reading and processing a lot of data.", "msg_date": "Tue, 20 Oct 2020 12:47:45 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Hi Laurenz,\nI created\n\nOn Fri, Oct 16, 2020 at 2:06 PM Laurenz Albe <[email protected]>\nwrote:\n\n> On Thu, 2020-10-15 at 20:34 +0530, aditya desai wrote:\n> > Below query always shows up on top in the CPU matrix. Also despite\n> having indexes it does sequential scans\n> > (probably because WHERE condition satisfies almost all of the data from\n> table). This query\n> > runs on the default landing page in application and needs to fetch\n> records in less that 100 ms\n> > without consuming too much CPU.\n> >\n> > Any opinions? 
Table is very huge and due to referential identity and\n> business requirements we could not\n> > implement partitioning as well.\n> >\n> > There is index on (countrycode,facilitycode,jobstartdatetime)\n> >\n> > explain (analyze,buffers) with JobCount as ( select\n> jobstatuscode,count(1) stat_count from job j\n> > where 1=1 and j.countrycode = 'TH'\n> > and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> > and ((j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n> 00:00:00' ) or j.jobstartdatetime IS NULL ) group by j.jobstatuscode)\n> > select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount\n> jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n> >\n> > QUERY PLAN\n> >\n> > Hash Right Join (cost=98845.93..98846.10 rows=10 width=12) (actual\n> time=1314.809..1314.849 rows=10 loops=1)\n> > -> Parallel Seq Scan on job j\n> (cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434\n> rows=163200 loops=3)\n> > Filter: (((countrycode)::text =\n> 'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp\n> without time zone) AND (jobst\n> > artdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR\n> (jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n> > ,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> > Rows Removed by Filter: 449035\n> > Buffers: shared hit=60086 read=11834\n> > I/O Timings: read=59.194\n> >\n>\n> You should rewrite the subquery as a UNION to avoid the OR:\n>\n> ... WHERE j.countrycode = 'TH'\n> and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime between '2020-08-01 00:00:00' and\n> '2020-09-30 00:00:00'\n>\n> and\n>\n> ... WHERE j.countrycode = 'TH'\n> and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime IS NULL\n>\n> These indexes could speed up the resulting query:\n>\n> CREATE INDEX ON job (countrycode, facilitycode);\n> CREATE INDEX ON job (countrycode, jobstartdatetime);\n> CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdaytime IS\n> NULL;\n>\n\nI created the indexes you suggested and changed the query with the UNION\noperator. Please see explain plan below. Performance of the query(execution\ntime has improved mostly because I ran vacuum full). 
Cost of the query is\nstill high.This is Dev envrionment and has 2 vCPU and 8 GB RAM.\n\nexplain (analyze,buffers) with JobCount as ( (select jobstatuscode,count(1)\nstat_count from job j where 1=1 and j.countrycode = 'TH' and\nj.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\nstat_count from job j where 1=1 and j.countrycode = 'TH' and\nj.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime is null group by j.jobstatuscode))\nlmp_delivery_jobs-> select js.jobstatuscode,COALESCE(stat_count,0)\nstat_count from JobCount jc right outer join jobstatus js on\njc.jobstatuscode=js.jobstatuscode;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------\n Hash Right Join (cost=79010.89..79011.19 rows=10 width=12) (actual\ntime=444.241..444.256 rows=10 loops=1)\n Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n Buffers: shared hit=8560\n CTE jobcount\n -> HashAggregate (cost=79002.35..79002.48 rows=13 width=24) (actual\ntime=444.211..444.213 rows=6 loops=1)\n Group Key: j.jobstatuscode, (count(1))\n Buffers: shared hit=8558\n -> Append (cost=78959.64..79002.28 rows=13 width=24) (actual\ntime=444.081..444.202 rows=6 loops=1)\n Buffers: shared hit=8558\n -> Finalize GroupAggregate (cost=78959.64..78961.41\nrows=7 width=12) (actual time=444.079..444.101 rows=6 loops=1)\n Group Key: j.jobstatuscode\n Buffers: shared hit=8546\n -> Gather Merge (cost=78959.64..78961.27 rows=14\nwidth=12) (actual time=444.063..444.526 rows=18 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=17636\n -> Sort (cost=77959.61..77959.63 rows=7\nwidth=12) (actual time=435.748..435.750 rows=6 loops=3)\n Sort Key: j.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort\nMemory: 25kB\n Worker 1: Sort Method: quicksort\nMemory: 25kB\n Buffers: shared hit=17636\n -> Partial HashAggregate\n(cost=77959.44..77959.51 rows=7 width=12) (actual time=435.703..435.706\nrows=6 loops=3)\n Group Key: j.jobstatuscode\n Buffers: shared hit=17620\n -> Parallel Bitmap Heap Scan on\njob j (cost=11528.22..76957.69 rows=200351 width=4) (actual\ntime=47.682..281.928 rows=163200\nloops=3)\n Recheck Cond:\n(((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,T\nHPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Filter: ((jobstartdatetime\n>= '2020-08-01 00:00:00'::timestamp without time zone) AND\n(jobstartdatetime <= '2020-09-30 00\n:00:00'::timestamp without time zone))\n Heap Blocks: exact=6633\n Buffers: shared hit=17620\n -> Bitmap Index Scan on\njob_list_test1 (cost=0.00..11408.01 rows=482693 width=0) (actual\ntime=49.825..49.826 rows=48960\n0 loops=1)\n Index Cond:\n(((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKR\nI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Buffers: shared\nhit=1913\n -> GroupAggregate (cost=40.50..40.68 rows=6 width=12)\n(actual time=0.093..0.094 rows=0 loops=1)\n Group Key: j_1.jobstatuscode\n Buffers: 
shared hit=12\n -> Sort (cost=40.50..40.54 rows=16 width=4)\n(actual time=0.092..0.092 rows=0 loops=1)\n Sort Key: j_1.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=12\n -> Index Scan using job_list_test3 on job\nj_1 (cost=0.14..40.18 rows=16 width=4) (actual time=0.081..0.082 rows=0\nloops=1)\n Index Cond: (((countrycode)::text =\n'TH'::text) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,T\nHUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Buffers: shared hit=12\n -> CTE Scan on jobcount jc (cost=0.00..0.26 rows=13 width=24) (actual\ntime=444.215..444.221 rows=6 loops=1)\n Buffers: shared hit=8558\n -> Hash (cost=8.29..8.29 rows=10 width=4) (actual time=0.016..0.016\nrows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=2\n -> Index Only Scan using jobstatus_jobstatuscode_unq on jobstatus\njs (cost=0.14..8.29 rows=10 width=4) (actual time=0.006..0.010 rows=10\nloops=1)\n Heap Fetches: 0\n Buffers: shared hit=2\n Planning Time: 0.808 ms\n Execution Time: 444.819 ms\n(53 rows)\n\n\n\n\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n>\n\nHi Laurenz,I createdOn Fri, Oct 16, 2020 at 2:06 PM Laurenz Albe <[email protected]> wrote:On Thu, 2020-10-15 at 20:34 +0530, aditya desai wrote:\n> Below query always shows up on top in the CPU matrix. Also despite having indexes it does sequential scans\n> (probably because WHERE condition satisfies almost all of the data from table). This query\n> runs on the default landing page in application and needs to fetch records in less that 100 ms\n>  without consuming too much CPU.\n> \n>  Any opinions? Table is very huge and due to referential identity and business requirements we could not\n>  implement partitioning as well.\n> \n> There is index on (countrycode,facilitycode,jobstartdatetime)\n> \n> explain (analyze,buffers) with JobCount as ( select jobstatuscode,count(1) stat_count from job j\n>  where 1=1 and j.countrycode = 'TH'\n> and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n>  and ((j.jobstartdatetime  between '2020-08-01 00:00:00' and '2020-09-30 00:00:00' ) or j.jobstartdatetime IS NULL )  group by j.jobstatuscode)\n>  select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n> \n>                           QUERY PLAN\n> \n>  Hash Right Join  (cost=98845.93..98846.10 rows=10 width=12) (actual time=1314.809..1314.849 rows=10 loops=1)\n>                              ->  Parallel Seq Scan on job j  (cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434 rows=163200 loops=3)\n>                                    Filter: (((countrycode)::text = 'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp without time zone) AND (jobst\n> artdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR (jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n> ,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n>                                    Rows Removed by Filter: 449035\n>                                    Buffers: shared hit=60086 read=11834\n>                                    I/O Timings: read=59.194\n> \n\nYou should rewrite the subquery as a UNION to avoid the OR:\n\n  ... 
WHERE j.countrycode = 'TH'\n        and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n        and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30 00:00:00'\n\nand\n\n  ... WHERE j.countrycode = 'TH'\n        and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n        and j.jobstartdatetime IS NULL\n\nThese indexes could speed up the resulting query:\n\n  CREATE INDEX ON job (countrycode, facilitycode);\n  CREATE INDEX ON job (countrycode, jobstartdatetime);\n  CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdaytime IS NULL;I created the indexes you suggested and changed the query with the UNION operator. Please see explain plan below. Performance of the query(execution time has improved mostly because I ran vacuum full). Cost of the query is still high.This is Dev envrionment and has 2 vCPU and 8 GB RAM.explain (analyze,buffers) with JobCount as ( (select jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode = 'TH'   and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1') and j.jobstartdatetime  between '2020-08-01 00:00:00' and '2020-09-30 00:00:00'    group by j.jobstatuscode) UNION (select jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode = 'TH'   and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1') and j.jobstartdatetime is null  group by j.jobstatuscode))lmp_delivery_jobs->  select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;                                                                                                               QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Right Join  (cost=79010.89..79011.19 rows=10 width=12) (actual time=444.241..444.256 rows=10 loops=1)   Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)   Buffers: shared hit=8560   CTE jobcount     ->  HashAggregate  (cost=79002.35..79002.48 rows=13 width=24) (actual time=444.211..444.213 rows=6 loops=1)           Group Key: j.jobstatuscode, (count(1))           Buffers: shared hit=8558           ->  Append  (cost=78959.64..79002.28 rows=13 width=24) (actual time=444.081..444.202 rows=6 loops=1)                 Buffers: shared hit=8558                 ->  Finalize GroupAggregate  (cost=78959.64..78961.41 rows=7 width=12) (actual time=444.079..444.101 rows=6 loops=1)                       Group Key: j.jobstatuscode                       Buffers: shared hit=8546                       ->  Gather Merge  (cost=78959.64..78961.27 rows=14 width=12) (actual time=444.063..444.526 rows=18 loops=1)                             Workers Planned: 2                             Workers Launched: 2                             Buffers: shared hit=17636                             ->  Sort  (cost=77959.61..77959.63 rows=7 width=12) (actual time=435.748..435.750 rows=6 loops=3)                                   Sort Key: j.jobstatuscode                                   Sort Method: quicksort  Memory: 25kB                                   
Worker 0:  Sort Method: quicksort  Memory: 25kB                                   Worker 1:  Sort Method: quicksort  Memory: 25kB                                   Buffers: shared hit=17636                                   ->  Partial HashAggregate  (cost=77959.44..77959.51 rows=7 width=12) (actual time=435.703..435.706 rows=6 loops=3)                                         Group Key: j.jobstatuscode                                         Buffers: shared hit=17620                                         ->  Parallel Bitmap Heap Scan on job j  (cost=11528.22..76957.69 rows=200351 width=4) (actual time=47.682..281.928 rows=163200loops=3)                                               Recheck Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                               Filter: ((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp without time zone) AND (jobstartdatetime <= '2020-09-30 00:00:00'::timestamp without time zone))                                               Heap Blocks: exact=6633                                               Buffers: shared hit=17620                                               ->  Bitmap Index Scan on job_list_test1  (cost=0.00..11408.01 rows=482693 width=0) (actual time=49.825..49.826 rows=489600 loops=1)                                                     Index Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                                     Buffers: shared hit=1913                 ->  GroupAggregate  (cost=40.50..40.68 rows=6 width=12) (actual time=0.093..0.094 rows=0 loops=1)                       Group Key: j_1.jobstatuscode                       Buffers: shared hit=12                       ->  Sort  (cost=40.50..40.54 rows=16 width=4) (actual time=0.092..0.092 rows=0 loops=1)                             Sort Key: j_1.jobstatuscode                             Sort Method: quicksort  Memory: 25kB                             Buffers: shared hit=12                             ->  Index Scan using job_list_test3 on job j_1  (cost=0.14..40.18 rows=16 width=4) (actual time=0.081..0.082 rows=0 loops=1)                                   Index Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                   Buffers: shared hit=12   ->  CTE Scan on jobcount jc  (cost=0.00..0.26 rows=13 width=24) (actual time=444.215..444.221 rows=6 loops=1)         Buffers: shared hit=8558   ->  Hash  (cost=8.29..8.29 rows=10 width=4) (actual time=0.016..0.016 rows=10 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         Buffers: shared hit=2         ->  Index Only Scan using jobstatus_jobstatuscode_unq on jobstatus js  (cost=0.14..8.29 rows=10 width=4) (actual time=0.006..0.010 rows=10 loops=1)               Heap Fetches: 0               Buffers: shared hit=2 Planning Time: 0.808 ms Execution Time: 444.819 ms(53 rows) \n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Tue, 20 Oct 2020 18:00:44 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." 
}, { "msg_contents": "On Mon, Oct 19, 2020 at 9:50 PM Michael Lewis <[email protected]> wrote:\n\n> Reply to the group, not just me please. Btw, when you do reply to the\n> group, it is best practice on these lists to reply in-line and not just\n> reply on top with all prior messages quoted.\n>\n\nHi Michael,\nPlease see below inline response. I tried all this on Dev env 2 vCPU and 8\nGB RAM. Still waiting for the PST environment :( with better configuration.\n\n>\n> On Sun, Oct 18, 2020 at 3:23 AM aditya desai <[email protected]> wrote:\n>\n>> I tried vacuum full and execution time came down to half.\n>>\n> Great to hear.\n>\n>\n>> However, it still consumes CPU. Setting parallel workers per gather to 0\n>> did not help much.\n>>\n> You didn't answer all of my questions, particularly about disabling\n> sequential scan. If you still have the default random_page_cost of 4, it\n> might be that 1.5 allows better estimates for cost on index (random) vs\n> sequential scan of a table.\n>\n\nPlease see the next inline answer.\n\n>\n> Laurenz is a brilliant guy. I would implement the indexes he suggests if\n> you don't have them already and report back. If the indexes don't get used,\n> try set enable_seqscan = false; before the query and if it is way faster,\n> then reduce random_page_cost to maybe 1-2 depending how your overall cache\n> hit ratio is across the system.\n>\n\nQuery plan with enable_seqscan=off , Random page cost=1. With this\nexecution time and cost of query is almost less than half compared to\noriginal settings. Also used the suggestions given by Laurenze. 1. Made use\nof UINON operator and created indexes.\n\nlmp_delivery_jobs=> explain (analyze,buffers) with JobCount as ( (select\njobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode =\n'TH' and j.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\nstat_count from job j where 1=1 and j.countrycode = 'TH' and\nj.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime is null group by j.jobstatuscode))\n select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc\nright outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------\n Hash Right Join (cost=68652.52..68652.76 rows=10 width=12) (actual\ntime=676.477..676.495 rows=10 loops=1)\n Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n Buffers: shared hit=11897\n CTE jobcount\n -> HashAggregate (cost=68650.01..68650.11 rows=10 width=24) (actual\ntime=676.451..676.454 rows=8 loops=1)\n Group Key: j.jobstatuscode, (count(1))\n Buffers: shared hit=11895\n -> Append (cost=68645.89..68649.96 rows=10 width=24) (actual\ntime=676.346..676.441 rows=8 loops=1)\n Buffers: shared hit=11895\n -> Finalize GroupAggregate (cost=68645.89..68648.17\nrows=9 width=12) (actual time=676.345..676.379 rows=8 loops=1)\n Group Key: j.jobstatuscode\n Buffers: shared hit=11889\n -> Gather Merge (cost=68645.89..68647.99 rows=18\nwidth=12) (actual time=676.330..676.403 rows=24 loops=1)\n Workers 
Planned: 2\n Workers Launched: 2\n Buffers: shared hit=29067 read=1\n I/O Timings: read=0.038\n -> Sort (cost=67645.87..67645.89 rows=9\nwidth=12) (actual time=669.544..669.548 rows=8 loops=3)\n Sort Key: j.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort\nMemory: 25kB\n Worker 1: Sort Method: quicksort\nMemory: 25kB\n Buffers: shared hit=29067 read=1\n I/O Timings: read=0.038\n -> Partial HashAggregate\n(cost=67645.63..67645.72 rows=9 width=12) (actual time=669.506..669.511\nrows=8 loops=3)\n Group Key: j.jobstatuscode\n Buffers: shared hit=29051 read=1\n I/O Timings: read=0.038\n -> Parallel Index Scan using\njob_list_test1 on job j (cost=0.43..66135.88 rows=301950 width=4) (actual\ntime=0.040..442.373 ro\nws=244800 loops=3)\n Index Cond:\n(((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THP\nKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Filter: ((jobstartdatetime\n>= '2020-08-01 00:00:00'::timestamp without time zone) AND\n(jobstartdatetime <= '2020-09-30 00\n:00:00'::timestamp without time zone))\n Buffers: shared hit=29051\nread=1\n I/O Timings: read=0.038\n -> GroupAggregate (cost=1.62..1.64 rows=1 width=12)\n(actual time=0.043..0.043 rows=0 loops=1)\n Group Key: j_1.jobstatuscode\n Buffers: shared hit=6\n -> Sort (cost=1.62..1.62 rows=1 width=4) (actual\ntime=0.041..0.041 rows=0 loops=1)\n Sort Key: j_1.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=6\n -> Index Scan using job_list_test3 on job\nj_1 (cost=0.14..1.61 rows=1 width=4) (actual time=0.034..0.034 rows=0\nloops=1)\n Index Cond: ((countrycode)::text =\n'TH'::text)\n Filter: ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])\n)\n Rows Removed by Filter: 26\n Buffers: shared hit=6\n -> CTE Scan on jobcount jc (cost=0.00..0.20 rows=10 width=24) (actual\ntime=676.454..676.461 rows=8 loops=1)\n Buffers: shared hit=11895\n -> Hash (cost=2.29..2.29 rows=10 width=4) (actual time=0.015..0.015\nrows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=2\n -> Index Only Scan using jobstatus_jobstatuscode_unq on jobstatus\njs (cost=0.14..2.29 rows=10 width=4) (actual time=0.005..0.009 rows=10\nloops=1)\n Heap Fetches: 0\n Buffers: shared hit=2\n Planning Time: 0.812 ms\n Execution Time: 676.642 ms\n(55 rows)\n\n\nQuery with Random page cost=4 and enable_seq=on\n\nlmp_delivery_jobs=> set random_page_cost=4;\nSET\nlmp_delivery_jobs=> explain (analyze,buffers) with JobCount as ( (select\njobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode =\n'TH' and j.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\nstat_count from job j where 1=1 and j.countrycode = 'TH' and\nj.facilitycode in\n('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\nand j.jobstartdatetime is null group by j.jobstatuscode))\n select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc\nright outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n\n QUERY 
PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------\n Hash Right Join (cost=128145.44..128145.67 rows=10 width=12) (actual\ntime=1960.823..1960.842 rows=10 loops=1)\n Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n Buffers: shared hit=20586 read=8706\n I/O Timings: read=49.250\n CTE jobcount\n -> HashAggregate (cost=128144.11..128144.21 rows=10 width=24)\n(actual time=1960.786..1960.788 rows=8 loops=1)\n Group Key: j.jobstatuscode, (count(1))\n Buffers: shared hit=20585 read=8706\n I/O Timings: read=49.250\n -> Append (cost=128135.68..128144.06 rows=10 width=24) (actual\ntime=1960.634..1960.774 rows=8 loops=1)\n Buffers: shared hit=20585 read=8706\n I/O Timings: read=49.250\n -> Finalize GroupAggregate (cost=128135.68..128137.96\nrows=9 width=12) (actual time=1960.632..1960.689 rows=8 loops=1)\n Group Key: j.jobstatuscode\n Buffers: shared hit=20579 read=8706\n I/O Timings: read=49.250\n -> Gather Merge (cost=128135.68..128137.78 rows=18\nwidth=12) (actual time=1960.616..1960.690 rows=24 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=58214 read=30130\n I/O Timings: read=152.485\n -> Sort (cost=127135.66..127135.68 rows=9\nwidth=12) (actual time=1941.131..1941.134 rows=8 loops=3)\n Sort Key: j.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Worker 0: Sort Method: quicksort\nMemory: 25kB\n Worker 1: Sort Method: quicksort\nMemory: 25kB\n Buffers: shared hit=58214 read=30130\n I/O Timings: read=152.485\n -> Partial HashAggregate\n(cost=127135.43..127135.52 rows=9 width=12) (actual time=1941.088..1941.092\nrows=8 loops=3)\n Group Key: j.jobstatuscode\n Buffers: shared hit=58198\nread=30130\n I/O Timings: read=152.485\n -> Parallel Seq Scan on job j\n(cost=0.00..125625.68 rows=301950 width=4) (actual time=0.015..1698.223\nrows=244800 loops=3)\n Filter: ((jobstartdatetime\n>= '2020-08-01 00:00:00'::timestamp without time zone) AND\n(jobstartdatetime <= '2020-09-30 00\n:00:00'::timestamp without time zone) AND ((countrycode)::text =\n'TH'::text) AND ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,\nTHLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n Rows Removed by Filter:\n673444\n Buffers: shared hit=58198\nread=30130\n I/O Timings: read=152.485\n -> GroupAggregate (cost=5.93..5.95 rows=1 width=12)\n(actual time=0.077..0.077 rows=0 loops=1)\n Group Key: j_1.jobstatuscode\n Buffers: shared hit=6\n -> Sort (cost=5.93..5.94 rows=1 width=4) (actual\ntime=0.075..0.075 rows=0 loops=1)\n Sort Key: j_1.jobstatuscode\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=6\n -> Index Scan using job_list_test3 on job\nj_1 (cost=0.14..5.92 rows=1 width=4) (actual time=0.065..0.065 rows=0\nloops=1)\n Index Cond: ((countrycode)::text =\n'TH'::text)\n Filter: ((facilitycode)::text = ANY\n('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])\n)\n Rows Removed by Filter: 26\n Buffers: shared hit=6\n -> CTE Scan on jobcount jc (cost=0.00..0.20 rows=10 width=24) (actual\ntime=1960.789..1960.797 rows=8 loops=1)\n Buffers: shared hit=20585 read=8706\n I/O Timings: read=49.250\n -> Hash (cost=1.10..1.10 rows=10 width=4) (actual time=0.023..0.023\nrows=10 
loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n Buffers: shared hit=1\n -> Seq Scan on jobstatus js (cost=0.00..1.10 rows=10 width=4)\n(actual time=0.007..0.013 rows=10 loops=1)\n Buffers: shared hit=1\n Planning Time: 3.019 ms\n Execution Time: 1961.024 ms\n\n\n>\n>\n>> Auto vacuuming is catching up just fine. No issues in that area.\n>>\n> If the time came down by half after 'vacuum full', I would question that\n> statement.\n>\n\nI checked the last autovacuum on the underlying tables before the load tests and it\nwas very recent. Also, I explicitly ran VACUUM FREEZE ANALYZE on the underlying\ntables before the load test just to make sure. It did not help much.\n\n>\n>\n>> Temp table size is less that original tables without indexes.\n>>\n> Significantly less would indicate the regular table still being bloated I\n> think. Maybe someone else will suggest otherwise.\n>\n\nPlease see below.\n\n SELECT\nrelname AS TableName\n,n_live_tup AS LiveTuples\n,n_dead_tup AS DeadTuples\nFROM pg_stat_user_tables where relname='job';\n tablename | livetuples | deadtuples\n-----------+------------+------------\n job | 2754980 | 168\n\n\n>\n>\n>> Does this mean we need to upgrade the hardware? Also by caching data , do\n>> you mean caching at application side(microservices side) ? Or on postgres\n>> side? I tried pg_prewarm, it did not help much.\n>>\n> I can't say about hardware. Until you have exhausted options like configs\n> and indexing, spending more money forever onwards seems premature. I meant\n> pre-aggregated data, wherever it makes sense to do that. I wouldn't expect\n> pg_prewarm to do a ton since you already show high cache hits.\n>\n\nUnderstood, thanks.\n\n>\n>\n>> It is actually the CPU consumption which is the issue. Query is fast\n>> otherwise.\n>>\n> Sure, but that is a symptom of reading and processing a lot of data.\n>\n\nAs per the application team, it is a business requirement to show the last 60 days'\nworth of data. This particular query finds the counts per jobstatus (GROUP BY),\nwhich may be taking a lot of compute (CPU spikes). I have tried the indexing\nsuggested by Laurenz as well. Cost and execution time are still high.\n\n
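As an illustration of the pre-aggregated data idea mentioned above (only a rough sketch with a\nhypothetical object name, not something that exists in this schema), the landing page could read\nfrom a periodically refreshed summary instead of aggregating the job table on every request:\n\nCREATE MATERIALIZED VIEW job_status_counts AS\nSELECT countrycode, facilitycode, jobstatuscode, count(*) AS stat_count\nFROM job\nWHERE jobstartdatetime >= now() - interval '60 days'\n   OR jobstartdatetime IS NULL\nGROUP BY countrycode, facilitycode, jobstatuscode;\n\n-- a unique index allows REFRESH MATERIALIZED VIEW CONCURRENTLY, so readers are not blocked\nCREATE UNIQUE INDEX ON job_status_counts (countrycode, facilitycode, jobstatuscode);\nREFRESH MATERIALIZED VIEW CONCURRENTLY job_status_counts;\n\nThe trade-off is that the counts are only as fresh as the last refresh, which may or may not be\nacceptable for the landing page.\n\n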
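On the overall cache hit ratio mentioned earlier, a rough cluster-wide check (a generic sanity\ncheck, not specific to this schema) before lowering random_page_cost could be:\n\nSELECT sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_pct\nFROM pg_stat_database;\n\nIf this stays in the high 90s, a random_page_cost in the 1-2 range is usually a reasonable\nassumption; if it is much lower, cheap-looking index scans may still end up doing real disk reads.\n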
", "msg_date": "Tue, 20 Oct 2020 18:26:08 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Hi,\nKindly requesting an update on this. Thanks.\n\n-Aditya.\n\nOn Tue, Oct 20, 2020 at 6:26 PM aditya desai <[email protected]> wrote:\n\n>\n>\n> On Mon, Oct 19, 2020 at 9:50 PM Michael Lewis <[email protected]> wrote:\n>\n>> Reply to the group, not just me please. Btw, when you do reply to the\n>> group, it is best practice on these lists to reply in-line and not just\n>> reply on top with all prior messages quoted.\n>>\n>\n> Hi Michael,\n> Please see below inline response. I tried all this on Dev env 2 vCPU and 8\n> GB RAM. 
Still waiting for the PST environment :( with better configuration.\n>\n>>\n>> On Sun, Oct 18, 2020 at 3:23 AM aditya desai <[email protected]> wrote:\n>>\n>>> I tried vacuum full and execution time came down to half.\n>>>\n>> Great to hear.\n>>\n>>\n>>> However, it still consumes CPU. Setting parallel workers per gather to 0\n>>> did not help much.\n>>>\n>> You didn't answer all of my questions, particularly about disabling\n>> sequential scan. If you still have the default random_page_cost of 4, it\n>> might be that 1.5 allows better estimates for cost on index (random) vs\n>> sequential scan of a table.\n>>\n>\n> Please see the next inline answer.\n>\n>>\n>> Laurenz is a brilliant guy. I would implement the indexes he suggests if\n>> you don't have them already and report back. If the indexes don't get used,\n>> try set enable_seqscan = false; before the query and if it is way\n>> faster, then reduce random_page_cost to maybe 1-2 depending how your\n>> overall cache hit ratio is across the system.\n>>\n>\n> Query plan with enable_seqscan=off , Random page cost=1. With this\n> execution time and cost of query is almost less than half compared to\n> original settings. Also used the suggestions given by Laurenze. 1. Made use\n> of UINON operator and created indexes.\n>\n> lmp_delivery_jobs=> explain (analyze,buffers) with JobCount as ( (select\n> jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode =\n> 'TH' and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n> 00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\n> stat_count from job j where 1=1 and j.countrycode = 'TH' and\n> j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime is null group by j.jobstatuscode))\n> select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount\n> jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------------------------------------------------\n> Hash Right Join (cost=68652.52..68652.76 rows=10 width=12) (actual\n> time=676.477..676.495 rows=10 loops=1)\n> Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n> Buffers: shared hit=11897\n> CTE jobcount\n> -> HashAggregate (cost=68650.01..68650.11 rows=10 width=24) (actual\n> time=676.451..676.454 rows=8 loops=1)\n> Group Key: j.jobstatuscode, (count(1))\n> Buffers: shared hit=11895\n> -> Append (cost=68645.89..68649.96 rows=10 width=24) (actual\n> time=676.346..676.441 rows=8 loops=1)\n> Buffers: shared hit=11895\n> -> Finalize GroupAggregate (cost=68645.89..68648.17\n> rows=9 width=12) (actual time=676.345..676.379 rows=8 loops=1)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=11889\n> -> Gather Merge (cost=68645.89..68647.99 rows=18\n> width=12) (actual time=676.330..676.403 rows=24 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=29067 read=1\n> I/O Timings: read=0.038\n> -> Sort (cost=67645.87..67645.89 rows=9\n> width=12) (actual time=669.544..669.548 rows=8 loops=3)\n> Sort Key: j.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Worker 0: Sort Method: quicksort\n> 
Memory: 25kB\n> Worker 1: Sort Method: quicksort\n> Memory: 25kB\n> Buffers: shared hit=29067 read=1\n> I/O Timings: read=0.038\n> -> Partial HashAggregate\n> (cost=67645.63..67645.72 rows=9 width=12) (actual time=669.506..669.511\n> rows=8 loops=3)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=29051 read=1\n> I/O Timings: read=0.038\n> -> Parallel Index Scan using\n> job_list_test1 on job j (cost=0.43..66135.88 rows=301950 width=4) (actual\n> time=0.040..442.373 ro\n> ws=244800 loops=3)\n> Index Cond:\n> (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THP\n> KN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Filter: ((jobstartdatetime\n> >= '2020-08-01 00:00:00'::timestamp without time zone) AND\n> (jobstartdatetime <= '2020-09-30 00\n> :00:00'::timestamp without time zone))\n> Buffers: shared hit=29051\n> read=1\n> I/O Timings: read=0.038\n> -> GroupAggregate (cost=1.62..1.64 rows=1 width=12)\n> (actual time=0.043..0.043 rows=0 loops=1)\n> Group Key: j_1.jobstatuscode\n> Buffers: shared hit=6\n> -> Sort (cost=1.62..1.62 rows=1 width=4) (actual\n> time=0.041..0.041 rows=0 loops=1)\n> Sort Key: j_1.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Buffers: shared hit=6\n> -> Index Scan using job_list_test3 on job\n> j_1 (cost=0.14..1.61 rows=1 width=4) (actual time=0.034..0.034 rows=0\n> loops=1)\n> Index Cond: ((countrycode)::text =\n> 'TH'::text)\n> Filter: ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])\n> )\n> Rows Removed by Filter: 26\n> Buffers: shared hit=6\n> -> CTE Scan on jobcount jc (cost=0.00..0.20 rows=10 width=24) (actual\n> time=676.454..676.461 rows=8 loops=1)\n> Buffers: shared hit=11895\n> -> Hash (cost=2.29..2.29 rows=10 width=4) (actual time=0.015..0.015\n> rows=10 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> Buffers: shared hit=2\n> -> Index Only Scan using jobstatus_jobstatuscode_unq on\n> jobstatus js (cost=0.14..2.29 rows=10 width=4) (actual time=0.005..0.009\n> rows=10 loops=1)\n> Heap Fetches: 0\n> Buffers: shared hit=2\n> Planning Time: 0.812 ms\n> Execution Time: 676.642 ms\n> (55 rows)\n>\n>\n> Query with Random page cost=4 and enable_seq=on\n>\n> lmp_delivery_jobs=> set random_page_cost=4;\n> SET\n> lmp_delivery_jobs=> explain (analyze,buffers) with JobCount as ( (select\n> jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode =\n> 'TH' and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n> 00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\n> stat_count from job j where 1=1 and j.countrycode = 'TH' and\n> j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime is null group by j.jobstatuscode))\n> select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount\n> jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------------------------------\n> Hash Right Join (cost=128145.44..128145.67 rows=10 width=12) (actual\n> time=1960.823..1960.842 rows=10 loops=1)\n> Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n> Buffers: shared hit=20586 read=8706\n> I/O Timings: read=49.250\n> CTE jobcount\n> -> HashAggregate (cost=128144.11..128144.21 rows=10 width=24)\n> (actual time=1960.786..1960.788 rows=8 loops=1)\n> Group Key: j.jobstatuscode, (count(1))\n> Buffers: shared hit=20585 read=8706\n> I/O Timings: read=49.250\n> -> Append (cost=128135.68..128144.06 rows=10 width=24)\n> (actual time=1960.634..1960.774 rows=8 loops=1)\n> Buffers: shared hit=20585 read=8706\n> I/O Timings: read=49.250\n> -> Finalize GroupAggregate (cost=128135.68..128137.96\n> rows=9 width=12) (actual time=1960.632..1960.689 rows=8 loops=1)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=20579 read=8706\n> I/O Timings: read=49.250\n> -> Gather Merge (cost=128135.68..128137.78\n> rows=18 width=12) (actual time=1960.616..1960.690 rows=24 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=58214 read=30130\n> I/O Timings: read=152.485\n> -> Sort (cost=127135.66..127135.68 rows=9\n> width=12) (actual time=1941.131..1941.134 rows=8 loops=3)\n> Sort Key: j.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Worker 0: Sort Method: quicksort\n> Memory: 25kB\n> Worker 1: Sort Method: quicksort\n> Memory: 25kB\n> Buffers: shared hit=58214 read=30130\n> I/O Timings: read=152.485\n> -> Partial HashAggregate\n> (cost=127135.43..127135.52 rows=9 width=12) (actual time=1941.088..1941.092\n> rows=8 loops=3)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=58198\n> read=30130\n> I/O Timings: read=152.485\n> -> Parallel Seq Scan on job j\n> (cost=0.00..125625.68 rows=301950 width=4) (actual time=0.015..1698.223\n> rows=244800 loops=3)\n> Filter: ((jobstartdatetime\n> >= '2020-08-01 00:00:00'::timestamp without time zone) AND\n> (jobstartdatetime <= '2020-09-30 00\n> :00:00'::timestamp without time zone) AND ((countrycode)::text =\n> 'TH'::text) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,\n> THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Rows Removed by Filter:\n> 673444\n> Buffers: shared hit=58198\n> read=30130\n> I/O Timings: read=152.485\n> -> GroupAggregate (cost=5.93..5.95 rows=1 width=12)\n> (actual time=0.077..0.077 rows=0 loops=1)\n> Group Key: j_1.jobstatuscode\n> Buffers: shared hit=6\n> -> Sort (cost=5.93..5.94 rows=1 width=4) (actual\n> time=0.075..0.075 rows=0 loops=1)\n> Sort Key: j_1.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Buffers: shared hit=6\n> -> Index Scan using job_list_test3 on job\n> j_1 (cost=0.14..5.92 rows=1 width=4) (actual time=0.065..0.065 rows=0\n> loops=1)\n> Index Cond: ((countrycode)::text =\n> 'TH'::text)\n> Filter: ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])\n> )\n> Rows Removed by Filter: 26\n> Buffers: shared hit=6\n> -> CTE Scan on jobcount jc (cost=0.00..0.20 rows=10 width=24) (actual\n> time=1960.789..1960.797 rows=8 loops=1)\n> Buffers: shared hit=20585 read=8706\n> I/O Timings: read=49.250\n> -> Hash (cost=1.10..1.10 rows=10 width=4) (actual time=0.023..0.023\n> rows=10 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> 
Buffers: shared hit=1\n> -> Seq Scan on jobstatus js (cost=0.00..1.10 rows=10 width=4)\n> (actual time=0.007..0.013 rows=10 loops=1)\n> Buffers: shared hit=1\n> Planning Time: 3.019 ms\n> Execution Time: 1961.024 ms\n>\n>\n>>\n>>\n>>> Auto vacuuming is catching up just fine. No issues in that area.\n>>>\n>> If the time came down by half after 'vacuum full', I would question that\n>> statement.\n>>\n>\n> I checked the last autovacuum on underlying tables before load tests and\n> it was very recent. Also I explicitly ran VACUUM ANALYZE FREEZ on\n> underlying tables before load test just to make sure. It did not help much.\n>\n>>\n>>\n>>> Temp table size is less that original tables without indexes.\n>>>\n>> Significantly less would indicate the regular table still being bloated I\n>> think. Maybe someone else will suggest otherwise.\n>>\n>\n> Please see below.\n>\n> SELECT\n> relname AS TableName\n> ,n_live_tup AS LiveTuples\n> ,n_dead_tup AS DeadTuples\n> FROM pg_stat_user_tables where relname='job';\n> tablename | livetuples | deadtuples\n> -----------+------------+------------\n> job | 2754980 | 168\n>\n>\n>>\n>>\n>>> Does this mean we need to upgrade the hardware? Also by caching data ,\n>>> do you mean caching at application side(microservices side) ? Or on\n>>> postgres side? I tried pg_prewarm, it did not help much.\n>>>\n>> I can't say about hardware. Until you have exhausted options like configs\n>> and indexing, spending more money forever onwards seems premature. I meant\n>> pre-aggregated data, wherever it makes sense to do that. I wouldn't expect\n>> pg_prewarm to do a ton since you already show high cache hits.\n>>\n>\n> Understood thanks.\n>\n>>\n>>\n>>> It is actually the CPU consumption which is the issue. Query is fast\n>>> otherwise.\n>>>\n>> Sure, but that is a symptom of reading and processing a lot of data.\n>>\n>\n> As per application team, it is business requirement to show last 60 days\n> worth data. This particular query finds the counts of jobstatus(GROUP BY)\n> which may be taking a lot of compute(CPU spikes) I have tried indexing\n> suggested by Laurenze as well. Cost and execution time are still high\n>\n\n", "msg_date": "Thu, 22 Oct 2020 10:51:40 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Hi,\nKindly requesting for help on this. 
Thanks.\n\n-Aditya.\n\n\n\nOn Tue, Oct 20, 2020 at 6:00 PM aditya desai <[email protected]> wrote:\n\n> Hi Laurenz,\n> I created\n>\n> On Fri, Oct 16, 2020 at 2:06 PM Laurenz Albe <[email protected]>\n> wrote:\n>\n>> On Thu, 2020-10-15 at 20:34 +0530, aditya desai wrote:\n>> > Below query always shows up on top in the CPU matrix. Also despite\n>> having indexes it does sequential scans\n>> > (probably because WHERE condition satisfies almost all of the data from\n>> table). This query\n>> > runs on the default landing page in application and needs to fetch\n>> records in less that 100 ms\n>> > without consuming too much CPU.\n>> >\n>> > Any opinions? Table is very huge and due to referential identity and\n>> business requirements we could not\n>> > implement partitioning as well.\n>> >\n>> > There is index on (countrycode,facilitycode,jobstartdatetime)\n>> >\n>> > explain (analyze,buffers) with JobCount as ( select\n>> jobstatuscode,count(1) stat_count from job j\n>> > where 1=1 and j.countrycode = 'TH'\n>> > and j.facilitycode in\n>> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n>> > and ((j.jobstartdatetime between '2020-08-01 00:00:00' and\n>> '2020-09-30 00:00:00' ) or j.jobstartdatetime IS NULL ) group by\n>> j.jobstatuscode)\n>> > select js.jobstatuscode,COALESCE(stat_count,0) stat_count from\n>> JobCount jc right outer join jobstatus js on\n>> jc.jobstatuscode=js.jobstatuscode;\n>> >\n>> > QUERY PLAN\n>> >\n>> > Hash Right Join (cost=98845.93..98846.10 rows=10 width=12) (actual\n>> time=1314.809..1314.849 rows=10 loops=1)\n>> > -> Parallel Seq Scan on job j\n>> (cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434\n>> rows=163200 loops=3)\n>> > Filter: (((countrycode)::text =\n>> 'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp\n>> without time zone) AND (jobst\n>> > artdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR\n>> (jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY\n>> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n>> > ,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n>> > Rows Removed by Filter: 449035\n>> > Buffers: shared hit=60086 read=11834\n>> > I/O Timings: read=59.194\n>> >\n>>\n>> You should rewrite the subquery as a UNION to avoid the OR:\n>>\n>> ... WHERE j.countrycode = 'TH'\n>> and j.facilitycode in\n>> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n>> and j.jobstartdatetime between '2020-08-01 00:00:00' and\n>> '2020-09-30 00:00:00'\n>>\n>> and\n>>\n>> ... WHERE j.countrycode = 'TH'\n>> and j.facilitycode in\n>> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n>> and j.jobstartdatetime IS NULL\n>>\n>> These indexes could speed up the resulting query:\n>>\n>> CREATE INDEX ON job (countrycode, facilitycode);\n>> CREATE INDEX ON job (countrycode, jobstartdatetime);\n>> CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdaytime\n>> IS NULL;\n>>\n>\n> I created the indexes you suggested and changed the query with the UNION\n> operator. Please see explain plan below. Performance of the query(execution\n> time has improved mostly because I ran vacuum full). 
Cost of the query is\n> still high.This is Dev envrionment and has 2 vCPU and 8 GB RAM.\n>\n> explain (analyze,buffers) with JobCount as ( (select\n> jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode =\n> 'TH' and j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30\n> 00:00:00' group by j.jobstatuscode) UNION (select jobstatuscode,count(1)\n> stat_count from job j where 1=1 and j.countrycode = 'TH' and\n> j.facilitycode in\n> ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n> and j.jobstartdatetime is null group by j.jobstatuscode))\n> lmp_delivery_jobs-> select js.jobstatuscode,COALESCE(stat_count,0)\n> stat_count from JobCount jc right outer join jobstatus js on\n> jc.jobstatuscode=js.jobstatuscode;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -----------------------------------------------------------------\n> Hash Right Join (cost=79010.89..79011.19 rows=10 width=12) (actual\n> time=444.241..444.256 rows=10 loops=1)\n> Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)\n> Buffers: shared hit=8560\n> CTE jobcount\n> -> HashAggregate (cost=79002.35..79002.48 rows=13 width=24) (actual\n> time=444.211..444.213 rows=6 loops=1)\n> Group Key: j.jobstatuscode, (count(1))\n> Buffers: shared hit=8558\n> -> Append (cost=78959.64..79002.28 rows=13 width=24) (actual\n> time=444.081..444.202 rows=6 loops=1)\n> Buffers: shared hit=8558\n> -> Finalize GroupAggregate (cost=78959.64..78961.41\n> rows=7 width=12) (actual time=444.079..444.101 rows=6 loops=1)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=8546\n> -> Gather Merge (cost=78959.64..78961.27 rows=14\n> width=12) (actual time=444.063..444.526 rows=18 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=17636\n> -> Sort (cost=77959.61..77959.63 rows=7\n> width=12) (actual time=435.748..435.750 rows=6 loops=3)\n> Sort Key: j.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Worker 0: Sort Method: quicksort\n> Memory: 25kB\n> Worker 1: Sort Method: quicksort\n> Memory: 25kB\n> Buffers: shared hit=17636\n> -> Partial HashAggregate\n> (cost=77959.44..77959.51 rows=7 width=12) (actual time=435.703..435.706\n> rows=6 loops=3)\n> Group Key: j.jobstatuscode\n> Buffers: shared hit=17620\n> -> Parallel Bitmap Heap Scan on\n> job j (cost=11528.22..76957.69 rows=200351 width=4) (actual\n> time=47.682..281.928 rows=163200\n> loops=3)\n> Recheck Cond:\n> (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,T\n> HPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Filter: ((jobstartdatetime\n> >= '2020-08-01 00:00:00'::timestamp without time zone) AND\n> (jobstartdatetime <= '2020-09-30 00\n> :00:00'::timestamp without time zone))\n> Heap Blocks: exact=6633\n> Buffers: shared hit=17620\n> -> Bitmap Index Scan on\n> job_list_test1 (cost=0.00..11408.01 rows=482693 width=0) (actual\n> time=49.825..49.826 rows=48960\n> 0 loops=1)\n> Index Cond:\n> (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKR\n> I1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Buffers: shared\n> hit=1913\n> -> 
GroupAggregate (cost=40.50..40.68 rows=6 width=12)\n> (actual time=0.093..0.094 rows=0 loops=1)\n> Group Key: j_1.jobstatuscode\n> Buffers: shared hit=12\n> -> Sort (cost=40.50..40.54 rows=16 width=4)\n> (actual time=0.092..0.092 rows=0 loops=1)\n> Sort Key: j_1.jobstatuscode\n> Sort Method: quicksort Memory: 25kB\n> Buffers: shared hit=12\n> -> Index Scan using job_list_test3 on job\n> j_1 (cost=0.14..40.18 rows=16 width=4) (actual time=0.081..0.082 rows=0\n> loops=1)\n> Index Cond: (((countrycode)::text =\n> 'TH'::text) AND ((facilitycode)::text = ANY\n> ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,T\n> HUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n> Buffers: shared hit=12\n> -> CTE Scan on jobcount jc (cost=0.00..0.26 rows=13 width=24) (actual\n> time=444.215..444.221 rows=6 loops=1)\n> Buffers: shared hit=8558\n> -> Hash (cost=8.29..8.29 rows=10 width=4) (actual time=0.016..0.016\n> rows=10 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> Buffers: shared hit=2\n> -> Index Only Scan using jobstatus_jobstatuscode_unq on\n> jobstatus js (cost=0.14..8.29 rows=10 width=4) (actual time=0.006..0.010\n> rows=10 loops=1)\n> Heap Fetches: 0\n> Buffers: shared hit=2\n> Planning Time: 0.808 ms\n> Execution Time: 444.819 ms\n> (53 rows)\n>\n>\n>\n>\n>>\n>> Yours,\n>> Laurenz Albe\n>> --\n>> Cybertec | https://www.cybertec-postgresql.com\n>>\n>>\n\nHi,Kindly requesting for help on this. Thanks.-Aditya.On Tue, Oct 20, 2020 at 6:00 PM aditya desai <[email protected]> wrote:Hi Laurenz,I createdOn Fri, Oct 16, 2020 at 2:06 PM Laurenz Albe <[email protected]> wrote:On Thu, 2020-10-15 at 20:34 +0530, aditya desai wrote:\n> Below query always shows up on top in the CPU matrix. Also despite having indexes it does sequential scans\n> (probably because WHERE condition satisfies almost all of the data from table). This query\n> runs on the default landing page in application and needs to fetch records in less that 100 ms\n>  without consuming too much CPU.\n> \n>  Any opinions? 
Table is very huge and due to referential identity and business requirements we could not\n>  implement partitioning as well.\n> \n> There is index on (countrycode,facilitycode,jobstartdatetime)\n> \n> explain (analyze,buffers) with JobCount as ( select jobstatuscode,count(1) stat_count from job j\n>  where 1=1 and j.countrycode = 'TH'\n> and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n>  and ((j.jobstartdatetime  between '2020-08-01 00:00:00' and '2020-09-30 00:00:00' ) or j.jobstartdatetime IS NULL )  group by j.jobstatuscode)\n>  select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;\n> \n>                           QUERY PLAN\n> \n>  Hash Right Join  (cost=98845.93..98846.10 rows=10 width=12) (actual time=1314.809..1314.849 rows=10 loops=1)\n>                              ->  Parallel Seq Scan on job j  (cost=0.00..96837.93 rows=200963 width=4) (actual time=13.010..1144.434 rows=163200 loops=3)\n>                                    Filter: (((countrycode)::text = 'TH'::text) AND (((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp without time zone) AND (jobst\n> artdatetime <= '2020-09-30 00:00:00'::timestamp without time zone)) OR (jobstartdatetime IS NULL)) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1\n> ,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))\n>                                    Rows Removed by Filter: 449035\n>                                    Buffers: shared hit=60086 read=11834\n>                                    I/O Timings: read=59.194\n> \n\nYou should rewrite the subquery as a UNION to avoid the OR:\n\n  ... WHERE j.countrycode = 'TH'\n        and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n        and j.jobstartdatetime between '2020-08-01 00:00:00' and '2020-09-30 00:00:00'\n\nand\n\n  ... WHERE j.countrycode = 'TH'\n        and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1')\n        and j.jobstartdatetime IS NULL\n\nThese indexes could speed up the resulting query:\n\n  CREATE INDEX ON job (countrycode, facilitycode);\n  CREATE INDEX ON job (countrycode, jobstartdatetime);\n  CREATE INDEX ON job (countrycode, facilitycode) WHERE jobstartdaytime IS NULL;I created the indexes you suggested and changed the query with the UNION operator. Please see explain plan below. Performance of the query(execution time has improved mostly because I ran vacuum full). 
Cost of the query is still high.This is Dev envrionment and has 2 vCPU and 8 GB RAM.explain (analyze,buffers) with JobCount as ( (select jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode = 'TH'   and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1') and j.jobstartdatetime  between '2020-08-01 00:00:00' and '2020-09-30 00:00:00'    group by j.jobstatuscode) UNION (select jobstatuscode,count(1) stat_count from job j where 1=1 and j.countrycode = 'TH'   and j.facilitycode in ('THNPM1','THPRK1','THCNT1','THSPN1','THKRI1','THPKN1','THSBI1','THUTG1','THLRI1','THSRI1','THSUR1','THSKM1') and j.jobstartdatetime is null  group by j.jobstatuscode))lmp_delivery_jobs->  select js.jobstatuscode,COALESCE(stat_count,0) stat_count from JobCount jc right outer join jobstatus js on jc.jobstatuscode=js.jobstatuscode;                                                                                                               QUERY PLAN----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Right Join  (cost=79010.89..79011.19 rows=10 width=12) (actual time=444.241..444.256 rows=10 loops=1)   Hash Cond: ((jc.jobstatuscode)::text = (js.jobstatuscode)::text)   Buffers: shared hit=8560   CTE jobcount     ->  HashAggregate  (cost=79002.35..79002.48 rows=13 width=24) (actual time=444.211..444.213 rows=6 loops=1)           Group Key: j.jobstatuscode, (count(1))           Buffers: shared hit=8558           ->  Append  (cost=78959.64..79002.28 rows=13 width=24) (actual time=444.081..444.202 rows=6 loops=1)                 Buffers: shared hit=8558                 ->  Finalize GroupAggregate  (cost=78959.64..78961.41 rows=7 width=12) (actual time=444.079..444.101 rows=6 loops=1)                       Group Key: j.jobstatuscode                       Buffers: shared hit=8546                       ->  Gather Merge  (cost=78959.64..78961.27 rows=14 width=12) (actual time=444.063..444.526 rows=18 loops=1)                             Workers Planned: 2                             Workers Launched: 2                             Buffers: shared hit=17636                             ->  Sort  (cost=77959.61..77959.63 rows=7 width=12) (actual time=435.748..435.750 rows=6 loops=3)                                   Sort Key: j.jobstatuscode                                   Sort Method: quicksort  Memory: 25kB                                   Worker 0:  Sort Method: quicksort  Memory: 25kB                                   Worker 1:  Sort Method: quicksort  Memory: 25kB                                   Buffers: shared hit=17636                                   ->  Partial HashAggregate  (cost=77959.44..77959.51 rows=7 width=12) (actual time=435.703..435.706 rows=6 loops=3)                                         Group Key: j.jobstatuscode                                         Buffers: shared hit=17620                                         ->  Parallel Bitmap Heap Scan on job j  (cost=11528.22..76957.69 rows=200351 width=4) (actual time=47.682..281.928 rows=163200loops=3)                                               Recheck Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                               
Filter: ((jobstartdatetime >= '2020-08-01 00:00:00'::timestamp without time zone) AND (jobstartdatetime <= '2020-09-30 00:00:00'::timestamp without time zone))                                               Heap Blocks: exact=6633                                               Buffers: shared hit=17620                                               ->  Bitmap Index Scan on job_list_test1  (cost=0.00..11408.01 rows=482693 width=0) (actual time=49.825..49.826 rows=489600 loops=1)                                                     Index Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                                     Buffers: shared hit=1913                 ->  GroupAggregate  (cost=40.50..40.68 rows=6 width=12) (actual time=0.093..0.094 rows=0 loops=1)                       Group Key: j_1.jobstatuscode                       Buffers: shared hit=12                       ->  Sort  (cost=40.50..40.54 rows=16 width=4) (actual time=0.092..0.092 rows=0 loops=1)                             Sort Key: j_1.jobstatuscode                             Sort Method: quicksort  Memory: 25kB                             Buffers: shared hit=12                             ->  Index Scan using job_list_test3 on job j_1  (cost=0.14..40.18 rows=16 width=4) (actual time=0.081..0.082 rows=0 loops=1)                                   Index Cond: (((countrycode)::text = 'TH'::text) AND ((facilitycode)::text = ANY ('{THNPM1,THPRK1,THCNT1,THSPN1,THKRI1,THPKN1,THSBI1,THUTG1,THLRI1,THSRI1,THSUR1,THSKM1}'::text[])))                                   Buffers: shared hit=12   ->  CTE Scan on jobcount jc  (cost=0.00..0.26 rows=13 width=24) (actual time=444.215..444.221 rows=6 loops=1)         Buffers: shared hit=8558   ->  Hash  (cost=8.29..8.29 rows=10 width=4) (actual time=0.016..0.016 rows=10 loops=1)         Buckets: 1024  Batches: 1  Memory Usage: 9kB         Buffers: shared hit=2         ->  Index Only Scan using jobstatus_jobstatuscode_unq on jobstatus js  (cost=0.14..8.29 rows=10 width=4) (actual time=0.006..0.010 rows=10 loops=1)               Heap Fetches: 0               Buffers: shared hit=2 Planning Time: 0.808 ms Execution Time: 444.819 ms(53 rows) \n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Thu, 22 Oct 2020 10:57:08 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "On Wed, Oct 21, 2020 at 10:22 PM aditya desai <[email protected]> wrote:\n\n> As per application team, it is business requirement to show last 60 days\n>> worth data.\n>>\n>\nI didn't look deeply but it sounds like you are looking backwards into 60\ndays worth of detail every single time you perform the query and computing\nan aggregate directly from the detail. Stop doing that. By way of\nexample, at the end of every day compute the aggregates on the relevant\ndimensions and save them. 
Then query the saved aggregates from previous\ndays and add them to the computed aggregate from the current day's detail.\n\nDavid J.\n\nOn Wed, Oct 21, 2020 at 10:22 PM aditya desai <[email protected]> wrote:As per application team, it is business requirement to show last 60 days worth data.I didn't look deeply but it sounds like you are looking backwards into 60 days worth of detail every single time you perform the query and computing an aggregate directly from the detail.  Stop doing that.  By way of example, at the end of every day compute the aggregates on the relevant dimensions and save them.  Then query the saved aggregates from previous days and add them to the computed aggregate from the current day's detail.David J.", "msg_date": "Wed, 21 Oct 2020 22:32:55 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." }, { "msg_contents": "Hi David,\nThanks for the suggestion. Let me try to implement this as well. WIll get\nback to you soon.\n\nRegards,\nAditya.\n\nOn Thu, Oct 22, 2020 at 11:03 AM David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Oct 21, 2020 at 10:22 PM aditya desai <[email protected]> wrote:\n>\n>> As per application team, it is business requirement to show last 60 days\n>>> worth data.\n>>>\n>>\n> I didn't look deeply but it sounds like you are looking backwards into 60\n> days worth of detail every single time you perform the query and computing\n> an aggregate directly from the detail. Stop doing that. By way of\n> example, at the end of every day compute the aggregates on the relevant\n> dimensions and save them. Then query the saved aggregates from previous\n> days and add them to the computed aggregate from the current day's detail.\n>\n> David J.\n>\n>\n\nHi David,Thanks for the suggestion. Let me try to implement this as well. WIll get back to you soon.Regards,Aditya.On Thu, Oct 22, 2020 at 11:03 AM David G. Johnston <[email protected]> wrote:On Wed, Oct 21, 2020 at 10:22 PM aditya desai <[email protected]> wrote:As per application team, it is business requirement to show last 60 days worth data.I didn't look deeply but it sounds like you are looking backwards into 60 days worth of detail every single time you perform the query and computing an aggregate directly from the detail.  Stop doing that.  By way of example, at the end of every day compute the aggregates on the relevant dimensions and save them.  Then query the saved aggregates from previous days and add them to the computed aggregate from the current day's detail.David J.", "msg_date": "Thu, 22 Oct 2020 11:06:08 +0530", "msg_from": "aditya desai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU Consuming query. Sequential scan despite indexing." } ]
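The per-day pre-aggregation described above could be sketched roughly as follows. This is only an illustration: the summary table name, its columns and the chosen dimensions (status per country/facility/day) are assumptions rather than part of the original schema, and rows with a NULL jobstartdatetime would still need their own bucket.

  -- hypothetical daily summary table; all names here are illustrative
  CREATE TABLE job_status_daily (
      countrycode   text   NOT NULL,
      facilitycode  text   NOT NULL,
      jobstatuscode text   NOT NULL,
      job_date      date   NOT NULL,
      stat_count    bigint NOT NULL,
      PRIMARY KEY (countrycode, facilitycode, jobstatuscode, job_date)
  );

  -- run once per day, aggregating the previous day's detail rows
  INSERT INTO job_status_daily
  SELECT j.countrycode,
         j.facilitycode,
         j.jobstatuscode,
         j.jobstartdatetime::date AS job_date,
         count(*)                 AS stat_count
  FROM   job j
  WHERE  j.jobstartdatetime >= date_trunc('day', now() - interval '1 day')
  AND    j.jobstartdatetime <  date_trunc('day', now())
  GROUP  BY 1, 2, 3, 4;

The landing-page query can then sum the small summary table for the closed days and only scan the detail table for the current, still-open day.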
[ { "msg_contents": "Dear Postgres community,\n\nI'm looking for some help to manage queries against two large tables.\n\nContext:\nWe run a relatively large postgresql instance (5TB, 32 vCPU, 120GB RAM)\nwith a hybrid transactional/analytical workload. Data is written in batches\nevery 15 seconds or so, and the all queryable tables are append-only (we\nnever update or delete). Our users can run analytical queries on top of\nthese tables.\n\nWe recently came across a series of troublesome queries one of which I'll\ndive into here.\n\nPlease see the following gist for both the query we run and the \\d+ output:\nhttps://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf.\n\nThe tables in question are:\n- `ethereum.transactions`: 833M rows, partitioned, 171M rows after WHERE\n- `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M rows after\nWHERE\n\nThe crux of our issue is that the query planner chooses a nested loop join\nfor this query. Essentially making this query (and other queries) take a\nvery long time to complete. In contrast, by toggling `enable_nestloop` and\n`enable_seqscan` off we can take the total runtime down from 16 minutes to\n2 minutes.\n\n1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n3) enable_nestloop=off; enable_seqscan=off (2 min):\nhttps://explain.depesz.com/s/0WXx\n\nHow can I get Postgres not to loop over 12M rows?\n\nLet me know if there is anything I left out here that would be useful for\nfurther debugging.\n\n-- \nMats\nCTO @ Dune Analytics\nWe're hiring: https://careers.duneanalytics.com\n\nDear Postgres community,I'm looking for some help to manage queries against two large tables.Context:We run a relatively large postgresql instance (5TB, 32 vCPU, 120GB RAM) with a hybrid transactional/analytical workload. Data is written in batches every 15 seconds or so, and the all queryable tables are append-only (we never update or delete). Our users can run analytical queries on top of these tables.We recently came across a series of troublesome queries one of which I'll dive into here. Please see the following gist for both the query we run and the \\d+ output: https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf. The tables in question are: - `ethereum.transactions`: 833M rows, partitioned, 171M rows after WHERE- `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M rows after WHEREThe crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx  How can I get Postgres not to loop over 12M rows?Let me know if there is anything I left out here that would be useful for further debugging. 
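For reference, the enable_nestloop / enable_seqscan toggles mentioned above can be confined to a single transaction while experimenting, so the rest of the workload keeps the default planner behaviour. A minimal sketch only - the count(*) is a stand-in for the real join query from the gist:

  BEGIN;
  SET LOCAL enable_nestloop = off;  -- reverts automatically at COMMIT/ROLLBACK
  SET LOCAL enable_seqscan  = off;
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*) FROM uniswap_v2.\"Pair_evt_Swap\";
  ROLLBACK;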
-- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 09:37:42 +0000", "msg_from": "Mats Julian Olsen <[email protected]>", "msg_from_op": true, "msg_subject": "Query Performance / Planner estimate off" }, { "msg_contents": "On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low. You'll\nalso want to see if effective_cache_size is set to something\nrealistic. Higher values of that will prefer nested loops like this.\n\nYou may also want to reduce max_parallel_workers_per_gather. It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.\n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();\n\nwould be useful.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Oct 2020 22:50:14 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:\n\n> On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]>\n> wrote:\n> >\n> > The crux of our issue is that the query planner chooses a nested loop\n> join for this query. Essentially making this query (and other queries) take\n> a very long time to complete. In contrast, by toggling `enable_nestloop`\n> and `enable_seqscan` off we can take the total runtime down from 16 minutes\n> to 2 minutes.\n> >\n> > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n> https://explain.depesz.com/s/0WXx\n> >\n> > How can I get Postgres not to loop over 12M rows?\n>\n> You'll likely want to look at what random_page_cost is set to. If the\n> planner is preferring nested loops then it may be too low. You'll\n> also want to see if effective_cache_size is set to something\n> realistic. Higher values of that will prefer nested loops like this.\n>\n\nrandom_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the\ngist). random_page_cost may be too low?\n\n\n> You may also want to reduce max_parallel_workers_per_gather. 
It looks\n> like you're not getting your parallel workers as often as you'd like.\n> If the planner chooses a plan thinking it's going to get some workers\n> and gets none, then that plan may be inferior the one that the planner\n> would have chosen if it had known the workers would be unavailable.\n>\n\nInteresting, here are the values for those:\nmax_parallel_workers = 8\nmax_parallel_workers_per_gather = 4\n\n\n>\n> > Let me know if there is anything I left out here that would be useful\n> for further debugging.\n>\n> select name,setting from pg_Settings where category like 'Query\n> Tuning%' and source <> 'default';\n> select version();\n>\n\ndefault_statistics_target = 500\neffective_cache_size = 7864320\nrandom_page_cost = 1.1\n\nPostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\n>\n> would be useful.\n>\n> David\n>\n\nThanks David, see above for more information.\n\n-- \nMats\nCTO @ Dune Analytics\nWe're hiring: https://careers.duneanalytics.com\n\nOn Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low.  You'll\nalso want to see if effective_cache_size is set to something\nrealistic.  Higher values of that will prefer nested loops like this.random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the gist). random_page_cost may be too low? \nYou may also want to reduce max_parallel_workers_per_gather.  It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.Interesting, here are the values for those:max_parallel_workers = 8max_parallel_workers_per_gather = 4 \n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();default_statistics_target = 500effective_cache_size = 7864320random_page_cost = 1.1 PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\nwould be useful.\n\nDavid\nThanks David, see above for more information. -- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 09:59:05 +0000", "msg_from": "Mats Julian Olsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "út 20. 10. 
2020 v 11:59 odesílatel Mats Julian Olsen <[email protected]>\nnapsal:\n\n> On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:\n>\n>> On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]>\n>> wrote:\n>> >\n>> > The crux of our issue is that the query planner chooses a nested loop\n>> join for this query. Essentially making this query (and other queries) take\n>> a very long time to complete. In contrast, by toggling `enable_nestloop`\n>> and `enable_seqscan` off we can take the total runtime down from 16 minutes\n>> to 2 minutes.\n>> >\n>> > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>> > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>> > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>> https://explain.depesz.com/s/0WXx\n>> >\n>> > How can I get Postgres not to loop over 12M rows?\n>>\n>> You'll likely want to look at what random_page_cost is set to. If the\n>> planner is preferring nested loops then it may be too low. You'll\n>> also want to see if effective_cache_size is set to something\n>> realistic. Higher values of that will prefer nested loops like this.\n>>\n>\n> random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the\n> gist). random_page_cost may be too low?\n>\n\nrandom_page_cost 2 is safer - the value 1.5 is a little bit aggressive for\nme.\n\n\n>\n>> You may also want to reduce max_parallel_workers_per_gather. It looks\n>> like you're not getting your parallel workers as often as you'd like.\n>> If the planner chooses a plan thinking it's going to get some workers\n>> and gets none, then that plan may be inferior the one that the planner\n>> would have chosen if it had known the workers would be unavailable.\n>>\n>\n> Interesting, here are the values for those:\n> max_parallel_workers = 8\n> max_parallel_workers_per_gather = 4\n>\n>\n>>\n>> > Let me know if there is anything I left out here that would be useful\n>> for further debugging.\n>>\n>> select name,setting from pg_Settings where category like 'Query\n>> Tuning%' and source <> 'default';\n>> select version();\n>>\n>\n> default_statistics_target = 500\n> effective_cache_size = 7864320\n> random_page_cost = 1.1\n>\n> PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu,\n> compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n>\n>>\n>> would be useful.\n>>\n>> David\n>>\n>\n> Thanks David, see above for more information.\n>\n> --\n> Mats\n> CTO @ Dune Analytics\n> We're hiring: https://careers.duneanalytics.com\n>\n\nút 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low.  
You'll\nalso want to see if effective_cache_size is set to something\nrealistic.  Higher values of that will prefer nested loops like this.random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the gist). random_page_cost may be too low?random_page_cost 2 is safer - the value 1.5 is a little bit aggressive for me.  \nYou may also want to reduce max_parallel_workers_per_gather.  It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.Interesting, here are the values for those:max_parallel_workers = 8max_parallel_workers_per_gather = 4 \n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();default_statistics_target = 500effective_cache_size = 7864320random_page_cost = 1.1 PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\nwould be useful.\n\nDavid\nThanks David, see above for more information. -- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 12:50:05 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <\n> [email protected]> napsal:\n>\n>> On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]>\n>> wrote:\n>>\n>>> On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]>\n>>> wrote:\n>>> >\n>>> > The crux of our issue is that the query planner chooses a nested loop\n>>> join for this query. Essentially making this query (and other queries) take\n>>> a very long time to complete. In contrast, by toggling `enable_nestloop`\n>>> and `enable_seqscan` off we can take the total runtime down from 16 minutes\n>>> to 2 minutes.\n>>> >\n>>> > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>>> > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>>> > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>>> https://explain.depesz.com/s/0WXx\n>>> >\n>>> > How can I get Postgres not to loop over 12M rows?\n>>>\n>>> You'll likely want to look at what random_page_cost is set to. If the\n>>> planner is preferring nested loops then it may be too low. You'll\n>>> also want to see if effective_cache_size is set to something\n>>> realistic. Higher values of that will prefer nested loops like this.\n>>>\n>>\n>> random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the\n>> gist). random_page_cost may be too low?\n>>\n>\n> random_page_cost 2 is safer - the value 1.5 is a little bit aggressive for\n> me.\n>\n\nThanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3... all\nthe way up to 10. All values resulted in the same query plan, except for\n10, which then executed a parallel hash join (however with sequential\nscans) https://explain.depesz.com/s/Srcb.\n\n10 seems like a way too high value for random_page_cost though?\n\n\n>\n>>\n>>> You may also want to reduce max_parallel_workers_per_gather. 
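If a higher random_page_cost does turn out to produce better plans, it does not have to be changed globally in postgresql.conf straight away; it can be tried per session and then persisted for one database. A sketch only - 2.0 is simply the figure discussed here and the database name is a placeholder:

  -- try in the current session first
  SET random_page_cost = 2.0;
  SET effective_cache_size = '60GB';
  -- re-run EXPLAIN (ANALYZE, BUFFERS) on the problem query here

  -- if the plans improve, persist it for this database only
  ALTER DATABASE your_db SET random_page_cost = 2.0;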
It looks\n>>> like you're not getting your parallel workers as often as you'd like.\n>>> If the planner chooses a plan thinking it's going to get some workers\n>>> and gets none, then that plan may be inferior the one that the planner\n>>> would have chosen if it had known the workers would be unavailable.\n>>>\n>>\n>> Interesting, here are the values for those:\n>> max_parallel_workers = 8\n>> max_parallel_workers_per_gather = 4\n>>\n>>\n>>>\n>>> > Let me know if there is anything I left out here that would be useful\n>>> for further debugging.\n>>>\n>>> select name,setting from pg_Settings where category like 'Query\n>>> Tuning%' and source <> 'default';\n>>> select version();\n>>>\n>>\n>> default_statistics_target = 500\n>> effective_cache_size = 7864320\n>> random_page_cost = 1.1\n>>\n>> PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu,\n>> compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n>>\n>>>\n>>> would be useful.\n>>>\n>>> David\n>>>\n>>\n>> Thanks David, see above for more information.\n>>\n>> --\n>> Mats\n>> CTO @ Dune Analytics\n>> We're hiring: https://careers.duneanalytics.com\n>>\n>\n\n-- \nMats\nCTO @ Dune Analytics\nWe're hiring: https://careers.duneanalytics.com\n\nOn Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]> wrote:út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low.  You'll\nalso want to see if effective_cache_size is set to something\nrealistic.  Higher values of that will prefer nested loops like this.random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the gist). random_page_cost may be too low?random_page_cost 2 is safer - the value 1.5 is a little bit aggressive for me. Thanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3... all the way up to 10. All values resulted in the same query plan, except for 10, which then executed a parallel hash join (however with sequential scans) https://explain.depesz.com/s/Srcb.10 seems like a way too high value for random_page_cost though? \nYou may also want to reduce max_parallel_workers_per_gather.  
It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.Interesting, here are the values for those:max_parallel_workers = 8max_parallel_workers_per_gather = 4 \n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();default_statistics_target = 500effective_cache_size = 7864320random_page_cost = 1.1 PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\nwould be useful.\n\nDavid\nThanks David, see above for more information. -- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com\n\n-- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 11:09:13 +0000", "msg_from": "Mats Julian Olsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "út 20. 10. 2020 v 13:09 odesílatel Mats Julian Olsen <[email protected]>\nnapsal:\n\n>\n>\n> On Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <\n>> [email protected]> napsal:\n>>\n>>> On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]>\n>>> wrote:\n>>>\n>>>> On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]>\n>>>> wrote:\n>>>> >\n>>>> > The crux of our issue is that the query planner chooses a nested loop\n>>>> join for this query. Essentially making this query (and other queries) take\n>>>> a very long time to complete. In contrast, by toggling `enable_nestloop`\n>>>> and `enable_seqscan` off we can take the total runtime down from 16 minutes\n>>>> to 2 minutes.\n>>>> >\n>>>> > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>>>> > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>>>> > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>>>> https://explain.depesz.com/s/0WXx\n>>>> >\n>>>> > How can I get Postgres not to loop over 12M rows?\n>>>>\n>>>> You'll likely want to look at what random_page_cost is set to. If the\n>>>> planner is preferring nested loops then it may be too low. You'll\n>>>> also want to see if effective_cache_size is set to something\n>>>> realistic. Higher values of that will prefer nested loops like this.\n>>>>\n>>>\n>>> random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in\n>>> the gist). random_page_cost may be too low?\n>>>\n>>\n>> random_page_cost 2 is safer - the value 1.5 is a little bit aggressive\n>> for me.\n>>\n>\n> Thanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3... all\n> the way up to 10. All values resulted in the same query plan, except for\n> 10, which then executed a parallel hash join (however with sequential\n> scans) https://explain.depesz.com/s/Srcb.\n>\n> 10 seems like a way too high value for random_page_cost though?\n>\n\nit is not usual, but I know about analytics cases where is this value. But\nmaybe effective_cache_size is too high.\n\n>\n>\n>>\n>>>\n>>>> You may also want to reduce max_parallel_workers_per_gather. 
It looks\n>>>> like you're not getting your parallel workers as often as you'd like.\n>>>> If the planner chooses a plan thinking it's going to get some workers\n>>>> and gets none, then that plan may be inferior the one that the planner\n>>>> would have chosen if it had known the workers would be unavailable.\n>>>>\n>>>\n>>> Interesting, here are the values for those:\n>>> max_parallel_workers = 8\n>>> max_parallel_workers_per_gather = 4\n>>>\n>>>\n>>>>\n>>>> > Let me know if there is anything I left out here that would be useful\n>>>> for further debugging.\n>>>>\n>>>> select name,setting from pg_Settings where category like 'Query\n>>>> Tuning%' and source <> 'default';\n>>>> select version();\n>>>>\n>>>\n>>> default_statistics_target = 500\n>>> effective_cache_size = 7864320\n>>> random_page_cost = 1.1\n>>>\n>>> PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu,\n>>> compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n>>>\n>>>>\n>>>> would be useful.\n>>>>\n>>>> David\n>>>>\n>>>\n>>> Thanks David, see above for more information.\n>>>\n>>> --\n>>> Mats\n>>> CTO @ Dune Analytics\n>>> We're hiring: https://careers.duneanalytics.com\n>>>\n>>\n>\n> --\n> Mats\n> CTO @ Dune Analytics\n> We're hiring: https://careers.duneanalytics.com\n>\n\nút 20. 10. 2020 v 13:09 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]> wrote:út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low.  You'll\nalso want to see if effective_cache_size is set to something\nrealistic.  Higher values of that will prefer nested loops like this.random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the gist). random_page_cost may be too low?random_page_cost 2 is safer - the value 1.5 is a little bit aggressive for me. Thanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3... all the way up to 10. All values resulted in the same query plan, except for 10, which then executed a parallel hash join (however with sequential scans) https://explain.depesz.com/s/Srcb.10 seems like a way too high value for random_page_cost though?it is not usual, but I know about analytics cases where is this value. But maybe  effective_cache_size is too high.  \nYou may also want to reduce max_parallel_workers_per_gather.  
It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.Interesting, here are the values for those:max_parallel_workers = 8max_parallel_workers_per_gather = 4 \n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();default_statistics_target = 500effective_cache_size = 7864320random_page_cost = 1.1 PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\nwould be useful.\n\nDavid\nThanks David, see above for more information. -- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com\n\n-- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 13:15:42 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Tue, Oct 20, 2020 at 11:16 AM Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> út 20. 10. 2020 v 13:09 odesílatel Mats Julian Olsen <\n> [email protected]> napsal:\n>\n>>\n>>\n>> On Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <\n>>> [email protected]> napsal:\n>>>\n>>>> On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <\n>>>>> [email protected]> wrote:\n>>>>> >\n>>>>> > The crux of our issue is that the query planner chooses a nested\n>>>>> loop join for this query. Essentially making this query (and other queries)\n>>>>> take a very long time to complete. In contrast, by toggling\n>>>>> `enable_nestloop` and `enable_seqscan` off we can take the total runtime\n>>>>> down from 16 minutes to 2 minutes.\n>>>>> >\n>>>>> > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>>>>> > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>>>>> > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>>>>> https://explain.depesz.com/s/0WXx\n>>>>> >\n>>>>> > How can I get Postgres not to loop over 12M rows?\n>>>>>\n>>>>> You'll likely want to look at what random_page_cost is set to. If the\n>>>>> planner is preferring nested loops then it may be too low. You'll\n>>>>> also want to see if effective_cache_size is set to something\n>>>>> realistic. Higher values of that will prefer nested loops like this.\n>>>>>\n>>>>\n>>>> random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in\n>>>> the gist). random_page_cost may be too low?\n>>>>\n>>>\n>>> random_page_cost 2 is safer - the value 1.5 is a little bit aggressive\n>>> for me.\n>>>\n>>\n>> Thanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3...\n>> all the way up to 10. All values resulted in the same query plan, except\n>> for 10, which then executed a parallel hash join (however with sequential\n>> scans) https://explain.depesz.com/s/Srcb.\n>>\n>> 10 seems like a way too high value for random_page_cost though?\n>>\n>\n> it is not usual, but I know about analytics cases where is this value. 
But\n> maybe effective_cache_size is too high.\n>\n\nChanging the effective_cache_size from 10GB up to 60GB does not affect the\nNested Loop-part of this query plan. It does alter the inner part of a loop\nfrom sequential (low cache) to index scans (high cache).\n\n\n>\n>>\n>>>\n>>>>\n>>>>> You may also want to reduce max_parallel_workers_per_gather. It looks\n>>>>> like you're not getting your parallel workers as often as you'd like.\n>>>>> If the planner chooses a plan thinking it's going to get some workers\n>>>>> and gets none, then that plan may be inferior the one that the planner\n>>>>> would have chosen if it had known the workers would be unavailable.\n>>>>>\n>>>>\n>>>> Interesting, here are the values for those:\n>>>> max_parallel_workers = 8\n>>>> max_parallel_workers_per_gather = 4\n>>>>\n>>>>\n>>>>>\n>>>>> > Let me know if there is anything I left out here that would be\n>>>>> useful for further debugging.\n>>>>>\n>>>>> select name,setting from pg_Settings where category like 'Query\n>>>>> Tuning%' and source <> 'default';\n>>>>> select version();\n>>>>>\n>>>>\n>>>> default_statistics_target = 500\n>>>> effective_cache_size = 7864320\n>>>> random_page_cost = 1.1\n>>>>\n>>>> PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu,\n>>>> compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n>>>>\n>>>>>\n>>>>> would be useful.\n>>>>>\n>>>>> David\n>>>>>\n>>>>\n>>>> Thanks David, see above for more information.\n>>>>\n>>>> --\n>>>> Mats\n>>>> CTO @ Dune Analytics\n>>>> We're hiring: https://careers.duneanalytics.com\n>>>>\n>>>\n>>\n>> --\n>> Mats\n>> CTO @ Dune Analytics\n>> We're hiring: https://careers.duneanalytics.com\n>>\n>\n\n-- \nMats\nCTO @ Dune Analytics\nWe're hiring: https://careers.duneanalytics.com\n\nOn Tue, Oct 20, 2020 at 11:16 AM Pavel Stehule <[email protected]> wrote:út 20. 10. 2020 v 13:09 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 10:50 AM Pavel Stehule <[email protected]> wrote:út 20. 10. 2020 v 11:59 odesílatel Mats Julian Olsen <[email protected]> napsal:On Tue, Oct 20, 2020 at 9:50 AM David Rowley <[email protected]> wrote:On Tue, 20 Oct 2020 at 22:38, Mats Julian Olsen <[email protected]> wrote:\n>\n> The crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n\nYou'll likely want to look at what random_page_cost is set to. If the\nplanner is preferring nested loops then it may be too low.  You'll\nalso want to see if effective_cache_size is set to something\nrealistic.  Higher values of that will prefer nested loops like this.random_page_cost is 1.1 and effective_cache_size is '60GB' (listed in the gist). random_page_cost may be too low?random_page_cost 2 is safer - the value 1.5 is a little bit aggressive for me. Thanks Pavel. I tried changing random_page_cost from 1.1 to 2, to 3... all the way up to 10. 
All values resulted in the same query plan, except for 10, which then executed a parallel hash join (however with sequential scans) https://explain.depesz.com/s/Srcb.10 seems like a way too high value for random_page_cost though?it is not usual, but I know about analytics cases where is this value. But maybe  effective_cache_size is too high. Changing the effective_cache_size from 10GB up to 60GB does not affect the Nested Loop-part of this query plan. It does alter the inner part of a loop from sequential (low cache) to index scans (high cache).  \nYou may also want to reduce max_parallel_workers_per_gather.  It looks\nlike you're not getting your parallel workers as often as you'd like.\nIf the planner chooses a plan thinking it's going to get some workers\nand gets none, then that plan may be inferior the one that the planner\nwould have chosen if it had known the workers would be unavailable.Interesting, here are the values for those:max_parallel_workers = 8max_parallel_workers_per_gather = 4 \n\n> Let me know if there is anything I left out here that would be useful for further debugging.\n\nselect name,setting from pg_Settings where category like 'Query\nTuning%' and source <> 'default';\nselect version();default_statistics_target = 500effective_cache_size = 7864320random_page_cost = 1.1 PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\n\nwould be useful.\n\nDavid\nThanks David, see above for more information. -- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com\n\n-- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com\n\n-- MatsCTO @ Dune AnalyticsWe're hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 11:20:27 +0000", "msg_from": "Mats Julian Olsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected]>:\n\n> I'm looking for some help to manage queries against two large tables.\n>\n\nCan you tell the version you're running currently and the output of this\nquery, please?\n\n select name,setting,source from pg_settings where source not in\n('default','override');\n\n-- \nVictor Yegorov\n\nвт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected]>:I'm looking for some help to manage queries against two large tables.Can you tell the version you're running currently and the output of this query, please?    select name,setting,source from pg_settings where source not in ('default','override'); -- Victor Yegorov", "msg_date": "Tue, 20 Oct 2020 15:04:56 +0200", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected]>:\n\n> I'm looking for some help to manage queries against two large tables.\n>\n\nAlso, can you enable `track_io_timing` (no restart required) and provide\noutput of `EXPLAIN (analyze, buffers, settings)` for all 3 variants,\nplease?\n(I assume you're on 12+.)\n\n-- \nVictor Yegorov\n\nвт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected]>:I'm looking for some help to manage queries against two large tables.Also, can you enable `track_io_timing` (no restart required) and provide output of `EXPLAIN (analyze, buffers, settings)` for all 3 variants, please? 
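A sketch of how the requested output can be gathered on 12.x - setting track_io_timing per session normally requires superuser rights, otherwise enable it in postgresql.conf and reload; the count(*) is only a stand-in for the three plan variants being compared:

  SET track_io_timing = on;   -- superuser only when set per session
  EXPLAIN (ANALYZE, BUFFERS, SETTINGS)
  SELECT count(*) FROM uniswap_v2.\"Pair_evt_Swap\";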
(I assume you're on 12+.)-- Victor Yegorov", "msg_date": "Tue, 20 Oct 2020 15:22:41 +0200", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "Looping in the main group ID.\n\nRegards\nSushant\n\nOn Tue, Oct 20, 2020 at 6:49 PM Sushant Pawar <[email protected]> wrote:\n\n>\n> On Tue, Oct 20, 2020 at 3:08 PM Mats Julian Olsen <[email protected]>\n> wrote:\n>\n>> Dear Postgres community,\n>>\n>> I'm looking for some help to manage queries against two large tables.\n>>\n>> Context:\n>> We run a relatively large postgresql instance (5TB, 32 vCPU, 120GB RAM)\n>> with a hybrid transactional/analytical workload. Data is written in batches\n>> every 15 seconds or so, and the all queryable tables are append-only (we\n>> never update or delete). Our users can run analytical queries on top of\n>> these tables.\n>>\n>> We recently came across a series of troublesome queries one of which I'll\n>> dive into here.\n>>\n>> Please see the following gist for both the query we run and the \\d+\n>> output: https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf.\n>>\n>> The tables in question are:\n>> - `ethereum.transactions`: 833M rows, partitioned, 171M rows after WHERE\n>> - `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M rows after\n>> WHERE\n>>\n>> The crux of our issue is that the query planner chooses a nested loop\n>> join for this query. Essentially making this query (and other queries) take\n>> a very long time to complete. In contrast, by toggling `enable_nestloop`\n>> and `enable_seqscan` off we can take the total runtime down from 16 minutes\n>> to 2 minutes.\n>>\n>> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>> 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>> https://explain.depesz.com/s/0WXx\n>>\n>\n> The cost of a query while using the default Vanila plan is very less\n> compared to the 3rd plan with nested loop and seqscan being set to off.\n> As the JIT is enabled, it seems the planner tries to select the plan with\n> the least cost and going for the plan which is taking more time of\n> execution. Can you try running this query with JIT=off in the session and\n> see if it selects the plan with the least time for execution?\n>\n>>\n>> How can I get Postgres not to loop over 12M rows?\n>>\n>> Let me know if there is anything I left out here that would be useful for\n>> further debugging.\n>>\n>> --\n>> Regards\n>>\n> Sushant\n>\n\nLooping in the main group ID.RegardsSushantOn Tue, Oct 20, 2020 at 6:49 PM Sushant Pawar <[email protected]> wrote:On Tue, Oct 20, 2020 at 3:08 PM Mats Julian Olsen <[email protected]> wrote:Dear Postgres community,I'm looking for some help to manage queries against two large tables.Context:We run a relatively large postgresql instance (5TB, 32 vCPU, 120GB RAM) with a hybrid transactional/analytical workload. Data is written in batches every 15 seconds or so, and the all queryable tables are append-only (we never update or delete). Our users can run analytical queries on top of these tables.We recently came across a series of troublesome queries one of which I'll dive into here. Please see the following gist for both the query we run and the \\d+ output: https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf. 
The tables in question are: - `ethereum.transactions`: 833M rows, partitioned, 171M rows after WHERE- `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M rows after WHEREThe crux of our issue is that the query planner chooses a nested loop join for this query. Essentially making this query (and other queries) take a very long time to complete. In contrast, by toggling `enable_nestloop` and `enable_seqscan` off we can take the total runtime down from 16 minutes to 2 minutes.1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK 3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx The cost of a query while using the default Vanila plan is very less compared to the 3rd plan with nested loop and seqscan  being set to off.  As the JIT is enabled, it seems the planner tries to select the plan with the least cost and going for the plan which is taking more time of execution. Can you try running this query with JIT=off in the session and see if it selects the plan with the least time for execution? How can I get Postgres not to loop over 12M rows?Let me know if there is anything I left out here that would be useful for further debugging. -- Regards    Sushant", "msg_date": "Tue, 20 Oct 2020 19:10:08 +0530", "msg_from": "Sushant Pawar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/20/20 3:04 PM, Victor Yegorov wrote:\n> вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected] \n> <mailto:[email protected]>>:\n>\n> I'm looking for some help to manage queries against two large tables.\n>\n>\n> Can you tell the version you're running currently and the output of \n> this query, please?\n>\n>     select name,setting,source from pg_settings where source not in \n> ('default','override');\n>\nRunning \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on \nx86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 \n20191008, 64-bit\"\n\nUpdated the gist to include the results forom pg_settings. Here's the \ndirect link \nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings\n\n\n\n\n\n\n\n\n\nOn 10/20/20 3:04 PM, Victor Yegorov\n wrote:\n\n\n\n\nвт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen\n <[email protected]>:\n\n\n\n\nI'm looking for some help to manage queries against\n two large tables.\n\n\n\n\nCan you tell the version you're running currently and the\n output of this query, please?\n\n     select name,setting,source from pg_settings where source\n not in ('default','override'); \n\n\n\n\n\nRunning \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on\n x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1\n 20191008, 64-bit\"\n\nUpdated the gist to include the results forom pg_settings. 
Here's\n the direct link\nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings", "msg_date": "Tue, 20 Oct 2020 16:50:02 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/20/20 11:37 AM, Mats Julian Olsen wrote:\n\n> Dear Postgres community,\n>\n> I'm looking for some help to manage queries against two large tables.\n>\n> Context:\n> We run a relatively large postgresql instance (5TB, 32 vCPU, 120GB \n> RAM) with a hybrid transactional/analytical workload. Data is written \n> in batches every 15 seconds or so, and the all queryable tables are \n> append-only (we never update or delete). Our users can run analytical \n> queries on top of these tables.\n>\n> We recently came across a series of troublesome queries one of which \n> I'll dive into here.\n>\n> Please see the following gist for both the query we run and the \\d+ \n> output: \n> https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf \n> <https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf>.\n>\n> The tables in question are:\n> - `ethereum.transactions`: 833M rows, partitioned, 171M rows after WHERE\n> - `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M rows \n> after WHERE\n\nThe query plans I submitted was querying the table \n`uniswap_v2.\"Pair_evt_Mint\"`which has 560k rows before and after WHERE. \nAlso not partitioned. Apologies for the inconsistency, but as I \nmentioned the same performance problem holds when using \n`uniswap_v2.\"Pair_evt_Swap\" (even worse due to it's size).\n\n>\n> The crux of our issue is that the query planner chooses a nested loop \n> join for this query. Essentially making this query (and other queries) \n> take a very long time to complete. In contrast, by toggling \n> `enable_nestloop` and `enable_seqscan` off we can take the total \n> runtime down from 16 minutes to 2 minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR \n> <https://explain.depesz.com/s/NvDR>\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK \n> <https://explain.depesz.com/s/buKK>\n> 3) enable_nestloop=off; enable_seqscan=off (2 min): \n> https://explain.depesz.com/s/0WXx <https://explain.depesz.com/s/0WXx>\n>\n> How can I get Postgres not to loop over 12M rows?\n>\n> Let me know if there is anything I left out here that would be useful \n> for further debugging.\n>\n> -- \n> Mats\n> CTO @ Dune Analytics\n> We're hiring: https://careers.duneanalytics.com \n> <https://careers.duneanalytics.com>\n\n\n\n\n\n\nOn 10/20/20 11:37 AM, Mats Julian Olsen wrote:\n\n\n\n\nDear Postgres community,\n\n\nI'm looking for some help to manage queries against two\n large tables.\n\n\n\nContext:\n\nWe run a relatively large postgresql instance (5TB, 32\n vCPU, 120GB RAM) with a hybrid transactional/analytical\n workload. Data is written in batches every 15 seconds or so,\n and the all queryable tables are append-only (we never update\n or delete). Our users can run analytical queries on top of\n these tables.\n\n\nWe recently came across a series of troublesome queries one\n of which I'll dive into here. 
\n\n\nPlease see the following gist for both the query we run and\n the \\d+ output: https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf.\n \n\n\n\n\nThe tables in question are:\n\n - `ethereum.transactions`: 833M rows, partitioned,\n 171M rows after WHERE\n\n- `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not\n partitioned, 12M rows after WHERE\n\n\n\n\n\nThe query plans I submitted was querying the table\n `uniswap_v2.\"Pair_evt_Mint\"`which has 560k rows before and after\n WHERE. Also not partitioned. Apologies for the inconsistency, but\n as I mentioned the same performance problem holds when using\n `uniswap_v2.\"Pair_evt_Swap\" (even worse due to it's size).\n\n\n\n\n\n\nThe crux of our issue is that the query planner chooses a\n nested loop join for this query. Essentially making this\n query (and other queries) take a very long time to complete.\n In contrast, by toggling `enable_nestloop` and\n `enable_seqscan` off we can take the total runtime down from\n 16 minutes to 2 minutes.\n\n\n1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n\n2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n\n\n3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx \n \n\n\nHow can I get Postgres not to loop over 12M rows?\n\n\n\nLet me know if there is anything I left out here that\n would be useful for further debugging. \n\n\n\n-- \n\n\n\n\n\n\n\n\n\n\n\n\nMats\nCTO @ Dune Analytics\n\nWe're\n hiring: https://careers.duneanalytics.com", "msg_date": "Tue, 20 Oct 2020 18:45:36 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "вт, 20 окт. 2020 г. в 16:50, Mats Olsen <[email protected]>:\n\n> On 10/20/20 3:04 PM, Victor Yegorov wrote:\n>\n> вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected]>:\n>\n>> I'm looking for some help to manage queries against two large tables.\n>>\n>\n> Can you tell the version you're running currently and the output of this\n> query, please?\n>\n> select name,setting,source from pg_settings where source not in\n> ('default','override');\n>\n> Running \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on\n> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1\n> 20191008, 64-bit\"\n>\n> Updated the gist to include the results forom pg_settings. Here's the\n> direct link\n> https://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings\n>\nIt looks like indexes currently chosen by the planner don't quite fit your\nquery.\n\nI would create the following index (if it's possible to update schema):\n ON \"uniswap_v2.Pair_evt_Mint\" (evt_tx_hash, evt_block_time)\n\nSame for the second table, looks like\n ON \"ethereum.transactions\" (hash, block_time)\nis a better fit for your query. In fact, I do not think\n`transactions_block_number_time` index is used frequently, 'cos second\ncolumn of the index is a partitioning key.\n\nCurrently planner wants to go via indexes 'cos you've made random access\nreally cheap compared to sequential one (and your findings shows this).\nPerhaps on a NVMe disks this could work, but in your case you need to find\nthe real bottleneck (therefore I asked for buffers).\n\nI would set `random_page_cost` to a 2.5 at least with your numbers. Also, I\nwould check DB and indexes for bloat (just a guess now, 'cos your plans\nmiss buffers figures).\n\n\n-- \nVictor Yegorov\n\nвт, 20 окт. 2020 г. 
в 16:50, Mats Olsen <[email protected]>:\n\nOn 10/20/20 3:04 PM, Victor Yegorov\n wrote:\n\n\nвт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen\n <[email protected]>:\n\n\n\n\nI'm looking for some help to manage queries against\n two large tables.\n\n\n\n\nCan you tell the version you're running currently and the\n output of this query, please?\n\n     select name,setting,source from pg_settings where source\n not in ('default','override'); \n\n\n\n\n\nRunning \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on\n x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1\n 20191008, 64-bit\"\n\nUpdated the gist to include the results forom pg_settings. Here's\n the direct link\nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings\n\n\nIt looks like indexes currently chosen by the planner don't quite fit your query.I would create the following index (if it's possible to update schema):   ON \"uniswap_v2.Pair_evt_Mint\" (evt_tx_hash, evt_block_time)Same for the second table, looks like  ON \"ethereum.transactions\" (hash, block_time)is a better fit for your query. In fact, I do not think `transactions_block_number_time` index is used frequently, 'cos second column of the index is a partitioning key.Currently planner wants to go via indexes 'cos you've made random access really cheap compared to sequential one (and your findings shows this).Perhaps on a NVMe disks this could work, but in your case you need to find the real bottleneck (therefore I asked for buffers).I would set `random_page_cost` to a 2.5 at least with your numbers. Also, I would check DB and indexes for bloat (just a guess now, 'cos your plans miss buffers figures).-- Victor Yegorov", "msg_date": "Tue, 20 Oct 2020 18:51:39 +0200", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/20/20 3:40 PM, Sushant Pawar wrote:\n> Looping in the main group ID.\n>\n> Regards\n> Sushant\n>\n> On Tue, Oct 20, 2020 at 6:49 PM Sushant Pawar <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> On Tue, Oct 20, 2020 at 3:08 PM Mats Julian Olsen\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Dear Postgres community,\n>\n> I'm looking for some help to manage queries against two large\n> tables.\n>\n> Context:\n> We run a relatively large postgresql instance (5TB, 32 vCPU,\n> 120GB RAM) with a hybrid transactional/analytical workload.\n> Data is written in batches every 15 seconds or so, and the all\n> queryable tables are append-only (we never update or delete).\n> Our users can run analytical queries on top of these tables.\n>\n> We recently came across a series of troublesome queries one of\n> which I'll dive into here.\n>\n> Please see the following gist for both the query we run and\n> the \\d+ output:\n> https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf\n> <https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf>.\n>\n>\n> The tables in question are:\n> - `ethereum.transactions`: 833M rows, partitioned, 171M rows\n> after WHERE\n> - `uniswap_v2.\"Pair_evt_Swap\": 12M rows, not partitioned, 12M\n> rows after WHERE\n>\n> The crux of our issue is that the query planner chooses a\n> nested loop join for this query. Essentially making this query\n> (and other queries) take a very long time to complete. 
In\n> contrast, by toggling `enable_nestloop` and `enable_seqscan`\n> off we can take the total runtime down from 16 minutes to 2\n> minutes.\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> <https://explain.depesz.com/s/NvDR>\n> 2) enable_nestloop=off (4 min):\n> https://explain.depesz.com/s/buKK\n> <https://explain.depesz.com/s/buKK>\n> 3) enable_nestloop=off; enable_seqscan=off (2 min):\n> https://explain.depesz.com/s/0WXx\n> <https://explain.depesz.com/s/0WXx>\n>\n>\n> The cost of a query while using the default Vanila plan is very\n> less compared to the 3rd plan with nested loop and seqscan  being\n> set to off.  As the JIT is enabled, it seems the planner tries to\n> select the plan with the least cost and going for the plan which\n> is taking more time of execution. Can you try running this query\n> with JIT=off in the session and see if it selects the plan with\n> the least time for execution?\n>\nThank you for your reply. Here's the result using set jit=off; \nhttps://explain.depesz.com/s/rpKc. It's essentially the same plan as the \ninitial one.\n\n>\n> How can I get Postgres not to loop over 12M rows?\n>\n> Let me know if there is anything I left out here that would be\n> useful for further debugging.\n>\n> -- \n> Regards\n>\n>     Sushant\n>\n\n\n\n\n\n\n\n\nOn 10/20/20 3:40 PM, Sushant Pawar\n wrote:\n\n\n\n\nLooping in the main group ID.\n\n\nRegards\nSushant\n\n\nOn Tue, Oct 20, 2020 at 6:49\n PM Sushant Pawar <[email protected]> wrote:\n\n\n\n\nOn Tue, Oct 20, 2020\n at 3:08 PM Mats Julian Olsen <[email protected]>\n wrote:\n\n\n\nDear Postgres community,\n\n\nI'm looking for some help to manage queries\n against two large tables.\n\n\n\nContext:\n\nWe run a relatively large postgresql instance\n (5TB, 32 vCPU, 120GB RAM) with a hybrid\n transactional/analytical workload. Data is written\n in batches every 15 seconds or so, and the all\n queryable tables are append-only (we never update\n or delete). Our users can run analytical queries\n on top of these tables.\n\n\nWe recently came across a series of troublesome\n queries one of which I'll dive into here. \n\n\nPlease see the following gist for both the\n query we run and the \\d+ output: https://gist.github.com/mewwts/9f11ae5e6a5951593b8999559f5418cf.\n \n\n\n\n\nThe tables in question are:\n\n - `ethereum.transactions`: 833M rows,\n partitioned, 171M rows after WHERE\n\n- `uniswap_v2.\"Pair_evt_Swap\": 12M rows,\n not partitioned, 12M rows after WHERE\n\n\n\n\nThe crux of our issue is that the query\n planner chooses a nested loop join for this\n query. Essentially making this query (and other\n queries) take a very long time to complete. In\n contrast, by toggling `enable_nestloop` and\n `enable_seqscan` off we can take the total\n runtime down from 16 minutes to 2 minutes.\n\n\n1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n\n2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n\n\n3) enable_nestloop=off; enable_seqscan=off (2\n min): https://explain.depesz.com/s/0WXx \n\n\n\n\n\nThe cost of a query while\n using the default Vanila plan is very less compared\n to the 3rd plan with nested loop and seqscan  being\n set to off.  As the JIT is enabled, it seems the\n planner tries to select the plan with the least cost\n and going for the plan which is taking more time of\n execution. Can you try running this query with\n JIT=off in the session and see if it selects the\n plan with the least time for execution?\n\n\n\n\n\n\nThank you for your reply. 
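(For reference, the suggested experiment amounts to the following, run in the
same session; nothing in it is specific to the schema:

set jit = off;
-- re-run the same EXPLAIN (ANALYZE, BUFFERS) statement from the gist here
reset jit;

Since the JIT decision is taken after the plan has been chosen, turning it off
only changes execution, not the join strategy, so an unchanged plan shape is
the expected outcome.)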
Here's the result using set jit=off;\n https://explain.depesz.com/s/rpKc. It's essentially the same plan\n as the initial one.\n\n\n\n\n\n\n\n\n\n\n \n\n\nHow can I get Postgres not to loop over 12M\n rows?\n\n\n\nLet me know if there is anything I left out\n here that would be useful for further debugging.\n \n\n\n\n-- \n\n\n\n\n\n\n\n\n\n\n\n\nRegards\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n    Sushant", "msg_date": "Tue, 20 Oct 2020 19:02:09 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/20/20 3:22 PM, Victor Yegorov wrote:\n> вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen <[email protected] \n> <mailto:[email protected]>>:\n>\n> I'm looking for some help to manage queries against two large tables.\n>\n>\n> Also, can you enable `track_io_timing` (no restart required) and \n> provide output of `EXPLAIN (analyze, buffers, settings)` for all 3 \n> variants, please?\n> (I assume you're on 12+.)\n\nThanks! Yes on 12.2. Here's the output:\n\nvanilla: https://explain.depesz.com/s/Ktrd\n\nset enable_nestloop=off: https://explain.depesz.com/s/mvSD\n\nset enable_nestloop=off; set enable_seqscan=off: \nhttps://explain.depesz.com/s/XIDo\n\nAre these helpful?\n\n>\n> -- \n> Victor Yegorov\n\n\n\n\n\n\n\n\nOn 10/20/20 3:22 PM, Victor Yegorov\n wrote:\n\n\n\n\nвт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen\n <[email protected]>:\n\n\n\n\nI'm looking for some help to manage queries against\n two large tables.\n\n\n\n Also, can you enable `track_io_timing` (no restart required)\n and provide output of `EXPLAIN (analyze, buffers, settings)`\n for all 3 variants, please? \n\n (I assume you're on 12+.)\n\n\nThanks! Yes on 12.2. Here's the output:\n\nvanilla: https://explain.depesz.com/s/Ktrd\nset enable_nestloop=off: https://explain.depesz.com/s/mvSD\n\nset enable_nestloop=off; set enable_seqscan=off:\n https://explain.depesz.com/s/XIDo\nAre these helpful?\n\n\n\n\n\n -- \n\n\nVictor Yegorov", "msg_date": "Tue, 20 Oct 2020 19:40:40 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/20/20 6:51 PM, Victor Yegorov wrote:\n> вт, 20 окт. 2020 г. в 16:50, Mats Olsen <[email protected] \n> <mailto:[email protected]>>:\n>\n> On 10/20/20 3:04 PM, Victor Yegorov wrote:\n>\n>> вт, 20 окт. 2020 г. в 11:38, Mats Julian Olsen\n>> <[email protected] <mailto:[email protected]>>:\n>>\n>> I'm looking for some help to manage queries against two large\n>> tables.\n>>\n>>\n>> Can you tell the version you're running currently and the output\n>> of this query, please?\n>>\n>>     select name,setting,source from pg_settings where source not\n>> in ('default','override');\n>>\n> Running \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on\n> x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1\n> 20191008, 64-bit\"\n>\n> Updated the gist to include the results forom pg_settings. 
Here's\n> the direct link\n> https://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings\n> <https://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings>\n>\n> It looks like indexes currently chosen by the planner don't quite fit \n> your query.\n>\n> I would create the following index (if it's possible to update schema):\n>    ON \"uniswap_v2.Pair_evt_Mint\" (evt_tx_hash, evt_block_time)\nI'll try to add this.\n>\n> Same for the second table, looks like\n>   ON \"ethereum.transactions\" (hash, block_time)\n> is a better fit for your query. In fact, I do not think \n> `transactions_block_number_time` index is used frequently, 'cos second \n> column of the index is a partitioning key.\nI'll see if I can add it. This table is huge so normally we only make \nchanges to these when we redeploy the database.\n>\n> Currently planner wants to go via indexes 'cos you've made random \n> access really cheap compared to sequential one (and your findings \n> shows this).\n> Perhaps on a NVMe disks this could work, but in your case you need to \n> find the real bottleneck (therefore I asked for buffers).\n>\n> I would set `random_page_cost` to a 2.5 at least with your numbers. \n> Also, I would check DB and indexes for bloat (just a guess now, 'cos \n> your plans miss buffers figures)\n\nYeah, 1.1 seems way to low.\n\nHere's the output of the explain (analyze, buffers, settings) you asked for:\n\nvanilla: https://explain.depesz.com/s/Ktrd\n\nset enable_nestloop=off: https://explain.depesz.com/s/mvSD\n\nset enable_nestloop=off; set enable_seqscan=off: \nhttps://explain.depesz.com/s/XIDo\n\n\n>\n>\n> -- \n> Victor Yegorov\n\n\n\n\n\n\n\n\nOn 10/20/20 6:51 PM, Victor Yegorov\n wrote:\n\n\n\n\nвт, 20 окт. 2020 г. в 16:50, Mats Olsen <[email protected]>:\n\n\n\n\nOn 10/20/20 3:04 PM, Victor Yegorov wrote:\n\n\n\nвт, 20 окт. 2020 г. в 11:38, Mats\n Julian Olsen <[email protected]>:\n\n\n\n\nI'm looking for some help to manage queries\n against two large tables.\n\n\n\n\nCan you tell the version you're running\n currently and the output of this query, please?\n\n     select name,setting,source from pg_settings\n where source not in ('default','override'); \n\n\n\n\n\nRunning \"PostgreSQL 12.2 (Ubuntu 12.2-2.pgdg19.10+1) on\n x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n 9.2.1-9ubuntu2) 9.2.1 20191008, 64-bit\"\n\nUpdated the gist to include the results forom\n pg_settings. Here's the direct link\n https://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/e5deebbbb48680e04570bec4e9a816fa009da34f/pg_settings\n\n\n\n\nIt looks like indexes currently chosen by the planner don't\n quite fit your query.\n\n\nI would create the following index (if it's possible to\n update schema):\n    ON \"uniswap_v2.Pair_evt_Mint\" (evt_tx_hash, evt_block_time)\n\n\n\n I'll try to add this.\n\n\n\n\nSame for the second table, looks like\n   ON \"ethereum.transactions\" (hash, block_time)\n is a better fit for your query. In fact, I do not think\n `transactions_block_number_time` index is used frequently,\n 'cos second column of the index is a partitioning key.\n\n\n\n I'll see if I can add it. 
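For reference, a sketch of what those two suggestions look like as DDL; the
index name below is made up, and CONCURRENTLY is used so the append-only
writes are not blocked:

CREATE INDEX CONCURRENTLY pair_evt_mint_txhash_time_idx
  ON uniswap_v2.\"Pair_evt_Mint\" (evt_tx_hash, evt_block_time);

-- ethereum.transactions is partitioned; CREATE INDEX ... CONCURRENTLY cannot be
-- run on a partitioned parent in PostgreSQL 12, so it either has to be built
-- per partition and then attached, or created on the parent without CONCURRENTLY:
CREATE INDEX ON ethereum.transactions (hash, block_time);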
This table is huge so normally we only\n make changes to these when we redeploy the database.\n\n\n\n Currently planner wants to go via indexes 'cos you've made\n random access really cheap compared to sequential one (and\n your findings shows this).\n Perhaps on a NVMe disks this could work, but in your case you\n need to find the real bottleneck (therefore I asked for\n buffers).\n\n I would set `random_page_cost` to a 2.5 at least with your\n numbers. Also, I would check DB and indexes for bloat (just a\n guess now, 'cos your plans miss buffers figures)\n\n\nYeah, 1.1 seems way to low. \n\nHere's the output of the explain (analyze, buffers, settings) you\n asked for:\n\nvanilla: https://explain.depesz.com/s/Ktrd\nset enable_nestloop=off: https://explain.depesz.com/s/mvSD\n\nset enable_nestloop=off; set enable_seqscan=off:\n https://explain.depesz.com/s/XIDo\n\n\n\n\n\n\n -- \n\n\nVictor Yegorov", "msg_date": "Tue, 20 Oct 2020 19:43:27 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "Hi Mats,\r\n\r\nOn 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n[...]\r\n\r\n1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\r\n2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\r\n3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\r\n\r\nHow can I get Postgres not to loop over 12M rows?\r\n\r\nI looked at the plans and your config and there are some thoughts I'm having:\r\n\r\n- The row estimate is off, as you possibly noticed. This can be possibly solved by raising `default_statistics_target` to e.g. 2500 (we typically use that) and run ANALYZE\r\n\r\n- I however think that the misestimate might be caused by the evt_tx_hash being of type bytea. I believe that PG cannot estimate this very well for JOINs and will rather pick row numbers too low. Hence the nested loop is picked and there might be no way around this. I have experienced similar things when applying JOINs on VARCHAR with e.g. more than 3 fields for comparison.\r\n\r\n- Other things to look into:\r\n\r\n - work_mem seems too low to me with 56MB, consider raising this to the GB range to avoid disk-based operations\r\n - min_parallel_table_scan_size - try 0\r\n - parallel_setup_cost (default 1000, maybe try 500)\r\n - parallel_tuple_cost (default 1.0, maybe try 0.1)\r\n - random_page_cost (as mentioned consider raising this maybe much higher, factor 10 or sth like this) or (typically) seq_page_cost can be possibly much lower (0.1, 0.01) depending on your storage\r\n\r\nI hope this helps to get to a parallel plan without setting `nested_loop = off`. EXPLAIN should be enough already to see the difference.\r\n\r\nBest,\r\nSebastian\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 
120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n[cid:[email protected]]", "msg_date": "Wed, 21 Oct 2020 12:38:04 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n> Hi Mats,\n>\n>> On 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> [...]\n>>\n>> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR \n>> <https://explain.depesz.com/s/NvDR>\n>> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK \n>> <https://explain.depesz.com/s/buKK>\n>> 3) enable_nestloop=off; enable_seqscan=off (2 min): \n>> https://explain.depesz.com/s/0WXx <https://explain.depesz.com/s/0WXx>\n>>\n>> How can I get Postgres not to loop over 12M rows?\n>\n> I looked at the plans and your config and there are some thoughts I'm \n> having:\n>\n> - The row estimate is off, as you possibly noticed. This can be \n> possibly solved by raising `default_statistics_target` to e.g. 2500 \n> (we typically use that) and run ANALYZE\nI've `set default_statistics_target=2500` and ran analyze on both tables \ninvolved, unfortunately the plan is the same. The columns we use for \njoining here are hashes and we expect very few duplicates in the tables. \nHence I think extended statistics (storing most common values and \nhistogram bounds) aren't useful for this kind of data. Would you say the \nsame thing?\n>\n> - I however think that the misestimate might be caused by the \n> evt_tx_hash being of type bytea. I believe that PG cannot estimate \n> this very well for JOINs and will rather pick row numbers too low. \n> Hence the nested loop is picked and there might be no way around this. \n> I have experienced similar things when applying JOINs on VARCHAR with \n> e.g. more than 3 fields for comparison.\n\nThis is very interesting, and I have never heard of issues with using \n`bytea` for joins. Our entire database is filled with them, as we deal \nwith hashes of different lengths. In fact I would estimate that 60% of \ncolumns are bytea's. My intuition would say that it's better to store \nthe hashes as byte arrays, rather than `text` fields as you can compare \nthe raw bytes directly without encoding first?  Do you have any \nreferences for this?\n\nAlternatively, since I know the length of the hashes in advance, I \ncould've used `varchar(n)`, but I don't think there's any gains to be \nhad in postgres by doing that? Something like `bytea(n)` would also have \nbeen interesting, had postgres been able to exploit that information.\n\n\n> - Other things to look into:\n>\n>     - work_mem seems too low to me with 56MB, consider raising this to \n> the GB range to avoid disk-based operations\n>     - min_parallel_table_scan_size - try 0\n>     - parallel_setup_cost (default 1000, maybe try 500)\n>     - parallel_tuple_cost (default 1.0, maybe try 0.1)\n>     - random_page_cost (as mentioned consider raising this maybe much \n> higher, factor 10 or sth like this) or (typically) seq_page_cost can \n> be possibly much lower (0.1, 0.01) depending on your storage\n\nI've tried various settings of these parameters now, and unfortunately \nthe only parameter that alters the query plan is the last one \n(random_page_cost), which also has the side effect of (almost) forcing \nsequential scans for most queries as far as I understand? 
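(For reference, the session-level sweep discussed above looks roughly like
this; the values are the ones floated in the thread, not recommendations:

set work_mem = '1GB';
set min_parallel_table_scan_size = 0;
set parallel_setup_cost = 500;
set parallel_tuple_cost = 0.1;
set random_page_cost = 2.5;
-- a plain EXPLAIN of the join query is enough to see whether the strategy changes
reset all;

Plain SET only affects the current session, so this is safe to try without
touching postgresql.conf.)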
Our storage is \nGoogle Cloud pd-ssd.\n\nThank you so much for you response, I'm looking forward to keep the \ndiscussion going.\n\n>\n> I hope this helps to get to a parallel plan without setting \n> `nested_loop = off`. EXPLAIN should be enough already to see the \n> difference.\n>\n> Best,\n> Sebastian\n>\n> --\n>\n> Sebastian Dressler, Solution Architect\n> +49 30 994 0496 72 | [email protected] <mailto:[email protected]>\n>\n> Swarm64 AS\n> Parkveien 41 B | 0258 Oslo | Norway\n> Registered at Brønnøysundregistrene in Norway under Org.-Number 911 \n> 662 787\n> CEO/Geschäftsführer (Daglig Leder): Thomas \n> Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\n>\n> Swarm64 AS Zweigstelle Hive\n> Ullsteinstr. 120 | 12109 Berlin | Germany\n> Registered at Amtsgericht Charlottenburg - HRB 154382 B\n>\n\nBest,\n\nMats", "msg_date": "Wed, 21 Oct 2020 16:42:02 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Wed, Oct 21, 2020, 8:42 AM Mats Olsen <[email protected]> wrote:\n\n>\n> On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n>\n> Hi Mats,\n>\n> On 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected]>\n> wrote:\n>\n> [...]\n>\n> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> 3) enable_nestloop=off; enable_seqscan=off (2 min):\n> https://explain.depesz.com/s/0WXx\n>\n> How can I get Postgres not to loop over 12M rows?\n>\n>\n> I looked at the plans and your config and there are some thoughts I'm\n> having:\n>\n> - The row estimate is off, as you possibly noticed. This can be possibly\n> solved by raising `default_statistics_target` to e.g. 2500 (we typically\n> use that) and run ANALYZE\n>\n> I've `set default_statistics_target=2500` and ran analyze on both tables\n> involved, unfortunately the plan is the same. The columns we use for\n> joining here are hashes and we expect very few duplicates in the tables.\n> Hence I think extended statistics (storing most common values and histogram\n> bounds) aren't useful for this kind of data. Would you say the same thing?\n>\n\nHave you checked if ndistinct is roughly accurate? It can be set manually\non a column, or set to some value less than one with the calculation\ndepending on reltuples.\n\nOn Wed, Oct 21, 2020, 8:42 AM Mats Olsen <[email protected]> wrote:\n\n\n\nOn 10/21/20 2:38 PM, Sebastian Dressler\n wrote:\n\n\n \n Hi Mats,\n \n\n\nOn 20. Oct 2020, at 11:37, Mats Julian Olsen\n <[email protected]>\n wrote:\n\n\n\n[...]\n\n\n\n1) Vanilla plan (16 min) : \n https://explain.depesz.com/s/NvDR \n2) enable_nestloop=off (4 min): \n https://explain.depesz.com/s/buKK \n\n3) enable_nestloop=off;\n enable_seqscan=off (2 min): \n https://explain.depesz.com/s/0WXx  \n\n\nHow can I get Postgres not to loop over\n 12M rows?\n\n\n\n\n\n\n\n I looked at the plans and your config and there are some\n thoughts I'm having:\n\n\n- The row estimate is off, as you possibly noticed. This\n can be possibly solved by raising `default_statistics_target`\n to e.g. 2500 (we typically use that) and run ANALYZE\n\n\n I've `set default_statistics_target=2500` and ran analyze on both\n tables involved, unfortunately the plan is the same. The columns we\n use for joining here are hashes and we expect very few duplicates in\n the tables. 
Hence I think extended statistics (storing most common\n values and histogram bounds) aren't useful for this kind of data.\n Would you say the same thing?Have you checked if ndistinct is roughly accurate? It can be set manually on a column, or set to some value less than one with the calculation depending on reltuples.", "msg_date": "Wed, 21 Oct 2020 09:29:53 -0600", "msg_from": "Michael Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "Hi Mats,\r\n\r\nHappy to help.\r\n\r\nOn 21. Oct 2020, at 16:42, Mats Olsen <[email protected]<mailto:[email protected]>> wrote:\r\nOn 10/21/20 2:38 PM, Sebastian Dressler wrote:\r\nHi Mats,\r\n\r\nOn 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n[...]\r\n\r\n1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\r\n2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\r\n3) enable_nestloop=off; enable_seqscan=off (2 min): https://explain.depesz.com/s/0WXx\r\n\r\nHow can I get Postgres not to loop over 12M rows?\r\n\r\nI looked at the plans and your config and there are some thoughts I'm having:\r\n\r\n- The row estimate is off, as you possibly noticed. This can be possibly solved by raising `default_statistics_target` to e.g. 2500 (we typically use that) and run ANALYZE\r\nI've `set default_statistics_target=2500` and ran analyze on both tables involved, unfortunately the plan is the same. The columns we use for joining here are hashes and we expect very few duplicates in the tables. Hence I think extended statistics (storing most common values and histogram bounds) aren't useful for this kind of data. Would you say the same thing?\r\n\r\nYes, that looks like a given in this case.\r\n\r\n\r\n- I however think that the misestimate might be caused by the evt_tx_hash being of type bytea. I believe that PG cannot estimate this very well for JOINs and will rather pick row numbers too low. Hence the nested loop is picked and there might be no way around this. I have experienced similar things when applying JOINs on VARCHAR with e.g. more than 3 fields for comparison.\r\n\r\nThis is very interesting, and I have never heard of issues with using `bytea` for joins. Our entire database is filled with them, as we deal with hashes of different lengths. In fact I would estimate that 60% of columns are bytea's. My intuition would say that it's better to store the hashes as byte arrays, rather than `text` fields as you can compare the raw bytes directly without encoding first? Do you have any references for this?\r\n\r\nUnfortunately, I have not dealt yet with `bytea` that much. It just rang a bell when I saw these kind of off-estimates in combination with nested loops. In the case I referenced it was, that the tables had 3 VARCHAR columns to be joined on and the estimate was very much off. As a result, PG chose nested loops in the upper layers of processing. Due to another JOIN the estimate went down to 1 row whereas it was 1 million rows in reality. Now, yours is \"only\" a factor 5 away, i.e. this might be a totally different reason.\r\n\r\nHowever, I looked into the plan once more and realized, that the source of the problem could also be the scan on \"Pair_evt_Mint\" along the date dimension. Although you have a stats target of 10k there. 
If the timestamp is (roughly) sorted, you could try adding a BRIN index and by that maybe get a better estimate & scan-time.\r\n\r\nAlternatively, since I know the length of the hashes in advance, I could've used `varchar(n)`, but I don't think there's any gains to be had in postgres by doing that? Something like `bytea(n)` would also have been interesting, had postgres been able to exploit that information.\r\n\r\nI think giving VARCHAR a shot makes sense, maybe on an experimental basis to see whether the estimates get better. Maybe PG can then estimate that there are (almost) no dupes within the table but that there are N-many across tables. Another option to explore is maybe to use UUID as a type. As said above, it more looks like the timestamp causing the mis-estimate.\r\n\r\nMaybe try querying this table by itself with that timestamp to see what kind of estimate you get?\r\n\r\n- Other things to look into:\r\n\r\n - work_mem seems too low to me with 56MB, consider raising this to the GB range to avoid disk-based operations\r\n - min_parallel_table_scan_size - try 0\r\n - parallel_setup_cost (default 1000, maybe try 500)\r\n - parallel_tuple_cost (default 1.0, maybe try 0.1)\r\n - random_page_cost (as mentioned consider raising this maybe much higher, factor 10 or sth like this) or (typically) seq_page_cost can be possibly much lower (0.1, 0.01) depending on your storage\r\n\r\nI've tried various settings of these parameters now, and unfortunately the only parameter that alters the query plan is the last one (random_page_cost), which also has the side effect of (almost) forcing sequential scans for most queries as far as I understand? Our storage is Google Cloud pd-ssd.\r\n\r\nI think a combination of random_page_cost with parallel_tuple_cost and min_parallel_table_scan_size might make sense. By that you possibly get at least parallel sequential scans. But I understand that this is possibly having the same effect as using `enable_nestloop = off`.\r\n\r\nThank you so much for you response, I'm looking forward to keep the discussion going.\r\n\r\nYou're very welcome.\r\n\r\nBest,\r\nSebastian\r\n\r\n--\r\n\r\nSebastian Dressler, Solution Architect\r\n+49 30 994 0496 72 | [email protected]<mailto:[email protected]>\r\n\r\nSwarm64 AS\r\nParkveien 41 B | 0258 Oslo | Norway\r\nRegistered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787\r\nCEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck\r\n\r\nSwarm64 AS Zweigstelle Hive\r\nUllsteinstr. 120 | 12109 Berlin | Germany\r\nRegistered at Amtsgericht Charlottenburg - HRB 154382 B\r\n\r\n[cid:[email protected]]", "msg_date": "Wed, 21 Oct 2020 15:35:07 +0000", "msg_from": "Sebastian Dressler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/21/20 5:29 PM, Michael Lewis wrote:\n>\n>\n> On Wed, Oct 21, 2020, 8:42 AM Mats Olsen <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>\n> On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n>> Hi Mats,\n>>\n>>> On 20. 
Oct 2020, at 11:37, Mats Julian Olsen\n>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>\n>>> [...]\n>>>\n>>> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>>> <https://explain.depesz.com/s/NvDR>\n>>> 2) enable_nestloop=off (4 min):\n>>> https://explain.depesz.com/s/buKK\n>>> <https://explain.depesz.com/s/buKK>\n>>> 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>>> https://explain.depesz.com/s/0WXx\n>>> <https://explain.depesz.com/s/0WXx>\n>>>\n>>> How can I get Postgres not to loop over 12M rows?\n>>\n>> I looked at the plans and your config and there are some thoughts\n>> I'm having:\n>>\n>> - The row estimate is off, as you possibly noticed. This can be\n>> possibly solved by raising `default_statistics_target` to e.g.\n>> 2500 (we typically use that) and run ANALYZE\n> I've `set default_statistics_target=2500` and ran analyze on both\n> tables involved, unfortunately the plan is the same. The columns\n> we use for joining here are hashes and we expect very few\n> duplicates in the tables. Hence I think extended statistics\n> (storing most common values and histogram bounds) aren't useful\n> for this kind of data. Would you say the same thing?\n>\n>\n> Have you checked if ndistinct is roughly accurate? It can be set \n> manually on a column, or set to some value less than one with the \n> calculation depending on reltuples.\nThank you for your reply!\n\nI included ndistinct-counts in the gist: see \nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/24ca1f227940b48842a03435b731f82364f3576d/stats%2520Mint \nand \nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/24ca1f227940b48842a03435b731f82364f3576d/stats%2520transactions.\n\nThe join keys `transactions.hash` (unique) and \n`\"Pair_evt_Mint\".evt_tx_hash` (nearly unique) both have ndistinct=-1 \nwhich seems to make sense to me. The Mint-table has -0.8375 for \nevt_block_time whereas this query returns 0.56 `select count(distinct \nevt_block_time)::numeric/count(*) from uniswap_v2.\"Pair_evt_Mint\";`. \nShould I adjust that one?\n\nMany of the other ndistinct-values for `transactions` seem strange, as \nit's a giant (partitioned) table, but I don't know enough about the \nstatistics to draw any conclusions from it. What do you think?\n\n\n\n\n\n\n\n\n\nOn 10/21/20 5:29 PM, Michael Lewis\n wrote:\n\n\n\n\n\n\n\nOn Wed, Oct 21, 2020, 8:42\n AM Mats Olsen <[email protected]>\n wrote:\n\n\n\n\n\nOn 10/21/20 2:38 PM, Sebastian Dressler wrote:\n\n Hi Mats,\n \n\n\nOn 20. Oct 2020, at 11:37, Mats Julian\n Olsen <[email protected]>\n wrote:\n\n\n\n[...]\n\n\n\n1) Vanilla plan (16 min) : \n https://explain.depesz.com/s/NvDR\n\n2) enable_nestloop=off (4 min): \n https://explain.depesz.com/s/buKK\n\n\n3) enable_nestloop=off;\n enable_seqscan=off (2 min): \n https://explain.depesz.com/s/0WXx \n \n\n\nHow can I get Postgres not to loop\n over 12M rows?\n\n\n\n\n\n\n\n I looked at the plans and your config and there\n are some thoughts I'm having:\n\n\n- The row estimate is off, as you possibly\n noticed. This can be possibly solved by raising\n `default_statistics_target` to e.g. 2500 (we\n typically use that) and run ANALYZE\n\n\n I've `set default_statistics_target=2500` and ran\n analyze on both tables involved, unfortunately the plan\n is the same. 
The columns we use for joining here are\n hashes and we expect very few duplicates in the tables.\n Hence I think extended statistics (storing most common\n values and histogram bounds) aren't useful for this kind\n of data. Would you say the same thing?\n\n\n\n\n\n\nHave you checked if ndistinct is roughly\n accurate? It can be set manually on a column, or set to some\n value less than one with the calculation depending on\n reltuples.\n\n\n Thank you for your reply!\nI included ndistinct-counts in the gist: see\nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/24ca1f227940b48842a03435b731f82364f3576d/stats%2520Mint\n and\nhttps://gist.githubusercontent.com/mewwts/9f11ae5e6a5951593b8999559f5418cf/raw/24ca1f227940b48842a03435b731f82364f3576d/stats%2520transactions.\nThe join keys `transactions.hash` (unique) and\n `\"Pair_evt_Mint\".evt_tx_hash` (nearly unique) both have\n ndistinct=-1 which seems to make sense to me. The Mint-table has\n -0.8375 for evt_block_time whereas this query returns 0.56 `select\n count(distinct evt_block_time)::numeric/count(*) from\n uniswap_v2.\"Pair_evt_Mint\";`. Should I adjust that one?\n\nMany of the other ndistinct-values for `transactions` seem\n strange, as it's a giant (partitioned) table, but I don't know\n enough about the statistics to draw any conclusions from it. What\n do you think?", "msg_date": "Thu, 22 Oct 2020 08:21:46 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Wed, Oct 21, 2020 at 04:42:02PM +0200, Mats Olsen wrote:\n> On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n> > > On 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected]\n> > > \n> > > [...]\n> > > \n> > > 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n> > > <https://explain.depesz.com/s/NvDR>\n> > > 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n> > > <https://explain.depesz.com/s/buKK>\n> > > 3) enable_nestloop=off; enable_seqscan=off (2 min):\n> > > https://explain.depesz.com/s/0WXx\n> > > <https://explain.depesz.com/s/0WXx>\n> > > \n> > > How can I get Postgres not to loop over 12M rows?\n> > \n> > I looked at the plans and your config and there are some thoughts I'm\n> > having:\n> > \n> > - The row estimate is off, as you possibly noticed. This can be possibly\n> > solved by raising `default_statistics_target` to e.g. 2500 (we typically\n> > use that) and run ANALYZE\n> I've `set default_statistics_target=2500` and ran analyze on both tables\n> involved, unfortunately the plan is the same. The columns we use for joining\n> here are hashes and we expect very few duplicates in the tables. Hence I\n> think extended statistics (storing most common values and histogram bounds)\n> aren't useful for this kind of data. 
Would you say the same thing?\n\nIn postgres, extended statistics means \"MV stats objects\", not MCV+histogram,\nwhich are \"simple statistics\", like ndistinct.\n\nYour indexes maybe aren't ideal for this query, as mentioned.\nThe indexes that do exist might also be inefficient, due to being unclustered,\nor bloated, or due to multiple columns.\n\nThese look redundant (which doesn't matter for this the query):\n\nPartition key: RANGE (block_number)\nIndexes:\n \"transactions_block_number_btree\" btree (block_number DESC)\n \"transactions_block_number_hash_key\" UNIQUE CONSTRAINT, btree (block_number, hash)\n \"transactions_block_number_time\" btree (hash, block_number)\n\nMaybe that would be an index just on \"hash\", which might help here.\n\nPossibly you'd want to try to use a BRIN index on timestamp (or maybe\nblock_number?).\n\nMaybe you'd want to VACUUM the table to allow index-only scan on the hash\ncolumns ?\n\nMaybe you'd want to check if reindexing reduces the index size ? We don't know\nif the table gets lots of UPDATE/DELETE or if any of the columns have high\nlogical vs physical \"correlation\".\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n\nHave you ANALYZED the partitioned parent recently ?\nThis isn't handled by autovacuum.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Oct 2020 01:37:01 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/22/20 8:37 AM, Justin Pryzby wrote:\n> On Wed, Oct 21, 2020 at 04:42:02PM +0200, Mats Olsen wrote:\n>> On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n>>>> On 20. Oct 2020, at 11:37, Mats Julian Olsen <[email protected]\n>>>>\n>>>> [...]\n>>>>\n>>>> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR\n>>>> <https://explain.depesz.com/s/NvDR>\n>>>> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK\n>>>> <https://explain.depesz.com/s/buKK>\n>>>> 3) enable_nestloop=off; enable_seqscan=off (2 min):\n>>>> https://explain.depesz.com/s/0WXx\n>>>> <https://explain.depesz.com/s/0WXx>\n>>>>\n>>>> How can I get Postgres not to loop over 12M rows?\n>>> I looked at the plans and your config and there are some thoughts I'm\n>>> having:\n>>>\n>>> - The row estimate is off, as you possibly noticed. This can be possibly\n>>> solved by raising `default_statistics_target` to e.g. 2500 (we typically\n>>> use that) and run ANALYZE\n>> I've `set default_statistics_target=2500` and ran analyze on both tables\n>> involved, unfortunately the plan is the same. The columns we use for joining\n>> here are hashes and we expect very few duplicates in the tables. Hence I\n>> think extended statistics (storing most common values and histogram bounds)\n>> aren't useful for this kind of data. Would you say the same thing?\n> In postgres, extended statistics means \"MV stats objects\", not MCV+histogram,\n> which are \"simple statistics\", like ndistinct.\n>\n> Your indexes maybe aren't ideal for this query, as mentioned.\n> The indexes that do exist might also be inefficient, due to being unclustered,\n> or bloated, or due to multiple columns.\n\nThis table is append-only, i.e. no updates. 
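(Two of the cheap suggestions above, a BRIN index and analyzing the
partitioned parent, look roughly like this as a sketch; the index name is
made up:

CREATE INDEX CONCURRENTLY pair_evt_mint_block_time_brin
  ON uniswap_v2.\"Pair_evt_Mint\" USING brin (evt_block_time);

ANALYZE ethereum.transactions;  -- the partitioned parent, not covered by autovacuum

A BRIN index only stores per-block-range summaries, so it stays tiny and works
best exactly when the column is physically correlated with insert order, as
the correlation figures above suggest.)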
The partitions are clustered \non a btree index on block_time `\n\n\"transactions_p500000_block_time_idx\" btree (block_time) CLUSTER\n\n>\n> These look redundant (which doesn't matter for this the query):\n>\n> Partition key: RANGE (block_number)\n> Indexes:\n> \"transactions_block_number_btree\" btree (block_number DESC)\n> \"transactions_block_number_hash_key\" UNIQUE CONSTRAINT, btree (block_number, hash)\n> \"transactions_block_number_time\" btree (hash, block_number)\n>\n> Maybe that would be an index just on \"hash\", which might help here.\n>\n> Possibly you'd want to try to use a BRIN index on timestamp (or maybe\n> block_number?).\n\nYeah this could be a good idea, but the size of this table doesn't let \nme add any indexes while it's online. I'll revisit these the next time \nwe redeploy the database.\n\n>\n> Maybe you'd want to VACUUM the table to allow index-only scan on the hash\n> columns ?\n>\n> Maybe you'd want to check if reindexing reduces the index size ? We don't know\n> if the table gets lots of UPDATE/DELETE or if any of the columns have high\n> logical vs physical \"correlation\".\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions#Statistics:_n_distinct.2C_MCV.2C_histogram\n>\n> Have you ANALYZED the partitioned parent recently ?\n> This isn't handled by autovacuum.\n\nAs mentioned above there aren't any updates or deletes to this table. \nBoth tables have been ANALYZEd. I ran that query and the output is here \nhttps://gist.github.com/mewwts/86ef43ff82120e104a654cd7fbb5ec06. I ran \nit for the two specific columns and all partitions for the transactions \ntable, and for all columns on \"Pair_evt_Mint\". Does these values tell \nyou anything?\n\n\n\n", "msg_date": "Thu, 22 Oct 2020 09:36:03 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On Thu, Oct 22, 2020 at 09:36:03AM +0200, Mats Olsen wrote:\n> On 10/22/20 8:37 AM, Justin Pryzby wrote:\n> > These look redundant (which doesn't matter for this the query):\n> > \n> > Partition key: RANGE (block_number)\n> > Indexes:\n> > \"transactions_block_number_btree\" btree (block_number DESC)\n> > \"transactions_block_number_hash_key\" UNIQUE CONSTRAINT, btree (block_number, hash)\n> > \"transactions_block_number_time\" btree (hash, block_number)\n> > \n> > Maybe that would be an index just on \"hash\", which might help here.\n> > \n> > Possibly you'd want to try to use a BRIN index on timestamp (or maybe\n> > block_number?).\n> \n> Yeah this could be a good idea, but the size of this table doesn't let me\n> add any indexes while it's online. I'll revisit these the next time we\n> redeploy the database.\n\nWhy not CREATE INDEX CONCURRENTLY ?\nIt seems to me you could add BRIN on all correlated indexes. It's nearly free.\n\n 0.102922715 | Pair_evt_Mint | evt_block_time | f | 0 | -0.56466025 | 10000 | 10001 | 0.964666\n 0.06872191 | Pair_evt_Mint | evt_block_time | f | 0 | -0.8379525 | 500 | 501 | 0.99982\n 0.06872191 | Pair_evt_Mint | evt_block_number | f | 0 | -0.8379525 | 500 | 501 | 0.99982\n 0.032878816 | Pair_evt_Mint | evt_block_number | f | 0 | -0.56466025 | 2500 | 2501 | 0.964666\n\n> > Maybe you'd want to VACUUM the table to allow index-only scan on the hash\n> > columns ?\n\nDid you try it ? 
I think this could be a big win.\nSince it's append-only, autovacuum won't hit it (until you upgrade to pg13).\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 22 Oct 2020 08:48:26 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "Thanks for your response Justin.\n\nOn 10/22/20 3:48 PM, Justin Pryzby wrote:\n> On Thu, Oct 22, 2020 at 09:36:03AM +0200, Mats Olsen wrote:\n>> On 10/22/20 8:37 AM, Justin Pryzby wrote:\n>>> These look redundant (which doesn't matter for this the query):\n>>>\n>>> Partition key: RANGE (block_number)\n>>> Indexes:\n>>> \"transactions_block_number_btree\" btree (block_number DESC)\n>>> \"transactions_block_number_hash_key\" UNIQUE CONSTRAINT, btree (block_number, hash)\n>>> \"transactions_block_number_time\" btree (hash, block_number)\n>>>\n>>> Maybe that would be an index just on \"hash\", which might help here.\n>>>\n>>> Possibly you'd want to try to use a BRIN index on timestamp (or maybe\n>>> block_number?).\n>> Yeah this could be a good idea, but the size of this table doesn't let me\n>> add any indexes while it's online. I'll revisit these the next time we\n>> redeploy the database.\n> Why not CREATE INDEX CONCURRENTLY ?\nWe could, but it would take forever on the `ethereum.transactions` table.\n> It seems to me you could add BRIN on all correlated indexes. It's nearly free.\n>\n> 0.102922715 | Pair_evt_Mint | evt_block_time | f | 0 | -0.56466025 | 10000 | 10001 | 0.964666\n> 0.06872191 | Pair_evt_Mint | evt_block_time | f | 0 | -0.8379525 | 500 | 501 | 0.99982\n> 0.06872191 | Pair_evt_Mint | evt_block_number | f | 0 | -0.8379525 | 500 | 501 | 0.99982\n> 0.032878816 | Pair_evt_Mint | evt_block_number | f | 0 | -0.56466025 | 2500 | 2501 | 0.964666\nAgreed, could try to add BRIN's on these.\n>\n>>> Maybe you'd want to VACUUM the table to allow index-only scan on the hash\n>>> columns ?\n> Did you try it ? I think this could be a big win.\n> Since it's append-only, autovacuum won't hit it (until you upgrade to pg13).\n\nI vacuumed the uniswap_v2.\"Pair_evt_Mint\", but still getting the same \nplan, unfortunately.\n\n\n>\n\n\n", "msg_date": "Fri, 23 Oct 2020 14:16:35 +0200", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" }, { "msg_contents": "On 10/21/20 5:35 PM, Sebastian Dressler wrote:\n> Hi Mats,\n>\n> Happy to help.\n>\n>> On 21. Oct 2020, at 16:42, Mats Olsen <[email protected] \n>> <mailto:[email protected]>> wrote:\n>> On 10/21/20 2:38 PM, Sebastian Dressler wrote:\n>>> Hi Mats,\n>>>\n>>>> On 20. Oct 2020, at 11:37, Mats Julian Olsen \n>>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>>\n>>>> [...]\n>>>>\n>>>> 1) Vanilla plan (16 min) : https://explain.depesz.com/s/NvDR \n>>>> <https://explain.depesz.com/s/NvDR>\n>>>> 2) enable_nestloop=off (4 min): https://explain.depesz.com/s/buKK \n>>>> <https://explain.depesz.com/s/buKK>\n>>>> 3) enable_nestloop=off; enable_seqscan=off (2 min): \n>>>> https://explain.depesz.com/s/0WXx <https://explain.depesz.com/s/0WXx>\n>>>>\n>>>> How can I get Postgres not to loop over 12M rows?\n>>>\n>>> I looked at the plans and your config and there are some thoughts \n>>> I'm having:\n>>>\n>>> - The row estimate is off, as you possibly noticed. This can be \n>>> possibly solved by raising `default_statistics_target` to e.g. 
2500 \n>>> (we typically use that) and run ANALYZE\n>> I've `set default_statistics_target=2500` and ran analyze on both \n>> tables involved, unfortunately the plan is the same. The columns we \n>> use for joining here are hashes and we expect very few duplicates in \n>> the tables. Hence I think extended statistics (storing most common \n>> values and histogram bounds) aren't useful for this kind of data. \n>> Would you say the same thing?\n>\n> Yes, that looks like a given in this case.\n>\n>>>\n>>> - I however think that the misestimate might be caused by the \n>>> evt_tx_hash being of type bytea. I believe that PG cannot estimate \n>>> this very well for JOINs and will rather pick row numbers too low. \n>>> Hence the nested loop is picked and there might be no way around \n>>> this. I have experienced similar things when applying JOINs on \n>>> VARCHAR with e.g. more than 3 fields for comparison.\n>>\n>> This is very interesting, and I have never heard of issues with using \n>> `bytea` for joins. Our entire database is filled with them, as we \n>> deal with hashes of different lengths. In fact I would estimate that \n>> 60% of columns are bytea's. My intuition would say that it's better \n>> to store the hashes as byte arrays, rather than `text` fields as you \n>> can compare the raw bytes directly without encoding first?  Do you \n>> have any references for this?\n>>\n> Unfortunately, I have not dealt yet with `bytea` that much. It just \n> rang a bell when I saw these kind of off-estimates in combination with \n> nested loops. In the case I referenced it was, that the tables had 3 \n> VARCHAR columns to be joined on and the estimate was very much off. As \n> a result, PG chose nested loops in the upper layers of processing. Due \n> to another JOIN the estimate went down to 1 row whereas it was 1 \n> million rows in reality. Now, yours is \"only\" a factor 5 away, i.e. \n> this might be a totally different reason.\n>\n> However, I looked into the plan once more and realized, that the \n> source of the problem could also be the scan on \"Pair_evt_Mint\" along \n> the date dimension. Although you have a stats target of 10k there. If \n> the timestamp is (roughly) sorted, you could try adding a BRIN index \n> and by that maybe get a better estimate & scan-time.\nHi again, after around 48 hours a CREATE INDEX CONCURRENTLY ran \nsuccessfully. The new plan still uses a nested loop, but the scan on \n\"Pair_evt_Mint\" is now a Parallel index scan. See \nhttps://explain.depesz.com/s/8ZzT\n>>\n>> Alternatively, since I know the length of the hashes in advance, I \n>> could've used `varchar(n)`, but I don't think there's any gains to be \n>> had in postgres by doing that? Something like `bytea(n)` would also \n>> have been interesting, had postgres been able to exploit that \n>> information.\n>>\n> I think giving VARCHAR a shot makes sense, maybe on an experimental \n> basis to see whether the estimates get better. Maybe PG can then \n> estimate that there are (almost) no dupes within the table but that \n> there are N-many across tables. Another option to explore is maybe to \n> use UUID as a type. 
As said above, it more looks like the timestamp \n> causing the mis-estimate.\n>\n> Maybe try querying this table by itself with that timestamp to see \n> what kind of estimate you get?\n>\n>>> - Other things to look into:\n>>>\n>>>     - work_mem seems too low to me with 56MB, consider raising this \n>>> to the GB range to avoid disk-based operations\n>>>     - min_parallel_table_scan_size - try 0\n>>>     - parallel_setup_cost (default 1000, maybe try 500)\n>>>     - parallel_tuple_cost (default 1.0, maybe try 0.1)\n>>>     - random_page_cost (as mentioned consider raising this maybe \n>>> much higher, factor 10 or sth like this) or (typically) \n>>> seq_page_cost can be possibly much lower (0.1, 0.01) depending on \n>>> your storage\n>>\n>> I've tried various settings of these parameters now, and \n>> unfortunately the only parameter that alters the query plan is the \n>> last one (random_page_cost), which also has the side effect of \n>> (almost) forcing sequential scans for most queries as far as I \n>> understand? Our storage is Google Cloud pd-ssd.\n>>\n> I think a combination of random_page_cost with parallel_tuple_cost and \n> min_parallel_table_scan_size might make sense. By that you possibly \n> get at least parallel sequential scans. But I understand that this is \n> possibly having the same effect as using `enable_nestloop = off`.\n\nI'll have a closer look at these parameters.\n\nAgain, thank you.\n\nMats\n\n\n\n\n\n\n\n\n\nOn 10/21/20 5:35 PM, Sebastian Dressler\n wrote:\n\n\n\n Hi Mats,\n \n\nHappy to help.\n\n\nOn 21. Oct 2020, at 16:42, Mats Olsen <[email protected]>\n wrote:\n\n\nOn 10/21/20 2:38 PM,\n Sebastian Dressler wrote:\n\n\n Hi Mats,\n \n\n\nOn 20. Oct 2020, at 11:37, Mats\n Julian Olsen <[email protected]>\n wrote:\n\n\n\n[...]\n\n\n\n1) Vanilla plan (16 min) : \n https://explain.depesz.com/s/NvDR\n\n2) enable_nestloop=off (4\n min): \n https://explain.depesz.com/s/buKK\n\n\n3) enable_nestloop=off;\n enable_seqscan=off (2 min): \n https://explain.depesz.com/s/0WXx \n \n\n\nHow can I get Postgres not\n to loop over 12M rows?\n\n\n\n\n\n\n\n I looked at the plans and your config and there\n are some thoughts I'm having:\n\n\n- The row estimate is off, as you\n possibly noticed. This can be possibly solved by\n raising `default_statistics_target` to e.g. 2500\n (we typically use that) and run ANALYZE\n\n\n I've `set default_statistics_target=2500` and ran\n analyze on both tables involved, unfortunately the plan\n is the same. The columns we use for joining here are\n hashes and we expect very few duplicates in the tables.\n Hence I think extended statistics (storing most common\n values and histogram bounds) aren't useful for this kind\n of data. Would you say the same thing?\n\n\n\n\n\nYes, that looks like a given in this case.\n\n\n\n\n\n\n\n\n- I however think that the misestimate\n might be caused by the evt_tx_hash being of type\n bytea. I believe that PG cannot estimate this very\n well for JOINs and will rather pick row numbers\n too low. Hence the nested loop is picked and there\n might be no way around this. I have experienced\n similar things when applying JOINs on VARCHAR with\n e.g. more than 3 fields for comparison.\n\n\nThis is very interesting, and I have never\n heard of issues with using `bytea` for joins. Our\n entire database is filled with them, as we deal with\n hashes of different lengths. In fact I would estimate\n that 60% of columns are bytea's. 
My intuition would\n say that it's better to store the hashes as byte\n arrays, rather than `text` fields as you can compare\n the raw bytes directly without encoding first?  Do you\n have any references for this?\n\n\n\n\nUnfortunately, I have not dealt yet with `bytea` that\n much. It just rang a bell when I saw these kind of\n off-estimates in combination with nested loops. In the case\n I referenced it was, that the tables had 3 VARCHAR columns\n to be joined on and the estimate was very much off. As a\n result, PG chose nested loops in the upper layers of\n processing. Due to another JOIN the estimate went down to 1\n row whereas it was 1 million rows in reality. Now, yours is\n \"only\" a factor 5 away, i.e. this might be a totally\n different reason.\n\n\nHowever, I looked into the plan once more and realized,\n that the source of the problem could also be the scan on\n \"Pair_evt_Mint\" along the date dimension. Although you have\n a stats target of 10k there. If the timestamp is (roughly)\n sorted, you could try adding a BRIN index and by that maybe\n get a better estimate & scan-time.\n\n\n\n Hi again, after around 48 hours a CREATE INDEX CONCURRENTLY ran\n successfully. The new plan still uses a nested loop, but the scan on\n \"Pair_evt_Mint\" is now a Parallel index scan. See\n https://explain.depesz.com/s/8ZzT\n\n\n\n\n\n\nAlternatively, since I know the length of\n the hashes in advance, I could've used `varchar(n)`,\n but I don't think there's any gains to be had in\n postgres by doing that? Something like `bytea(n)`\n would also have been interesting, had postgres been\n able to exploit that information.\n\n\n\nI think giving VARCHAR a shot makes sense, maybe on an\n experimental basis to see whether the estimates get better.\n Maybe PG can then estimate that there are (almost) no dupes\n within the table but that there are N-many across tables.\n Another option to explore is maybe to use UUID as a type. As\n said above, it more looks like the timestamp causing the\n mis-estimate.\n\n\nMaybe try querying this table by itself with that\n timestamp to see what kind of estimate you get?\n\n\n\n\n\n\n\n- Other things to look into:\n\n\n    - work_mem seems too low to me\n with 56MB, consider raising this to the GB range\n to avoid disk-based operations\n    - min_parallel_table_scan_size -\n try 0\n\n    - parallel_setup_cost (default\n 1000, maybe try 500)\n    - parallel_tuple_cost (default\n 1.0, maybe try 0.1)\n\n    - random_page_cost (as mentioned\n consider raising this maybe much higher, factor 10\n or sth like this) or (typically) seq_page_cost can\n be possibly much lower (0.1, 0.01) depending on\n your storage\n\n\nI've tried various settings of these\n parameters now, and unfortunately the only parameter\n that alters the query plan is the last one\n (random_page_cost), which also has the side effect of\n (almost) forcing sequential scans for most queries as\n far as I understand? Our storage is Google Cloud\n pd-ssd.\n\n\n\n I think a combination of random_page_cost with\n parallel_tuple_cost and min_parallel_table_scan_size might\n make sense. By that you possibly get at least parallel\n sequential scans. But I understand that this is possibly\n having the same effect as using `enable_nestloop = off`.\n\n\nI'll have a closer look at these parameters.\nAgain, thank you.\nMats", "msg_date": "Wed, 28 Oct 2020 07:45:10 +0100", "msg_from": "Mats Olsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance / Planner estimate off" } ]
[ { "msg_contents": "Hi, I have long running query which running for long time and its planner always performing sequnce scan the table2.My gole is to reduce Read IO on the disk cause, this query runns more oftenly ( using this in funtion for ETL). \n\ntable1: transfer_order_header(records 2782678)table2: transfer_order_item ( records: 15995697)here is the query:\n\nset work_mem = '688552kB';explain (analyze,buffers)select     COALESCE(itm.serialnumber,'') AS SERIAL_NO,             COALESCE(itm.ITEM_SKU,'') AS SKU,             COALESCE(itm.receivingplant,'') AS RECEIVINGPLANT,  COALESCE(itm.STO_ID,'') AS STO, supplyingplant,            COALESCE(itm.deliveryitem,'') AS DELIVERYITEM,     min(eventtime) as eventtime  FROM sor_t.transfer_order_header hed,sor_t.transfer_order_item itm  where hed.eventid=itm.eventid group by 1,2,3,4,5,6\n\nQuery Planner[2]:\n\n\"Finalize GroupAggregate (cost=1930380.06..4063262.11 rows=16004137 width=172) (actual time=56050.500..83268.566 rows=15891873 loops=1)\"\" Group Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\" Buffers: shared hit=712191 read=3, temp read=38232 written=38233\"\" -> Gather Merge (cost=1930380.06..3669827.09 rows=13336780 width=172) (actual time=56050.488..77106.993 rows=15948520 loops=1)\"\" Workers Planned: 2\"\" Workers Launched: 2\"\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\" -> Partial GroupAggregate (cost=1929380.04..2129431.74 rows=6668390 width=172) (actual time=50031.458..54888.828 rows=5316173 loops=3)\"\" Group Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\" -> Sort (cost=1929380.04..1946051.01 rows=6668390 width=172) (actual time=50031.446..52823.352 rows=5332010 loops=3)\"\" Sort Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\" Sort Method: external merge Disk: 305856kB\"\" Worker 0: Sort Method: external merge Disk: 436816kB\"\" Worker 1: Sort Method: external merge Disk: 400048kB\"\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\" -> Parallel Hash Join (cost=133229.66..603743.97 rows=6668390 width=172) (actual time=762.925..3901.133 rows=5332010 loops=3)\"\" Hash Cond: ((itm.eventid)::text = (hed.eventid)::text)\"\" Buffers: shared hit=2213027 read=12\"\" -> Parallel Seq Scan on transfer_order_item itm (cost=0.00..417722.90 rows=6668390 width=68) (actual time=0.005..524.359 rows=5332010 loops=3)\"\" Buffers: shared hit=351039\"\" -> Parallel Hash (cost=118545.68..118545.68 rows=1174718 width=35) (actual time=755.590..755.590 rows=926782 loops=3)\"\" Buckets: 4194304 Batches: 1 Memory Usage: 243808kB\"\" Buffers: shared hit=1861964 read=12\"\" -> Parallel Index Only Scan using transfer_order_header_eventid_supplyingplant_eventtime_idx1 on transfer_order_header hed (cost=0.56..118545.68 rows=1174718 
width=35) (actual time=0.128..388.436 rows=926782 loops=3)\"\" Heap Fetches: 18322\"\" Buffers: shared hit=1861964 read=12\"\"Planning Time: 1.068 ms\"\"Execution Time: 84274.004 ms\"\n\nTables[1]  created ddls in dbfiddle.\n\n\n\nPG Server:  PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit.RAM: 456Mem Settings: \"maintenance_work_mem\" \"8563712\" \"kB\"\n\"work_mem\" \"688552\"         \"kB\"\n\"wal_buffers\"                         \"2048\"              \"8kB\"\n\"shared_buffers\"                 \"44388442\"     \"8kB\"\n\n\nAny suggestions would greatly appretiated. \n\n\nThanks,Rj\n\n\n\nHi, I have long running query which running for long time and its planner always performing sequnce scan the table2.My gole is to reduce Read IO on the disk cause, this query runns more oftenly ( using this in funtion for ETL). table1: transfer_order_header(records 2782678)table2: transfer_order_item ( records: 15995697)here is the query:set work_mem = '688552kB';explain (analyze,buffers)select     COALESCE(itm.serialnumber,'') AS SERIAL_NO,             COALESCE(itm.ITEM_SKU,'') AS SKU,             COALESCE(itm.receivingplant,'') AS RECEIVINGPLANT,  COALESCE(itm.STO_ID,'') AS STO, supplyingplant,            COALESCE(itm.deliveryitem,'') AS DELIVERYITEM,     min(eventtime) as eventtime  FROM sor_t.transfer_order_header hed,sor_t.transfer_order_item itm  where hed.eventid=itm.eventid group by 1,2,3,4,5,6Query Planner[2]:\"Finalize GroupAggregate (cost=1930380.06..4063262.11 rows=16004137 width=172) (actual time=56050.500..83268.566 rows=15891873 loops=1)\"\n\" Group Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\n\" Buffers: shared hit=712191 read=3, temp read=38232 written=38233\"\n\" -> Gather Merge (cost=1930380.06..3669827.09 rows=13336780 width=172) (actual time=56050.488..77106.993 rows=15948520 loops=1)\"\n\" Workers Planned: 2\"\n\" Workers Launched: 2\"\n\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\n\" -> Partial GroupAggregate (cost=1929380.04..2129431.74 rows=6668390 width=172) (actual time=50031.458..54888.828 rows=5316173 loops=3)\"\n\" Group Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\n\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\n\" -> Sort (cost=1929380.04..1946051.01 rows=6668390 width=172) (actual time=50031.446..52823.352 rows=5332010 loops=3)\"\n\" Sort Key: (COALESCE(itm.serialnumber, ''::character varying)), (COALESCE(itm.item_sku, ''::character varying)), (COALESCE(itm.receivingplant, ''::character varying)), (COALESCE(itm.sto_id, ''::character varying)), hed.supplyingplant, (COALESCE(itm.deliveryitem, ''::character varying))\"\n\" Sort Method: external merge Disk: 305856kB\"\n\" Worker 0: Sort Method: external merge Disk: 436816kB\"\n\" Worker 1: Sort Method: external merge Disk: 400048kB\"\n\" Buffers: shared hit=2213081 read=12, temp read=142840 written=142843\"\n\" -> Parallel Hash Join (cost=133229.66..603743.97 rows=6668390 width=172) (actual time=762.925..3901.133 rows=5332010 loops=3)\"\n\" Hash Cond: ((itm.eventid)::text 
= (hed.eventid)::text)\"\n\" Buffers: shared hit=2213027 read=12\"\n\" -> Parallel Seq Scan on transfer_order_item itm (cost=0.00..417722.90 rows=6668390 width=68) (actual time=0.005..524.359 rows=5332010 loops=3)\"\n\" Buffers: shared hit=351039\"\n\" -> Parallel Hash (cost=118545.68..118545.68 rows=1174718 width=35) (actual time=755.590..755.590 rows=926782 loops=3)\"\n\" Buckets: 4194304 Batches: 1 Memory Usage: 243808kB\"\n\" Buffers: shared hit=1861964 read=12\"\n\" -> Parallel Index Only Scan using transfer_order_header_eventid_supplyingplant_eventtime_idx1 on transfer_order_header hed (cost=0.56..118545.68 rows=1174718 width=35) (actual time=0.128..388.436 rows=926782 loops=3)\"\n\" Heap Fetches: 18322\"\n\" Buffers: shared hit=1861964 read=12\"\n\"Planning Time: 1.068 ms\"\n\"Execution Time: 84274.004 ms\"Tables[1]  created ddls in dbfiddle.PG Server:  PostgreSQL 11.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit.RAM: 456Mem Settings: \"maintenance_work_mem\" \"8563712\" \"kB\"\"work_mem\" \"688552\"\t        \"kB\"\"wal_buffers\"\t                        \"2048\"              \"8kB\"\"shared_buffers\"\t                \"44388442\"     \"8kB\"Any suggestions would greatly appretiated. Thanks,Rj", "msg_date": "Thu, 22 Oct 2020 00:32:29 +0000 (UTC)", "msg_from": "Nagaraj Raj <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance" }, { "msg_contents": "On Thu, Oct 22, 2020 at 12:32:29AM +0000, Nagaraj Raj wrote:\n> Hi, I have long running query which running for long time and its planner always performing sequnce scan the table2.My gole is to reduce Read IO on the disk cause, this query runns�more oftenly�( using this in funtion for ETL).�\n> \n> table1:�transfer_order_header(records�2782678)table2:�transfer_order_item ( records: 15995697)here is the query:\n> \n> set work_mem = '688552kB';explain (analyze,buffers)select� � �COALESCE(itm.serialnumber,'') AS SERIAL_NO,�� � � � � � COALESCE(itm.ITEM_SKU,'') AS SKU,�� � � � � � COALESCE(itm.receivingplant,'') AS RECEIVINGPLANT,� COALESCE(itm.STO_ID,'') AS STO, supplyingplant,� � � � � � COALESCE(itm.deliveryitem,'') AS DELIVERYITEM, � � min(eventtime) as eventtime��FROM sor_t.transfer_order_header hed,sor_t.transfer_order_item itm��where hed.eventid=itm.eventid group by 1,2,3,4,5,6\n\nIt spends most its time writing tempfiles for sorting, so it (still) seems to\nbe starved for work_mem.\n|Sort (cost=1929380.04..1946051.01 rows=6668390 width=172) (actual time=50031.446..52823.352 rows=5332010 loops=3)\n\nFirst, can you get a better plan with 2GB work_mem or with enable_sort=off ? \n\nIf so, maybe you could make it less expensive by moving all the coalesce()\ninto a subquery, like\n| SELECT COALESCE(a,''), COALESCE(b,''), .. FROM (SELECT a,b, .. 
GROUP BY 1,2,..)x;\n\nOr, if you have a faster disks available, use them for temp_tablespace.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 21 Oct 2020 20:09:07 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" }, { "msg_contents": "On Wed, Oct 21, 2020 at 5:32 PM Nagaraj Raj <[email protected]> wrote:\n\n> Hi, I have long running query which running for long time and its planner\n> always performing sequnce scan the table2.\n>\n\n FROM sor_t.transfer_order_header hed,sor_t.transfer_order_item itm\n> where hed.eventid=itm.eventid group by 1,2,3,4,5,6\n>\n> Any suggestions would greatly appretiated.\n>\n\nYou aren't filtering out any rows so it is unsurprising that a sequential\nscan was chosen to fulfil the request that the entire detail table be\nconsulted. The good news is you have access to parallelism - see if you\ncan increase that factor.\n\nAny other suggestions probably requires more knowledge of your problem\ndomain than you've provided here.\n\nFinding a way to add a where clause or compute your desired result during\nrecord insertion or updating are two other potential avenues of\nconsideration.\n\nDavid J.\n\nOn Wed, Oct 21, 2020 at 5:32 PM Nagaraj Raj <[email protected]> wrote:Hi, I have long running query which running for long time and its planner always performing sequnce scan the table2. FROM sor_t.transfer_order_header hed,sor_t.transfer_order_item itm  where hed.eventid=itm.eventid group by 1,2,3,4,5,6Any suggestions would greatly appretiated. You aren't filtering out any rows so it is unsurprising that a sequential scan was chosen to fulfil the request that the entire detail table be consulted.  The good news is you have access to parallelism - see if you can increase that factor.Any other suggestions probably requires more knowledge of your problem domain than you've provided here.Finding a way to add a where clause or compute your desired result during record insertion or updating are two other potential avenues of consideration.David J.", "msg_date": "Wed, 21 Oct 2020 18:11:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance" } ]
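As a follow-up to the suggestions in this thread, here is a hedged sketch of the proposed rewrite, with COALESCE applied outside the GROUP BY, together with the memory and temp-space settings that were mentioned. Table and column names are taken from the original query; qualifying eventtime with hed is an assumption based on the index-only scan shown in the plan, and the work_mem value is only a starting point sized against the roughly 300-440 MB per-worker sort spills reported there. Note that the rewrite is not strictly equivalent if a grouped column contains both NULL and the empty string, because the original query collapses those into one group.

    SET work_mem = '2GB';                   -- try to keep the per-worker sorts in memory
    -- SET temp_tablespaces = 'fast_temp';  -- only if a tablespace on faster disks exists

    SELECT COALESCE(x.serialnumber, '')    AS serial_no,
           COALESCE(x.item_sku, '')        AS sku,
           COALESCE(x.receivingplant, '')  AS receivingplant,
           COALESCE(x.sto_id, '')          AS sto,
           x.supplyingplant,
           COALESCE(x.deliveryitem, '')    AS deliveryitem,
           x.eventtime
    FROM (
        SELECT itm.serialnumber, itm.item_sku, itm.receivingplant, itm.sto_id,
               hed.supplyingplant, itm.deliveryitem,
               min(hed.eventtime) AS eventtime
        FROM sor_t.transfer_order_header hed
        JOIN sor_t.transfer_order_item  itm ON hed.eventid = itm.eventid
        GROUP BY 1, 2, 3, 4, 5, 6
    ) x;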