[ { "msg_contents": "Dear Team,\n\nI am accessing PostgreSQL database objects using Microsoft-Access ODBC connection and tables are loaded but while opening table getting the below error msg.\n\n\nERROR :: ODBC--call failed :: Bindings were not allocated properly. (#15) and\n\n\n\n[cid:[email protected]]\n\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Wed, 29 Nov 2017 14:47:32 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "ODBC--call failed :: Bindings were not allocated properly" } ]
[ { "msg_contents": "Hello everyone,\n\nI am having a strange performance issue concerning the creation of a\nmaterialized view in Postgres 9.6.\n\nI have a somewhat complex query, that takes about two minutes to fully run\nand that I want to run often, therefore I want to create a materialized\nview of this query to speed things up.\n\nBut when I try to create the materialized I get hours of processing just to\neventually crash at a not enough memory for temporary files error.\n\nAt first I thought my indexes were fucked up, but I as said early the\nselection query itself don't take more than two minutes and the indexes are\nworking fine.\n\nI also tried to change the temporary files directory to a bigger (slower)\nhard disk. After four hours of processing the temporary files were summing\n600 gb of memory (about twenty times the size of my whole database) and I\nhad to send a stop sign.\n\nToday I just tried to create a normal table and everything fine under 3\nminutes of processing time.\n\nThe selection query goes as:\n\nselect c.ano, c.mes, a.carreira_id as id,\navg(r.rem_bruta) as salmed, median(r.rem_bruta) as selmediana,\nstddev_pop(r.rem_bruta) as salsd, avg(r.indenizacao_total) as indmed,\nmedian(r.indenizacao_total) as indmediana, stddev_pop(r.indenizacao_total)\nas indsd from servidores.cad c left join servidores.cargo a on c.cargo_id =\na.id join servidores.rem r on c.ano = r.ano and c.mes = r.mes and c.rem_id\n= r.id group by c.ano, c.mes, a.carreira_id);\n\nHello everyone,I am having a strange performance issue concerning the creation of a materialized view in Postgres 9.6.I have a somewhat complex query, that takes about two minutes to fully run and that I want to run often, therefore I want to create a materialized view of this query to speed things up.But when I try to create the materialized I get hours of processing just to eventually crash at a not enough memory for temporary files error. At first I thought my indexes were fucked up, but I as said early the selection query itself don't take more than two minutes and the indexes are working fine.I also tried to change the temporary files directory to a bigger (slower) hard disk. After four hours of processing the temporary files were summing 600 gb of memory (about twenty times the size of my whole database) and I had to send a stop sign.Today I just tried to create a normal table and everything fine under 3 minutes of processing time.The selection query goes as:select c.ano, c.mes, a.carreira_id as id, avg(r.rem_bruta) as salmed, median(r.rem_bruta) as selmediana, stddev_pop(r.rem_bruta) as salsd, avg(r.indenizacao_total) as indmed, median(r.indenizacao_total) as indmediana, stddev_pop(r.indenizacao_total) as indsd from servidores.cad c left join servidores.cargo a on c.cargo_id = a.id join servidores.rem r on c.ano = r.ano and c.mes = r.mes and c.rem_id = r.id group by c.ano, c.mes, a.carreira_id);", "msg_date": "Thu, 30 Nov 2017 10:07:21 -0200", "msg_from": "=?UTF-8?Q?Caio_Guimar=C3=A3es_Figueiredo?= <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE vs CREATE MATERIALIZED VIEW" } ]
[ { "msg_contents": "Hello. I want to remove rows from first table, that exist in second\n(equality is done using PK). However I experience seq scan on second table,\nwhich counters my intuition - I think it should be index-only. Because\ntables are large, performance of query is very bad.\n\nHowever I got mixed results when trying to reproduce this behavior on\nsyntetic tables. Here I'll show 3 different plans, which I got for the same\nquery.\n\n1. Setup is:\n---------------------------\ncreate table diff (id uuid constraint diff_pkey primary key);\ncreate table origin (id uuid constraint origin_pkey primary key);\n---------------------------\n\nThe query generates correct plan, which performs only index scans:\n\nexplain delete from origin where exists (select id from diff where origin.id\n= diff.id);\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Delete on origin (cost=0.30..105.56 rows=1850 width=12)\n -> Merge Semi Join (cost=0.30..105.56 rows=1850 width=12)\n Merge Cond: (origin.id = diff.id)\n -> Index Scan using origin_pkey on origin (cost=0.15..38.90\nrows=1850 width=22)\n -> Index Scan using diff_pkey on diff (cost=0.15..38.90\nrows=1850 width=22)\n(5 rows)\n\n2. Setup is:\n--------------------------------\ncreate table origin (id uuid constraint origin_pkey primary key, data\njsonb);\ncreate table diff (id uuid constraint diff_pkey primary key, data jsonb);\n--------------------------------\n\nThe query generates plan with two seq scans:\n\nexplain delete from origin where exists (select id from diff where origin.id\n= diff.id);\n QUERY PLAN\n---------------------------------------------------------------------------\n Delete on origin (cost=34.08..69.49 rows=1070 width=12)\n -> Hash Semi Join (cost=34.08..69.49 rows=1070 width=12)\n Hash Cond: (origin.id = diff.id)\n -> Seq Scan on origin (cost=0.00..20.70 rows=1070 width=22)\n -> Hash (cost=20.70..20.70 rows=1070 width=22)\n -> Seq Scan on diff (cost=0.00..20.70 rows=1070 width=22)\n(6 rows)\n\n3. My real `origin` table has 26 fields and 800 billion rows, real `diff`\ntable has 12 million rows and the query generates plan with nested loop and\nseq scan on `diff` table:\n\nexplain delete from drug_refills origin where exists (select id from\ndrug_refills_diff diff where origin.id = diff.id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Delete on drug_refills origin (cost=0.57..22049570.11 rows=11975161\nwidth=12)\n -> Nested Loop (cost=0.57..22049570.11 rows=11975161 width=12)\n -> Seq Scan on drug_refills_diff diff (cost=0.00..720405.61\nrows=11975161 width=22)\n -> Index Scan using drug_refills_pkey on drug_refills origin\n(cost=0.57..1.77 rows=1 width=22)\n Index Cond: (id = diff.id)\n(5 rows)\n\nI have run ANALYZE on both tables, but it didn't help. Here are column\ntypes in origin and diff (same schema), if that matters:\n\nuuid\ntimestamp with time zone\ntimestamp with time zone\ncharacter varying(255)\ncharacter varying(255)\ncharacter varying(1024)\nnumeric(10,4)\ninteger\nnumeric(14,8)\nnumeric(14,8)\nnumeric(14,8)\nnumeric(14,8)\nnumeric(14,8)\ncharacter varying(16)\ncharacter varying(16)\ncharacter varying(16)\ncharacter varying(16)\ncharacter varying(16)\ncharacter varying(16)\ndate\njsonb\ntext[]\nuuid\nuuid\nuuid\nuuid\n\nHello. I want to remove rows from first table, that exist in second (equality is done using PK). 
However I experience seq scan on second table, which counters my intuition - I think it should be index-only. Because tables are large, performance of query is very bad.However I got mixed results when trying to reproduce this behavior on syntetic tables. Here I'll show 3 different plans, which I got for the same query.1. Setup is:---------------------------create table diff (id uuid constraint diff_pkey primary key);create table origin (id uuid constraint origin_pkey primary key);---------------------------The query generates correct plan, which performs only index scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                        QUERY PLAN------------------------------------------------------------------------------------------- Delete on origin  (cost=0.30..105.56 rows=1850 width=12)   ->  Merge Semi Join  (cost=0.30..105.56 rows=1850 width=12)         Merge Cond: (origin.id = diff.id)         ->  Index Scan using origin_pkey on origin  (cost=0.15..38.90 rows=1850 width=22)         ->  Index Scan using diff_pkey on diff  (cost=0.15..38.90 rows=1850 width=22)(5 rows)2. Setup is:--------------------------------create table origin (id uuid constraint origin_pkey primary key, data jsonb);create table diff (id uuid constraint diff_pkey primary key, data jsonb);--------------------------------The query generates plan with two seq scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                QUERY PLAN--------------------------------------------------------------------------- Delete on origin  (cost=34.08..69.49 rows=1070 width=12)   ->  Hash Semi Join  (cost=34.08..69.49 rows=1070 width=12)         Hash Cond: (origin.id = diff.id)         ->  Seq Scan on origin  (cost=0.00..20.70 rows=1070 width=22)         ->  Hash  (cost=20.70..20.70 rows=1070 width=22)               ->  Seq Scan on diff  (cost=0.00..20.70 rows=1070 width=22)(6 rows)3. My real `origin` table has 26 fields and 800 billion rows, real `diff` table has 12 million rows and the query generates plan with nested loop and seq scan on `diff` table:explain delete from drug_refills origin where exists (select id from drug_refills_diff diff where origin.id = diff.id);                                                QUERY PLAN---------------------------------------------------------------------------------------------------------- Delete on drug_refills origin  (cost=0.57..22049570.11 rows=11975161 width=12)   ->  Nested Loop  (cost=0.57..22049570.11 rows=11975161 width=12)         ->  Seq Scan on drug_refills_diff diff  (cost=0.00..720405.61 rows=11975161 width=22)         ->  Index Scan using drug_refills_pkey on drug_refills origin  (cost=0.57..1.77 rows=1 width=22)               Index Cond: (id = diff.id)(5 rows)I have run ANALYZE on both tables, but it didn't help. 
Here are column types in origin and diff (same schema), if that matters:uuidtimestamp with time zone timestamp with time zone character varying(255)   character varying(255)   character varying(1024)  numeric(10,4)            integer                  numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    date                     jsonb                    text[]                   uuid                     uuid                     uuid                     uuid", "msg_date": "Fri, 1 Dec 2017 15:03:35 +0200", "msg_from": "Danylo Hlynskyi <[email protected]>", "msg_from_op": true, "msg_subject": "Delete tables difference involves seq scan" }, { "msg_contents": "Oh, sorry, this happens on Postgresql 9.6.6. I've checked that on\nPostgresql 10.0 query plan from setup (1)\nnow uses two seq scans, like in setup (2).\n\n\n2017-12-01 15:03 GMT+02:00 Danylo Hlynskyi <[email protected]>:\n\n> Hello. I want to remove rows from first table, that exist in second\n> (equality is done using PK). However I experience seq scan on second table,\n> which counters my intuition - I think it should be index-only. Because\n> tables are large, performance of query is very bad.\n>\n> However I got mixed results when trying to reproduce this behavior on\n> syntetic tables. Here I'll show 3 different plans, which I got for the same\n> query.\n>\n> 1. Setup is:\n> ---------------------------\n> create table diff (id uuid constraint diff_pkey primary key);\n> create table origin (id uuid constraint origin_pkey primary key);\n> ---------------------------\n>\n> The query generates correct plan, which performs only index scans:\n>\n> explain delete from origin where exists (select id from diff where\n> origin.id = diff.id);\n> QUERY PLAN\n> ------------------------------------------------------------\n> -------------------------------\n> Delete on origin (cost=0.30..105.56 rows=1850 width=12)\n> -> Merge Semi Join (cost=0.30..105.56 rows=1850 width=12)\n> Merge Cond: (origin.id = diff.id)\n> -> Index Scan using origin_pkey on origin (cost=0.15..38.90\n> rows=1850 width=22)\n> -> Index Scan using diff_pkey on diff (cost=0.15..38.90\n> rows=1850 width=22)\n> (5 rows)\n>\n> 2. Setup is:\n> --------------------------------\n> create table origin (id uuid constraint origin_pkey primary key, data\n> jsonb);\n> create table diff (id uuid constraint diff_pkey primary key, data jsonb);\n> --------------------------------\n>\n> The query generates plan with two seq scans:\n>\n> explain delete from origin where exists (select id from diff where\n> origin.id = diff.id);\n> QUERY PLAN\n> ------------------------------------------------------------\n> ---------------\n> Delete on origin (cost=34.08..69.49 rows=1070 width=12)\n> -> Hash Semi Join (cost=34.08..69.49 rows=1070 width=12)\n> Hash Cond: (origin.id = diff.id)\n> -> Seq Scan on origin (cost=0.00..20.70 rows=1070 width=22)\n> -> Hash (cost=20.70..20.70 rows=1070 width=22)\n> -> Seq Scan on diff (cost=0.00..20.70 rows=1070 width=22)\n> (6 rows)\n>\n> 3. 
My real `origin` table has 26 fields and 800 billion rows, real `diff`\n> table has 12 million rows and the query generates plan with nested loop and\n> seq scan on `diff` table:\n>\n> explain delete from drug_refills origin where exists (select id from\n> drug_refills_diff diff where origin.id = diff.id);\n> QUERY PLAN\n> ------------------------------------------------------------\n> ----------------------------------------------\n> Delete on drug_refills origin (cost=0.57..22049570.11 rows=11975161\n> width=12)\n> -> Nested Loop (cost=0.57..22049570.11 rows=11975161 width=12)\n> -> Seq Scan on drug_refills_diff diff (cost=0.00..720405.61\n> rows=11975161 width=22)\n> -> Index Scan using drug_refills_pkey on drug_refills origin\n> (cost=0.57..1.77 rows=1 width=22)\n> Index Cond: (id = diff.id)\n> (5 rows)\n>\n> I have run ANALYZE on both tables, but it didn't help. Here are column\n> types in origin and diff (same schema), if that matters:\n>\n> uuid\n> timestamp with time zone\n> timestamp with time zone\n> character varying(255)\n> character varying(255)\n> character varying(1024)\n> numeric(10,4)\n> integer\n> numeric(14,8)\n> numeric(14,8)\n> numeric(14,8)\n> numeric(14,8)\n> numeric(14,8)\n> character varying(16)\n> character varying(16)\n> character varying(16)\n> character varying(16)\n> character varying(16)\n> character varying(16)\n> date\n> jsonb\n> text[]\n> uuid\n> uuid\n> uuid\n> uuid\n>\n>\n\nOh, sorry, this happens on Postgresql 9.6.6. I've checked that on Postgresql 10.0 query plan from setup (1) now uses two seq scans, like in setup (2).2017-12-01 15:03 GMT+02:00 Danylo Hlynskyi <[email protected]>:Hello. I want to remove rows from first table, that exist in second (equality is done using PK). However I experience seq scan on second table, which counters my intuition - I think it should be index-only. Because tables are large, performance of query is very bad.However I got mixed results when trying to reproduce this behavior on syntetic tables. Here I'll show 3 different plans, which I got for the same query.1. Setup is:---------------------------create table diff (id uuid constraint diff_pkey primary key);create table origin (id uuid constraint origin_pkey primary key);---------------------------The query generates correct plan, which performs only index scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                        QUERY PLAN------------------------------------------------------------------------------------------- Delete on origin  (cost=0.30..105.56 rows=1850 width=12)   ->  Merge Semi Join  (cost=0.30..105.56 rows=1850 width=12)         Merge Cond: (origin.id = diff.id)         ->  Index Scan using origin_pkey on origin  (cost=0.15..38.90 rows=1850 width=22)         ->  Index Scan using diff_pkey on diff  (cost=0.15..38.90 rows=1850 width=22)(5 rows)2. 
Setup is:--------------------------------create table origin (id uuid constraint origin_pkey primary key, data jsonb);create table diff (id uuid constraint diff_pkey primary key, data jsonb);--------------------------------The query generates plan with two seq scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                QUERY PLAN--------------------------------------------------------------------------- Delete on origin  (cost=34.08..69.49 rows=1070 width=12)   ->  Hash Semi Join  (cost=34.08..69.49 rows=1070 width=12)         Hash Cond: (origin.id = diff.id)         ->  Seq Scan on origin  (cost=0.00..20.70 rows=1070 width=22)         ->  Hash  (cost=20.70..20.70 rows=1070 width=22)               ->  Seq Scan on diff  (cost=0.00..20.70 rows=1070 width=22)(6 rows)3. My real `origin` table has 26 fields and 800 billion rows, real `diff` table has 12 million rows and the query generates plan with nested loop and seq scan on `diff` table:explain delete from drug_refills origin where exists (select id from drug_refills_diff diff where origin.id = diff.id);                                                QUERY PLAN---------------------------------------------------------------------------------------------------------- Delete on drug_refills origin  (cost=0.57..22049570.11 rows=11975161 width=12)   ->  Nested Loop  (cost=0.57..22049570.11 rows=11975161 width=12)         ->  Seq Scan on drug_refills_diff diff  (cost=0.00..720405.61 rows=11975161 width=22)         ->  Index Scan using drug_refills_pkey on drug_refills origin  (cost=0.57..1.77 rows=1 width=22)               Index Cond: (id = diff.id)(5 rows)I have run ANALYZE on both tables, but it didn't help. Here are column types in origin and diff (same schema), if that matters:uuidtimestamp with time zone timestamp with time zone character varying(255)   character varying(255)   character varying(1024)  numeric(10,4)            integer                  numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    date                     jsonb                    text[]                   uuid                     uuid                     uuid                     uuid", "msg_date": "Fri, 1 Dec 2017 15:17:30 +0200", "msg_from": "Danylo Hlynskyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete tables difference involves seq scan" }, { "msg_contents": "I was able to speedup original query a lot by using CTE. 
It still uses seq\nscan on `diff` table, but looks like it does this once:\n\nexplain\nwith\n diff as (select id from drug_refills_diff)\ndelete from drug_refills\nwhere id in (select id from diff);\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------\n Delete on drug_refills (cost=989844.94..990366.86 rows=456888836 width=46)\n CTE diff\n -> Seq Scan on drug_refills_diff (cost=0.00..720404.88 rows=11975088\nwidth=16)\n -> Nested Loop (cost=269440.05..269961.98 rows=456888836 width=46)\n -> HashAggregate (cost=269439.48..269441.48 rows=200 width=56)\n Group Key: diff.id\n -> CTE Scan on diff (cost=0.00..239501.76 rows=11975088\nwidth=56)\n -> Index Scan using drug_refills_pkey on drug_refills\n(cost=0.57..2.59 rows=1 width=22)\n Index Cond: (id = diff.id)\n(9 rows)\n\n\n\n2017-12-01 15:17 GMT+02:00 Danylo Hlynskyi <[email protected]>:\n\n> Oh, sorry, this happens on Postgresql 9.6.6. I've checked that on\n> Postgresql 10.0 query plan from setup (1)\n> now uses two seq scans, like in setup (2).\n>\n>\n> 2017-12-01 15:03 GMT+02:00 Danylo Hlynskyi <[email protected]>:\n>\n>> Hello. I want to remove rows from first table, that exist in second\n>> (equality is done using PK). However I experience seq scan on second table,\n>> which counters my intuition - I think it should be index-only. Because\n>> tables are large, performance of query is very bad.\n>>\n>> However I got mixed results when trying to reproduce this behavior on\n>> syntetic tables. Here I'll show 3 different plans, which I got for the same\n>> query.\n>>\n>> 1. Setup is:\n>> ---------------------------\n>> create table diff (id uuid constraint diff_pkey primary key);\n>> create table origin (id uuid constraint origin_pkey primary key);\n>> ---------------------------\n>>\n>> The query generates correct plan, which performs only index scans:\n>>\n>> explain delete from origin where exists (select id from diff where\n>> origin.id = diff.id);\n>> QUERY PLAN\n>> ------------------------------------------------------------\n>> -------------------------------\n>> Delete on origin (cost=0.30..105.56 rows=1850 width=12)\n>> -> Merge Semi Join (cost=0.30..105.56 rows=1850 width=12)\n>> Merge Cond: (origin.id = diff.id)\n>> -> Index Scan using origin_pkey on origin (cost=0.15..38.90\n>> rows=1850 width=22)\n>> -> Index Scan using diff_pkey on diff (cost=0.15..38.90\n>> rows=1850 width=22)\n>> (5 rows)\n>>\n>> 2. Setup is:\n>> --------------------------------\n>> create table origin (id uuid constraint origin_pkey primary key, data\n>> jsonb);\n>> create table diff (id uuid constraint diff_pkey primary key, data jsonb);\n>> --------------------------------\n>>\n>> The query generates plan with two seq scans:\n>>\n>> explain delete from origin where exists (select id from diff where\n>> origin.id = diff.id);\n>> QUERY PLAN\n>> ------------------------------------------------------------\n>> ---------------\n>> Delete on origin (cost=34.08..69.49 rows=1070 width=12)\n>> -> Hash Semi Join (cost=34.08..69.49 rows=1070 width=12)\n>> Hash Cond: (origin.id = diff.id)\n>> -> Seq Scan on origin (cost=0.00..20.70 rows=1070 width=22)\n>> -> Hash (cost=20.70..20.70 rows=1070 width=22)\n>> -> Seq Scan on diff (cost=0.00..20.70 rows=1070 width=22)\n>> (6 rows)\n>>\n>> 3. 
My real `origin` table has 26 fields and 800 billion rows, real `diff`\n>> table has 12 million rows and the query generates plan with nested loop and\n>> seq scan on `diff` table:\n>>\n>> explain delete from drug_refills origin where exists (select id from\n>> drug_refills_diff diff where origin.id = diff.id);\n>> QUERY PLAN\n>> ------------------------------------------------------------\n>> ----------------------------------------------\n>> Delete on drug_refills origin (cost=0.57..22049570.11 rows=11975161\n>> width=12)\n>> -> Nested Loop (cost=0.57..22049570.11 rows=11975161 width=12)\n>> -> Seq Scan on drug_refills_diff diff (cost=0.00..720405.61\n>> rows=11975161 width=22)\n>> -> Index Scan using drug_refills_pkey on drug_refills origin\n>> (cost=0.57..1.77 rows=1 width=22)\n>> Index Cond: (id = diff.id)\n>> (5 rows)\n>>\n>> I have run ANALYZE on both tables, but it didn't help. Here are column\n>> types in origin and diff (same schema), if that matters:\n>>\n>> uuid\n>> timestamp with time zone\n>> timestamp with time zone\n>> character varying(255)\n>> character varying(255)\n>> character varying(1024)\n>> numeric(10,4)\n>> integer\n>> numeric(14,8)\n>> numeric(14,8)\n>> numeric(14,8)\n>> numeric(14,8)\n>> numeric(14,8)\n>> character varying(16)\n>> character varying(16)\n>> character varying(16)\n>> character varying(16)\n>> character varying(16)\n>> character varying(16)\n>> date\n>> jsonb\n>> text[]\n>> uuid\n>> uuid\n>> uuid\n>> uuid\n>>\n>>\n>\n\nI was able to speedup original query a lot by using CTE. It still uses seq scan on `diff` table, but looks like it does this once:explainwith      diff as (select id from drug_refills_diff)delete from drug_refillswhere id in (select id from diff);                                            QUERY PLAN                                             --------------------------------------------------------------------------------------------------- Delete on drug_refills  (cost=989844.94..990366.86 rows=456888836 width=46)   CTE diff     ->  Seq Scan on drug_refills_diff  (cost=0.00..720404.88 rows=11975088 width=16)   ->  Nested Loop  (cost=269440.05..269961.98 rows=456888836 width=46)         ->  HashAggregate  (cost=269439.48..269441.48 rows=200 width=56)               Group Key: diff.id               ->  CTE Scan on diff  (cost=0.00..239501.76 rows=11975088 width=56)         ->  Index Scan using drug_refills_pkey on drug_refills  (cost=0.57..2.59 rows=1 width=22)               Index Cond: (id = diff.id)(9 rows)2017-12-01 15:17 GMT+02:00 Danylo Hlynskyi <[email protected]>:Oh, sorry, this happens on Postgresql 9.6.6. I've checked that on Postgresql 10.0 query plan from setup (1) now uses two seq scans, like in setup (2).2017-12-01 15:03 GMT+02:00 Danylo Hlynskyi <[email protected]>:Hello. I want to remove rows from first table, that exist in second (equality is done using PK). However I experience seq scan on second table, which counters my intuition - I think it should be index-only. Because tables are large, performance of query is very bad.However I got mixed results when trying to reproduce this behavior on syntetic tables. Here I'll show 3 different plans, which I got for the same query.1. 
Setup is:---------------------------create table diff (id uuid constraint diff_pkey primary key);create table origin (id uuid constraint origin_pkey primary key);---------------------------The query generates correct plan, which performs only index scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                        QUERY PLAN------------------------------------------------------------------------------------------- Delete on origin  (cost=0.30..105.56 rows=1850 width=12)   ->  Merge Semi Join  (cost=0.30..105.56 rows=1850 width=12)         Merge Cond: (origin.id = diff.id)         ->  Index Scan using origin_pkey on origin  (cost=0.15..38.90 rows=1850 width=22)         ->  Index Scan using diff_pkey on diff  (cost=0.15..38.90 rows=1850 width=22)(5 rows)2. Setup is:--------------------------------create table origin (id uuid constraint origin_pkey primary key, data jsonb);create table diff (id uuid constraint diff_pkey primary key, data jsonb);--------------------------------The query generates plan with two seq scans:explain delete from origin where exists (select id from diff where origin.id = diff.id);                                QUERY PLAN--------------------------------------------------------------------------- Delete on origin  (cost=34.08..69.49 rows=1070 width=12)   ->  Hash Semi Join  (cost=34.08..69.49 rows=1070 width=12)         Hash Cond: (origin.id = diff.id)         ->  Seq Scan on origin  (cost=0.00..20.70 rows=1070 width=22)         ->  Hash  (cost=20.70..20.70 rows=1070 width=22)               ->  Seq Scan on diff  (cost=0.00..20.70 rows=1070 width=22)(6 rows)3. My real `origin` table has 26 fields and 800 billion rows, real `diff` table has 12 million rows and the query generates plan with nested loop and seq scan on `diff` table:explain delete from drug_refills origin where exists (select id from drug_refills_diff diff where origin.id = diff.id);                                                QUERY PLAN---------------------------------------------------------------------------------------------------------- Delete on drug_refills origin  (cost=0.57..22049570.11 rows=11975161 width=12)   ->  Nested Loop  (cost=0.57..22049570.11 rows=11975161 width=12)         ->  Seq Scan on drug_refills_diff diff  (cost=0.00..720405.61 rows=11975161 width=22)         ->  Index Scan using drug_refills_pkey on drug_refills origin  (cost=0.57..1.77 rows=1 width=22)               Index Cond: (id = diff.id)(5 rows)I have run ANALYZE on both tables, but it didn't help. Here are column types in origin and diff (same schema), if that matters:uuidtimestamp with time zone timestamp with time zone character varying(255)   character varying(255)   character varying(1024)  numeric(10,4)            integer                  numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            numeric(14,8)            character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    character varying(16)    date                     jsonb                    text[]                   uuid                     uuid                     uuid                     uuid", "msg_date": "Fri, 1 Dec 2017 16:52:54 +0200", "msg_from": "Danylo Hlynskyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Delete tables difference involves seq scan" } ]
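For reference, the rewrite from the last message, reduced to the synthetic origin/diff tables from setup 2 (a sketch; on 9.6 the CTE acts as an optimization fence, so it is materialized once and the diff table is scanned exactly once), plus a join-style DELETE ... USING form that could be worth comparing (an alternative not benchmarked in this thread):

-- CTE form from the last message, applied to the synthetic tables:
with diff_ids as (
    select id from diff
)
delete from origin
where id in (select id from diff_ids);

-- Join form for comparison; PostgreSQL supports DELETE ... USING:
delete from origin o
using diff d
where o.id = d.id;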
[ { "msg_contents": "Hi,\n\n\nI have a problem on 9.3.14 with a query that accesses table:\n\nSize: (retrieved by query https://gist.github.com/romank0/74f9d1d807bd3f41c0729d0fc6126749)\n\n schemaname | relname | size | toast | associated_idx_size | total\n------------+---------------+--------+--------+---------------------+---------\n public | document_head | 275 MB | 630 MB | 439 MB | 1345 MB\n\n\nDefinition:\n Table \"public.document_head\"\n Column | Type | Modifiers\n-----------------------------+--------------------------+-------------------------------------\n snapshot_id | character varying(36) | not null\n id | character varying(36) | not null\n base_type | character varying(50) | not null\n is_cascade | boolean | not null default false\n parent_id | character varying(36) |\n fileplan_node_id | character varying(36) |\n state | character varying(10) | default 'ACTIVE'::character varying\n title | character varying(4096) | not null\n properties | text | not null\n properties_cache | hstore | not null\n serial_number | integer | not null\n major_version | integer | not null\n minor_version | integer | not null\n version_description | text |\n sensitivity | integer | not null default 10\n id_path | ltree |\n path_name | character varying(4096) | collate C not null\n ltx_id | bigint | not null\n created_by | integer | not null\n created_date | timestamp with time zone | not null\n modified_by | integer | not null\n modified_date | timestamp with time zone | not null\n responsible_user_ids | integer[] |\n origin_id | character varying(36) |\n origin_snapshot_id | character varying(36) |\n ssn | character varying(64) |\n record_physical_location | text |\n record_physical_location_id | text |\n record_created_date | timestamp with time zone |\n record_aggregated_date | timestamp with time zone |\n record_last_review_comment | text |\n record_last_review_date | timestamp with time zone |\n record_next_review_date | timestamp with time zone |\n record_originated_date | timestamp with time zone |\n record_is_vital | boolean | not null default false\n storage_plan_state | text | not null default 'New'::text\n cut_off_date | timestamp with time zone |\n dispose_date | timestamp with time zone |\n archive_date | timestamp with time zone |\nIndexes:\n \"document_head__id__uniq_key\" PRIMARY KEY, btree (id)\n \"document_head__parent_id__path_name__unq_idx\" UNIQUE, btree (parent_id, path_name) WHERE state::text = 'ACTIVE'::text\n \"document_head__snapshot_id__unq\" UNIQUE, btree (snapshot_id)\n \"document_head__base_type__idx\" btree (base_type) WHERE state::text <> 'DELETED'::text\n \"document_head__fileplan_node_id__idx\" btree (fileplan_node_id)\n \"document_head__id__idx\" btree (id) WHERE state::text <> 'DELETED'::text\n \"document_head__id_path__btree__idx\" btree (id_path) WHERE state::text <> 'DELETED'::text\n \"document_head__id_path__gist__idx\" gist (id_path)\n \"document_head__ltx_id__idx\" btree (ltx_id)\n \"document_head__origin_id__hotfix__idx\" btree (origin_id) WHERE origin_id IS NOT NULL\n \"document_head__origin_id__idx\" btree (origin_id) WHERE state::text <> 'DELETED'::text AND origin_id IS NOT NULL\n \"document_head__parent_id__idx\" btree (parent_id)\n \"document_head__properties_cache__contact_username_idx\" btree ((properties_cache -> 'person_meta_info.username'::text)) WHERE base_type::text = 'Contact'::text AND exist(properties_cache, 'person_meta_info.username'::text)\n \"document_head__properties_cache__emailmeta_message_id__idx\" btree ((properties_cache -> 
'emailmeta.message_id'::text)) WHERE base_type::text = 'File'::text AND exist(properties_cache, 'emailmeta.message_id'::text)\n \"document_head__properties_cache__idx\" gist (properties_cache) WHERE state::text <> 'DELETED'::text\n \"document_head__properties_cache__project_identifier__idx\" btree ((properties_cache -> 'project.identifier'::text)) WHERE base_type::text = 'Project'::text AND exist(properties_cache, 'project.identifier'::text)\n \"document_head__properties_cache__published_origin__idx\" btree ((properties_cache -> 'file_published_origin_id.origin_id'::text)) WHERE base_type::text = 'File'::text AND exist(properties_cache, 'file_published_origin_id.origin_id'::text)\n \"document_head__state__idx\" btree (state)\n \"document_head__storage_plan_state__idx\" btree (storage_plan_state) WHERE state::text <> 'DELETED'::text\nCheck constraints:\n \"document_base_storage_plan_state_check\" CHECK (storage_plan_state = ANY (ARRAY['NEW'::text, 'READY_FOR_CUTOFF'::text, 'CUTOFF'::text, 'READY_FOR_DISPOSITION'::text, 'DISPOSED'::text]))\n \"document_head__sensitivity_check\" CHECK (sensitivity = ANY (ARRAY[10, 20, 30]))\nForeign-key constraints:\n \"document_head__created_by__fk\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n \"document_head__modified_by__fk\" FOREIGN KEY (modified_by) REFERENCES auth_user(id)\n \"document_head__parent_id__fk\" FOREIGN KEY (parent_id) REFERENCES document(id)\n\nSome notes:\n1. properties stores json that for some records may be as large as 300k\n\nselect count(*) from document_head where length(properties) > 100000;\n count\n-------\n 535\n(1 row)\n\nselect count(*) from document_head where length(properties) > 20000;\n count\n-------\n 13917\n(1 row)\n\nselect count(*) from document_head where length(properties) > 1000;\n count\n-------\n 51708\n(1 row)\n\nselect count(*) from document_head where length(properties) > 300000;\n count\n-------\n 3\n(1 row)\n\nselect max(length(properties)) from document_head;\n max\n--------\n 334976\n(1 row)\n\n2. properties_cache stores parsed properties: key is jsonpath of a key in json and value is a value.\n3. all results here are retrieved after running `analyze document_head` and `vacuum document_head` manually.\n4. I tried different work_mem settings up to 100MB and there's no effect on the main issue described below.\n5. 
I haven't tested disks speed as first of all I think it is irrelevant to the problem and it is not easy to do as this is production system.\n\nThe function that is used in the query:\n\nCREATE OR REPLACE FUNCTION public.get_doc_path(document_id character varying)\n RETURNS ltree\n LANGUAGE plpgsql\n STABLE\nAS $function$\nDECLARE\n path ltree;\nBEGIN\n select id_path into path from document_head where id = document_id;\n RETURN path;\nEND $function$\n\n\nThe original query is rather big one and the simplified version where the issue can still be demonstrated is:\n\nexplain (analyze, buffers)\nwith trees AS (\nSELECT d.id, d.snapshot_id , NULL :: text[] AS permissions\n FROM document_head AS d\n WHERE (d.id_path <@ get_doc_path('78157c60-45bc-42c1-9aad-c5651995db5c') \n\tAND d.id != '78157c60-45bc-42c1-9aad-c5651995db5c') AND d.state != 'DELETED'\n)\nSELECT COUNT(*) FROM trees;\n\nI get this plan https://explain.depesz.com/s/UQX4h\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=86227.19..86227.20 rows=1 width=0) (actual time=3878.775..3878.776 rows=1 loops=1)\n Buffers: shared hit=747698, temp written=1587\n CTE trees\n -> Seq Scan on document_head d (cost=0.00..82718.21 rows=155955 width=74) (actual time=0.211..3620.044 rows=154840 loops=1)\n Filter: (((id)::text <> '78157c60-45bc-42c1-9aad-c5651995db5c'::text) AND ((state)::text <> 'DELETED'::text) AND (id_path <@ get_doc_path('78157c60-45bc-42c1-9aad-c5651995db5c'::character varying)))\n Rows Removed by Filter: 23357\n Buffers: shared hit=747698\n -> CTE Scan on trees (cost=0.00..3119.10 rows=155955 width=0) (actual time=0.215..3828.519 rows=154840 loops=1)\n Buffers: shared hit=747698, temp written=1587\n Total runtime: 3881.781 ms\n(10 rows)\n\nIf I change the predicate for ltree to use subquery the plan and execution time changes:\n\nexplain (analyze, buffers)\nwith trees AS (\nSELECT d.id, d.snapshot_id , NULL :: text[] AS permissions\n FROM document_head AS d\n WHERE (d.id_path <@ (select get_doc_path('78157c60-45bc-42c1-9aad-c5651995db5c'))\n\tAND d.id != '78157c60-45bc-42c1-9aad-c5651995db5c') AND d.state != 'DELETED'\n)\nSELECT COUNT(*) FROM trees;\n\n\nhttps://explain.depesz.com/s/eUR\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=763.19..763.20 rows=1 width=0) (actual time=430.519..430.519 rows=1 loops=1)\n Buffers: shared hit=47768, temp written=1587\n CTE trees\n -> Bitmap Heap Scan on document_head d (cost=82.05..759.20 rows=177 width=74) (actual time=70.703..249.419 rows=154840 loops=1)\n Recheck Cond: (id_path <@ $0)\n Rows Removed by Index Recheck: 11698\n Filter: (((id)::text <> '78157c60-45bc-42c1-9aad-c5651995db5c'::text) AND ((state)::text <> 'DELETED'::text))\n Rows Removed by Filter: 23\n Buffers: shared hit=47768\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.26 rows=1 width=0) (actual time=0.083..0.084 rows=1 loops=1)\n Buffers: shared hit=4\n -> Bitmap Index Scan on document_head__id_path__gist__idx (cost=0.00..81.74 rows=178 width=0) (actual time=68.326..68.326 rows=159979 loops=1)\n Index Cond: (id_path <@ $0)\n Buffers: shared hit=16238\n -> CTE Scan on trees (cost=0.00..3.54 rows=177 width=0) (actual time=70.707..388.714 rows=154840 loops=1)\n Buffers: shared hit=47768, 
temp written=1587\n Total runtime: 433.410 ms\n(18 rows)\n\nI can see that:\n1. both queries return exactly the same data and require the same underlying data to produce the result.\n2. when I use a subquery Postgres cannot estimate the number of records that are going to be found by <@.\n3. the number of buffers processed by the quicker query is 15 times smaller than for the slow one. I assume that this is why the slow query is so much slower.\n\nMy understanding of how the planner works is that it evaluates some plans and calculates their cost based on the settings and the available statistics.\nSettings define the relative cost of operations like random vs sequential I/O vs processing in-memory data.\nI assume that it is better to write queries that allow the planner to make correct estimates, and to tune settings so that the planner knows the correct cost of operations.\n\nMy questions:\n1. How to find out why the slow execution requires 15x more buffers?\n2. In the slow execution the planner either does not consider a plan similar to the quick one, or estimates it as worse. How to find out which of these cases (if any) is true? And what can I try to make the planner use the better plan?\n\nI understand that one option to try is to upgrade to a more recent version, and I'm going to test this, but it may take a while until the affected production system gets an upgrade.\n\n\nMore information:\n\nOS: Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-125-generic x86_64)\nPG version: PostgreSQL 9.3.14 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\nHistory: no history as this is a new query.\nHardware: aws m4.large (2 vCPUs, RAM 8GB) with gp2 (SSD) storage with throughput up to ~20MB/s.\n\n\nNon-default server settings:\n\n name | current_setting | source\n--------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------\n application_name | psql | client\n archive_command | /etc/rds/dbbin/pgscripts/rds_wal_archive %p | configuration file\n archive_mode | on | configuration file\n archive_timeout | 5min | configuration file\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 16 | configuration file\n client_encoding | UTF8 | client\n effective_cache_size | 3987336kB | configuration file\n fsync | on | configuration file\n hot_standby | off | configuration file\n listen_addresses | * | command line\n lo_compat_privileges | off | configuration file\n log_checkpoints | on | configuration file\n log_destination | stderr | configuration file\n log_directory | /rdsdbdata/log/error | configuration file\n log_file_mode | 0644 | configuration file\n log_filename | postgresql.log.%Y-%m-%d-%H | configuration file\n log_hostname | on | configuration file\n log_line_prefix | %t:%r:%u@%d:[%p]: | configuration file\n log_min_duration_statement | 1s | configuration file\n log_rotation_age | 1h | configuration file\n log_timezone | UTC | configuration file\n log_truncate_on_rotation | off | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 127MB | configuration file\n max_connections | 1000 | configuration file\n max_locks_per_transaction | 64 | configuration file\n max_prepared_transactions | 0 | configuration file\n max_stack_depth | 6MB | 
configuration file\n max_wal_senders | 5 | configuration file\n port | 5432 | configuration file\n rds.extensions | btree_gin,btree_gist,chkpass,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,ltree,pgcrypto,pgrowlocks,pg_stat_statements,pg_trgm,plcoffee,plls,plperl,plpgsql,pltcl,plv8,postgis,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,test_parser,tsearch2,unaccent,uuid-ossp | configuration file\n rds.internal_databases | rdsadmin,template0 | configuration file\n rds.superuser_variables | session_replication_role | configuration file\n shared_buffers | 1993664kB | configuration file\n shared_preload_libraries | rdsutils | configuration file\n ssl | on | configuration file\n ssl_ca_file | /rdsdbdata/rds-metadata/ca-cert.pem | configuration file\n ssl_cert_file | /rdsdbdata/rds-metadata/server-cert.pem | configuration file\n ssl_key_file | /rdsdbdata/rds-metadata/server-key.pem | configuration file\n ssl_renegotiation_limit | 0 | configuration file\n stats_temp_directory | /rdsdbdata/db/pg_stat_tmp | configuration file\n superuser_reserved_connections | 3 | configuration file\n synchronous_commit | on | configuration file\n TimeZone | UTC | configuration file\n unix_socket_directories | /tmp | configuration file\n unix_socket_group | rdsdb | configuration file\n unix_socket_permissions | 0700 | configuration file\n wal_keep_segments | 32 | configuration file\n wal_level | hot_standby | configuration file\n wal_receiver_timeout | 30s | configuration file\n wal_sender_timeout | 30s | configuration file\n(52 rows)
                                                                                                                                                                                                                                          | configuration file log_hostname                   | on                                                                                                                                                                                                                                                                                                                                            | configuration file log_line_prefix                | %t:%r:%u@%d:[%p]:                                                                                                                                                                                                                                                                                                                             | configuration file log_min_duration_statement     | 1s                                                                                                                                                                                                                                                                                                                                            | configuration file log_rotation_age               | 1h                                                                                                                                                                                                                                                                                                                                            | configuration file log_timezone                   | UTC                                                                                                                                                                                                                                                                                                                                           | configuration file log_truncate_on_rotation       | off                                                                                                                                                                                                                                                                                                                                           | configuration file logging_collector              | on                                                                                                                                                                                                                                                                                                                                            | configuration file maintenance_work_mem           | 127MB                                                                                                                                                                                                                                                                                                                                         | configuration file max_connections                | 1000                                                                                                                                                               
                                                                                                                                                                           | configuration file max_locks_per_transaction      | 64                                                                                                                                                                                                                                                                                                                                            | configuration file max_prepared_transactions      | 0                                                                                                                                                                                                                                                                                                                                             | configuration file max_stack_depth                | 6MB                                                                                                                                                                                                                                                                                                                                           | configuration file max_wal_senders                | 5                                                                                                                                                                                                                                                                                                                                             | configuration file port                           | 5432                                                                                                                                                                                                                                                                                                                                          | configuration file rds.extensions                 | btree_gin,btree_gist,chkpass,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,ltree,pgcrypto,pgrowlocks,pg_stat_statements,pg_trgm,plcoffee,plls,plperl,plpgsql,pltcl,plv8,postgis,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,test_parser,tsearch2,unaccent,uuid-ossp | configuration file rds.internal_databases         | rdsadmin,template0                                                                                                                                                                                                                                                                                                                            | configuration file rds.superuser_variables        | session_replication_role                                                                                                                                                                                                                                                                                                                      | configuration file shared_buffers                 | 1993664kB                                                                                                                                                                                                                         
                                                                                                            | configuration file shared_preload_libraries       | rdsutils                                                                                                                                                                                                                                                                                                                                      | configuration file ssl                            | on                                                                                                                                                                                                                                                                                                                                            | configuration file ssl_ca_file                    | /rdsdbdata/rds-metadata/ca-cert.pem                                                                                                                                                                                                                                                                                                           | configuration file ssl_cert_file                  | /rdsdbdata/rds-metadata/server-cert.pem                                                                                                                                                                                                                                                                                                       | configuration file ssl_key_file                   | /rdsdbdata/rds-metadata/server-key.pem                                                                                                                                                                                                                                                                                                        | configuration file ssl_renegotiation_limit        | 0                                                                                                                                                                                                                                                                                                                                             | configuration file stats_temp_directory           | /rdsdbdata/db/pg_stat_tmp                                                                                                                                                                                                                                                                                                                     | configuration file superuser_reserved_connections | 3                                                                                                                                                                                                                                                                                                                                             | configuration file synchronous_commit             | on                                                                                                                                                                                                                                                                                               
                                             | configuration file TimeZone                       | UTC                                                                                                                                                                                                                                                                                                                                           | configuration file unix_socket_directories        | /tmp                                                                                                                                                                                                                                                                                                                                          | configuration file unix_socket_group              | rdsdb                                                                                                                                                                                                                                                                                                                                         | configuration file unix_socket_permissions        | 0700                                                                                                                                                                                                                                                                                                                                          | configuration file wal_keep_segments              | 32                                                                                                                                                                                                                                                                                                                                            | configuration file wal_level                      | hot_standby                                                                                                                                                                                                                                                                                                                                   | configuration file wal_receiver_timeout           | 30s                                                                                                                                                                                                                                                                                                                                           | configuration file wal_sender_timeout             | 30s                                                                                                                                                                                                                                                                                                                                           | configuration file(52 rows)", "msg_date": "Fri, 1 Dec 2017 18:20:28 +0100", "msg_from": "Roman Konoval <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan for ltree predicate <@" }, { "msg_contents": "Roman Konoval <[email protected]> writes:\n> I have a problem on 9.3.14 with a query that accesses table:\n\nI think the root of the problem is your intermediate function:\n\n> CREATE OR 
REPLACE FUNCTION public.get_doc_path(document_id character varying)\n> RETURNS ltree\n> LANGUAGE plpgsql\n> STABLE\n> AS $function$\n> DECLARE\n> path ltree;\n> BEGIN\n> select id_path into path from document_head where id = document_id;\n> RETURN path;\n> END $function$\n\nThis is quite expensive, as it involves another table search, but the\nplanner doesn't know that since you've not marked it as having higher than\nnormal cost. The seqscan formulation of the query results in evaluating\nthis function afresh at most of the rows, whereas shoving it into an\nuncorrelated sub-select causes it to be evaluated only once. That, I\nthink, and not the seqscan-vs-indexscan aspect, is what makes the bitmap\nformulation go faster. Certainly you'd not expect that a bitmap scan that\nhas to hit most of the rows anyway is going to win over a seqscan.\n\nThe fact that the planner goes for a bitmap scan in the second formulation\nis an artifact of the fact that it doesn't try to pre-evaluate sub-selects\nfor selectivity estimation purposes, so you end up with a default estimate\nthat says that the <@ condition only selects a small fraction of the rows.\nNot sure if we should try to change that or not.\n\nI'd suggest setting the function's cost to 1000 or so and seeing if that\ndoesn't improve matters.\n\n(BTW, what tipped me off to this was that the \"buffers hit\" count for\nthe seqscan node was so high, several times more than the actual size\nof the table. I couldn't account for that until I realized that the\nfunction itself would be adding a few buffer hits per execution.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 01 Dec 2017 16:33:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan for ltree predicate <@" }, { "msg_contents": "Hi Tom,\n\nThanks for your help.\n\n> On Dec 1, 2017, at 22:33, Tom Lane <[email protected]> wrote:\n> \n> \n> The seqscan formulation of the query results in evaluating\n> this function afresh at most of the rows\n\nThe function is defined as STABLE. I though that means that there is no need\n to reevaluate it on every row as input parameter is the same for every row and\n return value will be the same during the same query execution. Do I understand\n incorrectly what STABLE means?\nWhy is the function evaluated more than once?\n\n> , whereas shoving it into an\n> uncorrelated sub-select causes it to be evaluated only once. That, I\n> think, and not the seqscan-vs-indexscan aspect, is what makes the bitmap\n> formulation go faster. 
Certainly you'd not expect that a bitmap scan that\n> has to hit most of the rows anyway is going to win over a seqscan.\n> \n> The fact that the planner goes for a bitmap scan in the second formulation\n> is an artifact of the fact that it doesn't try to pre-evaluate sub-selects\n> for selectivity estimation purposes, so you end up with a default estimate\n> that says that the <@ condition only selects a small fraction of the rows.\n> Not sure if we should try to change that or not.\n> \n> I'd suggest setting the function's cost to 1000 or so and seeing if that\n> doesn't improve matters.\n> \n\n\nIf I set function cost to 1000 I get slightly better plan but still 3.5 more buffers are read when compared to bitmap scan which as you wrote one would expect to be slower than seq scan.\nHere is the plan:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=216438.81..216438.82 rows=1 width=0) (actual time=1262.244..1262.245 rows=1 loops=1)\n Buffers: shared hit=169215\n CTE trees\n -> Index Scan using document_head__id_path__gist__idx on document_head d (cost=2.91..212787.85 rows=162265 width=74) (actual time=0.115..727.119 rows=154854 loops=1)\n Index Cond: (id_path <@ get_doc_path('78157c60-45bc-42c1-9aad-c5651995db5c'::character varying))\n Filter: (((id)::text <> '78157c60-45bc-42c1-9aad-c5651995db5c'::text) AND ((state)::text <> 'DELETED'::text))\n Rows Removed by Filter: 23\n Buffers: shared hit=169215\n -> CTE Scan on trees (cost=0.00..3245.30 rows=162265 width=0) (actual time=0.119..1118.899 rows=154854 loops=1)\n Buffers: shared hit=169215\n Total runtime: 1277.010 ms\n(11 rows)\n\nMy understanding is that the optimal plan in this case should read less data than bitmap scan by the amount of buffers hit by bitmap index scan. \nIt should read roughly all buffers of the table itself. 
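(For reference, the cost setting mentioned above does not require redefining the function; a minimal sketch of how it can be applied, assuming the get_doc_path signature quoted earlier:

ALTER FUNCTION public.get_doc_path(character varying) COST 1000;  -- COST is in units of cpu_operator_cost

With that in place the planner treats each call as roughly a thousand operator evaluations, which is what makes plans that evaluate the function once per query look cheaper than plans that call it for every row.)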
Something like the query with predicate using ltree literal instead of function invocation:\n\nexplain (analyze, buffers)\nwith trees AS (\nSELECT d.id, d.snapshot_id , NULL :: text[] AS permissions\n FROM document_head AS d\n WHERE (d.id_path <@ '869c0187_51ae_4deb_a36f_0425fdafda6e.78157c60_45bc_42c1_9aad_c5651995db5c'::ltree AND d.id != '78157c60-45bc-42c1-9aad-c5651995db5c') AND d.state != 'DELETED'\n)\nSELECT COUNT(*) FROM trees;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n
 Aggregate (cost=42114.02..42114.03 rows=1 width=0) (actual time=997.427..997.427 rows=1 loops=1)\n Buffers: shared hit=35230\n CTE trees\n -> Seq Scan on document_head d (cost=0.00..38463.06 rows=162265 width=74) (actual time=0.013..593.082 rows=154854 loops=1)\n Filter: ((id_path <@ '869c0187_51ae_4deb_a36f_0425fdafda6e.78157c60_45bc_42c1_9aad_c5651995db5c'::ltree) AND ((id)::text <> '78157c60-45bc-42c1-9aad-c5651995db5c'::text) AND ((state)::text <> 'DELETED'::text))\n Rows Removed by Filter: 23357\n Buffers: shared hit=35230\n -> CTE Scan on trees (cost=0.00..3245.30 rows=162265 width=0) (actual time=0.017..888.076 rows=154854 loops=1)\n Buffers: shared hit=35230\n Total runtime: 1011.565 ms\n(10 rows)\n\n\n
The question is if it possible to get plan like that using function or some other way to get ltree value for given document_head.id value in one query?\n\nAs an alternative I can get ltree value with the separate query but this would require\n1. a round-trip to postgres\n2. me to change isolation level to REPEATABLE READ to make sure that I get consistent result \nso I would like to avoid that.\n\nRegards,\nRoman Konoval\n", "msg_date": "Sat, 2 Dec 2017 03:34:43 +0100", "msg_from": "Roman Konoval <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan for ltree predicate <@" } ]
[ { "msg_contents": "Hi,\n\nWe recently had an issue in production, where a bitmap scan was chosen \ninstead of an index scan. Despite being 30x slower, the bitmap scan had \nabout the same cost as the index scan.\n\nI've found some cases where similar issues with bitmap scans were \nreported before:\n\nhttps://www.postgresql.org/message-id/flat/1456154321.976561.528310154.6A623C0E%40webmail.messagingengine.com\n\nhttps://www.postgresql.org/message-id/flat/CA%2BwPC0MRMhF_8fD9dc8%2BQWZQzUvHahPRSv%3DxMtCmsVLSsy-p0w%40mail.gmail.com\n\nI've made a synthetic test, which kind of reproduces the issue:\n\nshared_buffers = 512MB\neffective_cache_size = 512MB\nwork_mem = 100MB\n\nset seq_page_cost = 1.0;\nset random_page_cost = 1.5;\nset cpu_tuple_cost = 0.01;\nset cpu_index_tuple_cost = 0.005;\nset cpu_operator_cost = 0.0025;\n\ndrop table if exists aaa;\ncreate table aaa as select (id%100)::int num, (id%10=1)::bool flag from \ngenerate_series(1, 10000000) id;\ncreate index i1 on aaa  (num);\ncreate index i2 on aaa  (flag);\nanalyze aaa;\n\nselect relname, reltuples::bigint, relpages::bigint, \n(reltuples/relpages)::bigint tpp from pg_class where relname \nin('aaa','i1','i2') order by relname;\n\"aaa\";9999985;44248;226\n\"i1\";9999985;27422;365\n\"i2\";9999985;27422;365\n\nI've been running the same query while enabling and disabling different \nkinds of scans:\n\n1) set enable_bitmapscan = on;  set enable_indexscan = off; set \nenable_seqscan = off;\n2) set enable_bitmapscan = off; set enable_indexscan = on;  set \nenable_seqscan = off;\n3) set enable_bitmapscan = off; set enable_indexscan = off; set \nenable_seqscan = on;\n\nThe query was:\nexplain (analyze,verbose,costs,buffers)\nselect count(*) from aaa where num = 1 and flag = true;\n\nHere are the results for PostgreSQL 9.6 (for 9.3 and 10.1 the results \nare very similar):\n\n1) Aggregate  (cost=24821.70..24821.71 rows=1 width=8) (actual \ntime=184.591..184.591 rows=1 loops=1)\n   Output: count(*)\n   Buffers: shared hit=47259\n   ->  Bitmap Heap Scan on public.aaa  (cost=13038.21..24796.22 \nrows=10189 width=0) (actual time=122.492..178.006 rows=100000 loops=1)\n         Output: num, flag\n         Recheck Cond: (aaa.num = 1)\n         Filter: aaa.flag\n         Heap Blocks: exact=44248\n         Buffers: shared hit=47259\n         ->  BitmapAnd  (cost=13038.21..13038.21 rows=10189 width=0) \n(actual time=110.699..110.699 rows=0 loops=1)\n               Buffers: shared hit=3011\n               ->  Bitmap Index Scan on i1  (cost=0.00..1158.94 \nrows=99667 width=0) (actual time=19.600..19.600 rows=100000 loops=1)\n                     Index Cond: (aaa.num = 1)\n                     Buffers: shared hit=276\n               ->  Bitmap Index Scan on i2  (cost=0.00..11873.92 \nrows=1022332 width=0) (actual time=81.676..81.676 rows=1000000 loops=1)\n                     Index Cond: (aaa.flag = true)\n                     Buffers: shared hit=2735\nPlanning time: 0.104 ms\nExecution time: 184.988 ms\n\n2) Aggregate  (cost=67939.09..67939.10 rows=1 width=8) (actual \ntime=67.510..67.510 rows=1 loops=1)\n   Output: count(*)\n   Buffers: shared hit=44524\n   ->  Index Scan using i1 on public.aaa  (cost=0.44..67910.95 \nrows=11256 width=0) (actual time=0.020..61.180 rows=100000 loops=1)\n         Output: num, flag\n         Index Cond: (aaa.num = 1)\n         Filter: aaa.flag\n         Buffers: shared hit=44524\nPlanning time: 0.096 ms\nExecution time: 67.543 ms\n\n3) Aggregate  (cost=169276.49..169276.50 rows=1 width=8) (actual \ntime=977.063..977.063 
rows=1 loops=1)\n   Output: count(*)\n   Buffers: shared hit=44248\n   ->  Seq Scan on public.aaa  (cost=0.00..169248.35 rows=11256 width=0) \n(actual time=0.018..969.294 rows=100000 loops=1)\n         Output: num, flag\n         Filter: (aaa.flag AND (aaa.num = 1))\n         Rows Removed by Filter: 9900000\n         Buffers: shared hit=44248\nPlanning time: 0.099 ms\nExecution time: 977.094 ms\n\n\nThe bitmap scan version runs more than twice slower than the one with \nindex scan, while being costed at more than twice cheaper.\n\nI've tried to increase cpu_tuple_cost and cpu_index_tuple_cost, and this \nbehavior remains after 6x increase in values. Although the difference in \ncosts becomes much less. After increasing the settings more than 6x, \nPostgreSQL decides to use a different plan for bitmap scans, so it's \nhard to make conclusions at that point.\n\nCould such cases be fixed with tuning of cost settings, or that's just \nhow PostgreSQL estimates bitmap scans and this can't be fixed without \nmodifying the optimizer? Or am I missing something and that's the \nexpected behavior? Thoughts?\n\nRegards,\nVitaliy\n\n\n", "msg_date": "Fri, 1 Dec 2017 19:40:08 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap scan is undercosted?" }, { "msg_contents": "On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n> We recently had an issue in production, where a bitmap scan was chosen\n> instead of an index scan. Despite being 30x slower, the bitmap scan had\n> about the same cost as the index scan.\n\nMe too, see also:\nhttps://www.postgresql.org/message-id/flat/CAH2-WzkRTggiy_LKQUu-oViyp6y_Hhz-a1yWacPy4tcYWV1HoA%40mail.gmail.com#CAH2-WzkRTggiy_LKQUu-oViyp6y_Hhz-a1yWacPy4tcYWV1HoA@mail.gmail.com\n\n> drop table if exists aaa;\n> create table aaa as select (id%100)::int num, (id%10=1)::bool flag from\n> generate_series(1, 10000000) id;\n> create index i1 on aaa� (num);\n> create index i2 on aaa� (flag);\n> analyze aaa;\n> \n> select relname, reltuples::bigint, relpages::bigint,\n> (reltuples/relpages)::bigint tpp from pg_class where relname\n> in('aaa','i1','i2') order by relname;\n> \"aaa\";9999985;44248;226\n> \"i1\";9999985;27422;365\n> \"i2\";9999985;27422;365\n> \n> The query was:\n> explain (analyze,verbose,costs,buffers)\n> select count(*) from aaa where num = 1 and flag = true;\n\nNote that id%100==1 implies flag='t', so the planner anticipates retrieving\nfewer rows than it will ultimately read, probably by 2x. It makes sense that\ncauses the index scan to be more expensive than expected, but that's only\nsomewhat important, since there's no joins involved.\n\nThe reason why it's more than a bit slower is due to the \"density\" [0] of the\nheap pages read. num=1 is more selective than flag=true, so it scans i1,\nreading 1% of the whole table. But it's not reading the first 1% or \nsome other 1% of the table, it reads tuples evenly distributed across the\nentire table (226*0.01 = ~2 rows of each page). Since the index was created\nafter the INSERT, the repeated keys (logical value: id%100) are read in\nphysical order on the heap, so this is basically doing a seq scan, but with the\nadditional overhead of reading the index, and maybe doing an fseek() before\neach/some read()s, too. You could confirm that by connecting strace to the\nbackend before starting the query.\n\nSince you did that using % and with indices added after the INSERT, you can't\nimprove it by reindexing (as I was able to for my case). 
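A quick way to see how scattered the matching rows are on the heap - a sketch against the test table from this thread - is to check the planner's correlation statistic:

SELECT attname, correlation FROM pg_stats WHERE tablename = 'aaa';

A correlation near 1 means the column's values follow the physical row order; a value near 0 (as for num here) means the matching rows are spread over nearly every page, which is what makes the plain index scan look so expensive to the planner.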
That's an elegant\ntest case, so thanks.\n\nI think shared_buffers=512MB is just small enough for this test to be painful\nfor 1e7 rows. I see the table+index is 559MB.\n\nI don't know if that's really similar to your production use case, but I would\nrecommend trying BRIN indices, which always require a bitmap scan. Note that\nsome things (like max()) that can use an btree index cannot use brin. PG10.1\nhas WITH autosummarize, which was important for our use, since we rarely do\nUPDATEs or DELETEs so tables are rarely vacuumed (only analyzed).\n\nJustin\n\n[0] I'm borrowing Jeff's language from here:\nhttps://www.postgresql.org/message-id/CAMkU%3D1xwGn%2BO0jhKsvrUrbW9MQp1YX0iB4Y-6h1mEz0ffBxK-Q%40mail.gmail.com\n\"density\" wasn't our problem, but it's a perfect description of this issue.\n\n", "msg_date": "Fri, 1 Dec 2017 12:34:27 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On 01/12/2017 20:34, Justin Pryzby wrote:\n> On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n>> We recently had an issue in production, where a bitmap scan was chosen\n>> instead of an index scan. Despite being 30x slower, the bitmap scan had\n>> about the same cost as the index scan.\n> Me too, see also:\n> https://www.postgresql.org/message-id/flat/CAH2-WzkRTggiy_LKQUu-oViyp6y_Hhz-a1yWacPy4tcYWV1HoA%40mail.gmail.com#CAH2-WzkRTggiy_LKQUu-oViyp6y_Hhz-a1yWacPy4tcYWV1HoA@mail.gmail.com\n>\n>> drop table if exists aaa;\n>> create table aaa as select (id%100)::int num, (id%10=1)::bool flag from\n>> generate_series(1, 10000000) id;\n>> create index i1 on aaa  (num);\n>> create index i2 on aaa  (flag);\n>> analyze aaa;\n>>\n>> select relname, reltuples::bigint, relpages::bigint,\n>> (reltuples/relpages)::bigint tpp from pg_class where relname\n>> in('aaa','i1','i2') order by relname;\n>> \"aaa\";9999985;44248;226\n>> \"i1\";9999985;27422;365\n>> \"i2\";9999985;27422;365\n>>\n>> The query was:\n>> explain (analyze,verbose,costs,buffers)\n>> select count(*) from aaa where num = 1 and flag = true;\n> Note that id%100==1 implies flag='t', so the planner anticipates retrieving\n> fewer rows than it will ultimately read, probably by 2x. It makes sense that\n> causes the index scan to be more expensive than expected, but that's only\n> somewhat important, since there's no joins involved.\nI don't think the planner is that smart to account for correlation \nbetween values in different columns. When different values are used in \nfilter (num=2, num=39, num=74), the query actually runs faster, while \nstill being about twice slower than using an index scan. But the cost \ndoes not change much. 
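(On PostgreSQL 10 this kind of cross-column dependency can at least be described to the planner with extended statistics; a sketch, assuming the same test table, with the statistics name chosen only for illustration:

CREATE STATISTICS aaa_num_flag_dep (dependencies) ON num, flag FROM aaa;  -- PostgreSQL 10+ only
ANALYZE aaa;

On 9.x there is no equivalent, so the planner just multiplies the per-column selectivities.)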
It jumps up and down for different values, but \nit's still close to the initial value.\n\nAggregate  (cost=24239.02..24239.03 rows=1 width=8) (actual \ntime=105.239..105.239 rows=1 loops=1)\n   Output: count(*)\n   Buffers: shared hit=3011\n   ->  Bitmap Heap Scan on public.aaa  (cost=12812.05..24214.48 \nrows=9816 width=0) (actual time=105.236..105.236 rows=0 loops=1)\n         Output: num, flag\n         Recheck Cond: (aaa.num = 39)\n         Filter: aaa.flag\n         Buffers: shared hit=3011\n         ->  BitmapAnd  (cost=12812.05..12812.05 rows=9816 width=0) \n(actual time=105.157..105.157 rows=0 loops=1)\n               Buffers: shared hit=3011\n               ->  Bitmap Index Scan on i1  (cost=0.00..1134.94 \nrows=97667 width=0) (actual time=15.725..15.725 rows=100000 loops=1)\n                     Index Cond: (aaa.num = 39)\n                     Buffers: shared hit=276\n               ->  Bitmap Index Scan on i2  (cost=0.00..11671.96 \nrows=1005003 width=0) (actual time=77.920..77.920 rows=1000000 loops=1)\n                     Index Cond: (aaa.flag = true)\n                     Buffers: shared hit=2735\nPlanning time: 0.104 ms\nExecution time: 105.553 ms\n\nAggregate  (cost=65785.99..65786.00 rows=1 width=8) (actual \ntime=48.587..48.587 rows=1 loops=1)\n   Output: count(*)\n   Buffers: shared hit=44524\n   ->  Index Scan using i1 on public.aaa  (cost=0.44..65761.45 rows=9816 \nwidth=0) (actual time=48.583..48.583 rows=0 loops=1)\n         Output: num, flag\n         Index Cond: (aaa.num = 39)\n         Filter: aaa.flag\n         Rows Removed by Filter: 100000\n         Buffers: shared hit=44524\nPlanning time: 0.097 ms\nExecution time: 48.620 ms\n\n>\n> The reason why it's more than a bit slower is due to the \"density\" [0] of the\n> heap pages read. num=1 is more selective than flag=true, so it scans i1,\n> reading 1% of the whole table. But it's not reading the first 1% or\n> some other 1% of the table, it reads tuples evenly distributed across the\n> entire table (226*0.01 = ~2 rows of each page). Since the index was created\n> after the INSERT, the repeated keys (logical value: id%100) are read in\n> physical order on the heap, so this is basically doing a seq scan, but with the\n> additional overhead of reading the index, and maybe doing an fseek() before\n> each/some read()s, too. You could confirm that by connecting strace to the\n> backend before starting the query.\n>\n> Since you did that using % and with indices added after the INSERT, you can't\n> improve it by reindexing (as I was able to for my case). That's an elegant\n> test case, so thanks.\n>\n> I think shared_buffers=512MB is just small enough for this test to be painful\n> for 1e7 rows. I see the table+index is 559MB.\n            table           | ~count  |    size    |   toast |  idx   | \nsize + toast + idx\n---------------------------+---------+------------+------------+--------+--------------------\n  aaa                       | 9999994 | 346 MB     | 0 bytes    | 428 MB \n| 774 MB\n\nBut the plan says all buffers are \"shared hit\", and none \"read\", so \nthat's probably not an issue.\n\n>\n> I don't know if that's really similar to your production use case, but I would\n> recommend trying BRIN indices, which always require a bitmap scan. Note that\n> some things (like max()) that can use an btree index cannot use brin. 
PG10.1\n> has WITH autosummarize, which was important for our use, since we rarely do\n> UPDATEs or DELETEs so tables are rarely vacuumed (only analyzed).\nYes, BRIN indexes should be beneficial for our case, because there is a \ndate column which grows over time (not strictly regularly, but still). \nUnfortunately, we're still migrating our databases from 9.3 to 9.6. \nAnyway, thanks for the advice.\n>\n> Justin\n>\n> [0] I'm borrowing Jeff's language from here:\n> https://www.postgresql.org/message-id/CAMkU%3D1xwGn%2BO0jhKsvrUrbW9MQp1YX0iB4Y-6h1mEz0ffBxK-Q%40mail.gmail.com\n> \"density\" wasn't our problem, but it's a perfect description of this issue.\n>\n\n\n", "msg_date": "Sat, 2 Dec 2017 01:08:03 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "I tried to reproduce this issue and couldn't, under PG95 and 10.1:\n\nOn Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin Pryzby wrote:\n> On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n> > We recently had an issue in production, where a bitmap scan was chosen\n> > instead of an index scan. Despite being 30x slower, the bitmap scan had\n> > about the same cost as the index scan.\n> \n> > drop table if exists aaa;\n> > create table aaa as select (id%100)::int num, (id%10=1)::bool flag from\n> > generate_series(1, 10000000) id;\n> > create index i1 on aaa� (num);\n> > create index i2 on aaa� (flag);\n> > analyze aaa;\n\nWhat is:\neffective_io_concurrency\nmax_parallel_workers_per_gather (I gather you don't have this)\n\nNote:\npostgres=# SELECT correlation FROM pg_stats WHERE tablename='aaa' AND attname='num';\ncorrelation | 0.00710112\n\n..so this is different from the issue corrected by the patch I created while\ntesting.\n\n> Note that id%100==1 implies flag='t', so the planner anticipates retrieving\n> fewer rows than it will ultimately read, probably by 2x. 
It makes sense that\n> causes the index scan to be more expensive than expected, but that's only\n> somewhat important, since there's no joins involved.\n\nI changed the query from COUNT(*) TO * for easier to read explain:\n\nCREATE TABLE aaa AS SELECT (id%100)::int num, (id%10=1)::bool flag FROM generate_series(1, 10000000) id;\nCREATE INDEX i1 ON aaa(num);\nCREATE INDEX i2 ON aaa (flag);\nANALYZE VERBOSE aaa;\nEXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag=true;\n Bitmap Heap Scan on public.aaa (cost=20652.98..45751.75 rows=10754 width=5) (actual time=85.314..185.107 rows=100000 loops=1)\n -> BitmapAnd (cost=20652.98..20652.98 rows=10754 width=0) (actual time=163.220..163.220 rows=0 loops=1)\n -> Bitmap Index Scan on i1 (cost=0.00..1965.93 rows=106333 width=0) (actual time=26.943..26.943 rows=100000 loops=1)\n -> Bitmap Index Scan on i2 (cost=0.00..18681.42 rows=1011332 width=0) (actual time=133.804..133.804 rows=1000000 loops=1)\n\n..which is what's wanted with no planner hints (PG10.1 here).\n\nSame on PG95:\npostgres=# EXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag=true;\n Bitmap Heap Scan on public.aaa (cost=19755.64..43640.32 rows=9979 width=5) (actual time=230.017..336.583 rows=100000 loops=1)\n -> BitmapAnd (cost=19755.64..19755.64 rows=9979 width=0) (actual time=205.242..205.242 rows=0 loops=1)\n -> Bitmap Index Scan on i1 (cost=0.00..1911.44 rows=103334 width=0) (actual time=24.911..24.911 rows=100000 loops=1)\n -> Bitmap Index Scan on i2 (cost=0.00..17838.96 rows=965670 width=0) (actual time=154.237..154.237 rows=1000000 loops=1)\n\nThe rowcount is off, but not a critical issue without a join.\n\nJustin\n\n", "msg_date": "Fri, 1 Dec 2017 17:11:05 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On 02/12/2017 01:11, Justin Pryzby wrote:\n> I tried to reproduce this issue and couldn't, under PG95 and 10.1:\n>\n> On Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin Pryzby wrote:\n>> On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n>>> We recently had an issue in production, where a bitmap scan was chosen\n>>> instead of an index scan. Despite being 30x slower, the bitmap scan had\n>>> about the same cost as the index scan.\n>>> drop table if exists aaa;\n>>> create table aaa as select (id%100)::int num, (id%10=1)::bool flag from\n>>> generate_series(1, 10000000) id;\n>>> create index i1 on aaa  (num);\n>>> create index i2 on aaa  (flag);\n>>> analyze aaa;\n> What is:\n> effective_io_concurrency\n> max_parallel_workers_per_gather (I gather you don't have this)\neffective_io_concurrency = 0\nmax_parallel_workers_per_gather = 0\n\nDid you notice random_page_cost = 1.5 ?\n\nFor this test I'm using SSD and Windows (if that matters). On production \nwe also use SSD, hence lower random_page_cost. But with the default \nrandom_page_cost=4.0, the difference in cost between the index scan plan \nand the bitmap scan plan is even bigger.\n>\n> Note:\n> postgres=# SELECT correlation FROM pg_stats WHERE tablename='aaa' AND attname='num';\n> correlation | 0.00710112\n>\n> ..so this is different from the issue corrected by the patch I created while\n> testing.\n>\n>> Note that id%100==1 implies flag='t', so the planner anticipates retrieving\n>> fewer rows than it will ultimately read, probably by 2x. 
It makes sense that\n>> causes the index scan to be more expensive than expected, but that's only\n>> somewhat important, since there's no joins involved.\n> I changed the query from COUNT(*) TO * for easier to read explain:\n>\n> CREATE TABLE aaa AS SELECT (id%100)::int num, (id%10=1)::bool flag FROM generate_series(1, 10000000) id;\n> CREATE INDEX i1 ON aaa(num);\n> CREATE INDEX i2 ON aaa (flag);\n> ANALYZE VERBOSE aaa;\n> EXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag=true;\n> Bitmap Heap Scan on public.aaa (cost=20652.98..45751.75 rows=10754 width=5) (actual time=85.314..185.107 rows=100000 loops=1)\n> -> BitmapAnd (cost=20652.98..20652.98 rows=10754 width=0) (actual time=163.220..163.220 rows=0 loops=1)\n> -> Bitmap Index Scan on i1 (cost=0.00..1965.93 rows=106333 width=0) (actual time=26.943..26.943 rows=100000 loops=1)\n> -> Bitmap Index Scan on i2 (cost=0.00..18681.42 rows=1011332 width=0) (actual time=133.804..133.804 rows=1000000 loops=1)\n>\n> ..which is what's wanted with no planner hints (PG10.1 here).\nYes, that's what you get without planner hints, but it's strange to get \nthis plan, when there is another one, which runs 2-3 times faster, but \nhappens to be estimated to be twice more costly than the one with bitmap \nscans:\n\n# set enable_bitmapscan = off; set enable_indexscan = on;  set \nenable_seqscan = off;\n# explain analyze select * from aaa where num = 1 and flag = true;\nIndex Scan using i1 on aaa  (cost=0.44..66369.81 rows=10428 width=5) \n(actual time=0.020..57.765 rows=100000 loops=1)\n\nvs.\n\n# set enable_bitmapscan = on;  set enable_indexscan = off; set \nenable_seqscan = off;\n# explain analyze select * from aaa where num = 1 and flag = true;\nBitmap Heap Scan on aaa  (cost=13099.33..25081.40 rows=10428 width=5) \n(actual time=122.137..182.811 rows=100000 loops=1)\n   ->  BitmapAnd  (cost=13099.33..13099.33 rows=10428 width=0) (actual \ntime=110.168..110.168 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..1181.44 rows=101667 \nwidth=0) (actual time=20.845..20.845 rows=100000 loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..11912.43 rows=1025666 \nwidth=0) (actual time=80.323..80.323 rows=1000000 loops=1)\n\n>\n> Same on PG95:\n> postgres=# EXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag=true;\n> Bitmap Heap Scan on public.aaa (cost=19755.64..43640.32 rows=9979 width=5) (actual time=230.017..336.583 rows=100000 loops=1)\n> -> BitmapAnd (cost=19755.64..19755.64 rows=9979 width=0) (actual time=205.242..205.242 rows=0 loops=1)\n> -> Bitmap Index Scan on i1 (cost=0.00..1911.44 rows=103334 width=0) (actual time=24.911..24.911 rows=100000 loops=1)\n> -> Bitmap Index Scan on i2 (cost=0.00..17838.96 rows=965670 width=0) (actual time=154.237..154.237 rows=1000000 loops=1)\n>\n> The rowcount is off, but not a critical issue without a join.\n>\n> Justin\n>\n\n\n", "msg_date": "Sat, 2 Dec 2017 01:54:09 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Fri, Dec 01, 2017 at 05:11:04PM -0600, Justin Pryzby wrote:\n> I tried to reproduce this issue and couldn't, under PG95 and 10.1:\n\nI'm embarassed to say that I mis-read your message, despite you're amply clear\nsubject. 
You're getting a bitmap scan but you'd prefer to get an index scan.\nI anticipated the opposite problem (which is what I've had issues with myself).\n\n> On Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin Pryzby wrote:\n> > On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n> > > We recently had an issue in production, where a bitmap scan was chosen\n> > > instead of an index scan. Despite being 30x slower, the bitmap scan had\n> > > about the same cost as the index scan.\n> \n> Note:\n> postgres=# SELECT correlation FROM pg_stats WHERE tablename='aaa' AND attname='num';\n> correlation | 0.00710112\n> \n> ..so this is different from the issue corrected by the patch I created while\n> testing.\n\nActually, that the table is \"not correlated\" on \"num\" column is maybe the\nprimary reason why PG avoids using an index scan. It (more or less correctly)\ndeduces that it's going to have to \"read\" a large fraction of the pages (even\nif only to process a small fraction of the rows), which is costly, except it's\nall cached.. In your case, that overly-penalizes the index scan.\n\nThis is cost_index() and cost_bitmap_heap_scan() in costsize.c. Since the\nindex is uncorrelated, it's returning something close to max_IO_cost. It looks\nlike effective_cache_size only affects index_pages_fetched().\n\nI'm going to try to dig some more into it. Maybe there's evidence to\nre-evaluate one of these:\n\ncost_index()\n| run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\nor\ncost_bitmap_heap_scan()\n| cost_per_page = spc_random_page_cost - \n| (spc_random_page_cost - spc_seq_page_cost)\n| * sqrt(pages_fetched / T);\n\nJustin\n\n", "msg_date": "Fri, 1 Dec 2017 20:06:03 -0600", "msg_from": "[email protected] (Justin Pryzby)", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Fri, Dec 1, 2017 at 3:54 PM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n> On 02/12/2017 01:11, Justin Pryzby wrote:\n>\n>> I tried to reproduce this issue and couldn't, under PG95 and 10.1:\n>>\n>> On Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin Pryzby wrote:\n>>\n>>> On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy Garnashevich wrote:\n>>>\n>>>> We recently had an issue in production, where a bitmap scan was chosen\n>>>> instead of an index scan. Despite being 30x slower, the bitmap scan had\n>>>> about the same cost as the index scan.\n>>>> drop table if exists aaa;\n>>>> create table aaa as select (id%100)::int num, (id%10=1)::bool flag from\n>>>> generate_series(1, 10000000) id;\n>>>> create index i1 on aaa (num);\n>>>> create index i2 on aaa (flag);\n>>>> analyze aaa;\n>>>>\n>>> What is:\n>> effective_io_concurrency\n>> max_parallel_workers_per_gather (I gather you don't have this)\n>>\n> effective_io_concurrency = 0\n> max_parallel_workers_per_gather = 0\n>\n> Did you notice random_page_cost = 1.5 ?\n>\n\nFor the aaa.num = 39 case, the faster index scan actually does hit 15 times\nmore buffers than the bitmap scan does. While 1.5 is lot lower than 4.0,\nit is still much higher than the true cost of reading a page from the\nbuffer cache. This why the index scan is getting punished. You could\nlower random_page_cost and seq_page_cost to 0, to remove those\nconsiderations. (I'm not saying you should do this on your production\nsystem, but rather you should do it as a way to investigate the issue. But\nit might make sense on production as well)\n\n\n> For this test I'm using SSD and Windows (if that matters). 
On production\n> we also use SSD, hence lower random_page_cost. But with the default\n> random_page_cost=4.0, the difference in cost between the index scan plan\n> and the bitmap scan plan is even bigger.\n\n\nSince it is all shared buffers hits, it doesn't matter if you have SSD for\nthis particular test case.\n\nCheers,\n\nJeff\n", "msg_date": "Fri, 1 Dec 2017 21:51:56 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" 
}, { "msg_contents": "On Sat, Dec 02, 2017 at 01:54:09AM +0200, Vitaliy Garnashevich wrote:\n> On 02/12/2017 01:11, Justin Pryzby wrote:\n> >..which is what's wanted with no planner hints (PG10.1 here).\n> Yes, that's what you get without planner hints, but it's strange to get this\n> plan, when there is another one, which runs 2-3 times faster, but happens to\n> be estimated to be twice more costly than the one with bitmap scans:\n> \n> # set enable_bitmapscan = off; set enable_indexscan = on;� set enable_seqscan = off;\n> # explain analyze select * from aaa where num = 1 and flag = true;\n> Index Scan using i1 on aaa� (cost=0.44..66369.81 rows=10428 width=5) (actual time=0.020..57.765 rows=100000 loops=1)\n> \n> vs.\n> \n> # set enable_bitmapscan = on;� set enable_indexscan = off; set enable_seqscan = off;\n> # explain analyze select * from aaa where num = 1 and flag = true;\n> Bitmap Heap Scan on aaa� (cost=13099.33..25081.40 rows=10428 width=5) (actual time=122.137..182.811 rows=100000 loops=1)\n\nI was able to get an index plan with:\n\nSET random_page_cost=1; SET cpu_index_tuple_cost=.04; -- default: 0.005; see selfuncs.c\npostgres=# EXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag=true; \n Index Scan using i1 on public.aaa (cost=0.43..50120.71 rows=10754 width=5) (actual time=0.040..149.580 rows=100000 loops=1)\n\nOr with:\nSET random_page_cost=1; SET cpu_operator_cost=0.03; -- default: 0.0025 see cost_bitmap_tree_node()\nEXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag= true; \n Index Scan using i1 on public.aaa (cost=5.22..49328.00 rows=10754 width=5) (actual time=0.051..109.082 rows=100000 loops=1)\n\nOr a combination trying to minimize the cost of the index scan:\npostgres=# SET random_page_cost=1; SET cpu_index_tuple_cost=.0017; SET cpu_operator_cost=0.03; EXPLAIN (analyze,verbose,costs,buffers) SELECT * FROM aaa WHERE num=1 AND flag= true; \n Index Scan using i1 on public.aaa (cost=5.22..48977.10 rows=10754 width=5) (actual time=0.032..86.883 rows=100000 loops=1)\n\nNot sure if that's reasonable, but maybe it helps to understand.\n\nJustin\n\n", "msg_date": "Sat, 2 Dec 2017 00:41:13 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On 02/12/2017 07:51, Jeff Janes wrote:\n> On Fri, Dec 1, 2017 at 3:54 PM, Vitaliy Garnashevich \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> On 02/12/2017 01:11, Justin Pryzby wrote:\n>\n> I tried to reproduce this issue and couldn't, under PG95 and 10.1:\n>\n> On Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin Pryzby wrote:\n>\n> On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy\n> Garnashevich wrote:\n>\n> We recently had an issue in production, where a bitmap\n> scan was chosen\n> instead of an index scan. Despite being 30x slower,\n> the bitmap scan had\n> about the same cost as the index scan.\n> drop table if exists aaa;\n> create table aaa as select (id%100)::int num,\n> (id%10=1)::bool flag from\n> generate_series(1, 10000000) id;\n> create index i1 on aaa  (num);\n> create index i2 on aaa  (flag);\n> analyze aaa;\n>\n> What is:\n> effective_io_concurrency\n> max_parallel_workers_per_gather (I gather you don't have this)\n>\n> effective_io_concurrency = 0\n> max_parallel_workers_per_gather = 0\n>\n> Did you notice random_page_cost = 1.5 ?\n>\n>\n> For the aaa.num = 39 case, the faster index scan actually does hit 15 \n> times more buffers than the bitmap scan does.  
While 1.5 is lot lower \n> than 4.0, it is still much higher than the true cost of reading a page \n> from the buffer cache.   This why the index scan is getting punished.  \n> You could lower random_page_cost and seq_page_cost to 0, to remove \n> those considerations.  (I'm not saying you should do this on your \n> production system, but rather you should do it as a way to investigate \n> the issue.  But it might make sense on production as well)\nseq_page_cost = 1.0\nrandom_page_cost = 1.0*\n*explain analyze select * from aaa where num = 2 and flag = true;\n\nBitmap Heap Scan on aaa  (cost=11536.74..20856.96 rows=10257 width=5) \n(actual time=108.338..108.338 rows=0 loops=1)\n   ->  BitmapAnd  (cost=11536.74..11536.74 rows=10257 width=0) (actual \ntime=108.226..108.226 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..1025.43 rows=100000 \nwidth=0) (actual time=18.563..18.563 rows=100000 loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..10505.93 rows=1025666 \nwidth=0) (actual time=78.493..78.493 rows=1000000 loops=1)\n\nIndex Scan using i1 on aaa  (cost=0.44..44663.58 rows=10257 width=5) \n(actual time=51.264..51.264 rows=0 loops=1)\n\nHere I've used the filter num = 2, which produces rows=0 at BitmapAnd, \nand thus avoids a lot of work at \"Bitmap Heap Scan\" node, while still \nleaving about the same proportion in bitmap vs index - the bitmap is \ntwice slower but twice less costly. It does not matter much which value \nto use for the filter, if it's other than num = 1.\n\n\nseq_page_cost = 0.0\nrandom_page_cost = 0.0\nexplain analyze select * from aaa where num = 2 and flag = true;\n\nBitmap Heap Scan on aaa  (cost=753.00..2003.00 rows=10257 width=5) \n(actual time=82.212..82.212 rows=0 loops=1)\n   ->  Bitmap Index Scan on i1  (cost=0.00..750.43 rows=100000 width=0) \n(actual time=17.401..17.401 rows=100000 loops=1)\n\nIndex Scan using i1 on aaa  (cost=0.44..1750.43 rows=10257 width=5) \n(actual time=49.766..49.766 rows=0 loops=1)\n\nThe bitmap plan was reduced to use only one bitmap scan, and finally it \ncosts more than the index plan. But I doubt that the settings \nseq_page_cost = random_page_cost = 0.0 should actually be used. 
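As an aside, if one does want to try Jeff's zero-cost suggestion, it can at least be scoped to a single transaction so that nothing else is affected. This is only a sketch, assuming a psql session against the same test database; SET LOCAL reverts automatically when the transaction ends.

begin;
set local seq_page_cost = 0;
set local random_page_cost = 0;
-- same test query as above, now costed without any page-fetch component
explain (analyze, buffers) select * from aaa where num = 2 and flag = true;
rollback;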
Probably \nit should be instead something like 1.0/1.0 or 1.0/1.1, but other costs \nincreased, to have more weight.\n\n\n# x4 tuple/operator costs - bitmap scan still a bit cheaper\nset seq_page_cost = 1.0;\nset random_page_cost = 1.0;\nset cpu_tuple_cost = 0.04;\nset cpu_index_tuple_cost = 0.02;\nset cpu_operator_cost = 0.01;\n\nBitmap Heap Scan on aaa  (cost=36882.97..46587.82 rows=10257 width=5) \n(actual time=106.045..106.045 rows=0 loops=1)\n   ->  BitmapAnd  (cost=36882.97..36882.97 rows=10257 width=0) (actual \ntime=105.966..105.966 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..3276.74 rows=100000 \nwidth=0) (actual time=15.977..15.977 rows=100000 loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..33584.72 rows=1025666 \nwidth=0) (actual time=79.208..79.208 rows=1000000 loops=1)\n\nIndex Scan using i1 on aaa  (cost=1.74..49914.89 rows=10257 width=5) \n(actual time=50.144..50.144 rows=0 loops=1)\n\n\n# x5 tuple/operator costs - switched to single bitmap index scan, but \nnow it costs more than the index scan\nset seq_page_cost = 1.0;\nset random_page_cost = 1.0;\nset cpu_tuple_cost = 0.05;\nset cpu_index_tuple_cost = 0.025;\nset cpu_operator_cost = 0.0125;\n\nBitmap Heap Scan on aaa  (cost=4040.00..54538.00 rows=10257 width=5) \n(actual time=82.338..82.338 rows=0 loops=1)\n   ->  Bitmap Index Scan on i1  (cost=0.00..4027.18 rows=100000 width=0) \n(actual time=19.541..19.541 rows=100000 loops=1)\n\nIndex Scan using i1 on aaa  (cost=2.17..51665.32 rows=10257 width=5) \n(actual time=49.545..49.545 rows=0 loops=1)\n\n\nI've also tried seq_page_cost = 1.0, random_page_cost = 1.1, but that \nwould require more than x10 increase in tuple/operator costs, to make \nbitmap more costly than index.\n\n>\n>\n> For this test I'm using SSD and Windows (if that matters). On\n> production we also use SSD, hence lower random_page_cost. But with\n> the default random_page_cost=4.0, the difference in cost between\n> the index scan plan and the bitmap scan plan is even bigger.\n>\n>\n> Since it is all shared buffers hits, it doesn't matter if you have SSD \n> for this particular test case.\nAgree. I've just tried to justify the value of random_page_cost, which \nis lower than like 2.0.\n\n> Cheers,\n>\n> Jeff\n\n\n\n\n\n\n\n\nOn 02/12/2017 07:51, Jeff Janes wrote:\n\n\n\n\nOn Fri, Dec 1, 2017 at 3:54 PM,\n Vitaliy Garnashevich <[email protected]>\n wrote:\nOn\n 02/12/2017 01:11, Justin Pryzby wrote:\n\n I tried to reproduce this issue and couldn't, under\n PG95 and 10.1:\n\n On Fri, Dec 01, 2017 at 12:34:27PM -0600, Justin\n Pryzby wrote:\n\n On Fri, Dec 01, 2017 at 07:40:08PM +0200, Vitaliy\n Garnashevich wrote:\n\n We recently had an issue in production, where a\n bitmap scan was chosen\n instead of an index scan. Despite being 30x\n slower, the bitmap scan had\n about the same cost as the index scan.\n drop table if exists aaa;\n create table aaa as select (id%100)::int num,\n (id%10=1)::bool flag from\n generate_series(1, 10000000) id;\n create index i1 on aaa  (num);\n create index i2 on aaa  (flag);\n analyze aaa;\n\n\n What is:\n effective_io_concurrency\n max_parallel_workers_per_gather (I gather you\n don't have this)\n\n\n effective_io_concurrency = 0\n max_parallel_workers_per_gather = 0\n\n Did you notice random_page_cost = 1.5 ?\n\n\n\nFor the aaa.num = 39 case, the faster index scan\n actually does hit 15 times more buffers than the bitmap\n scan does.  
While 1.5 is lot lower than 4.0, it is still\n much higher than the true cost of reading a page from the\n buffer cache.   This why the index scan is getting\n punished.  You could lower random_page_cost and \n seq_page_cost to 0, to remove those considerations.  (I'm\n not saying you should do this on your production system,\n but rather you should do it as a way to investigate the\n issue.  But it might make sense on production as well)\n\n\n\n\n seq_page_cost = 1.0\n random_page_cost = 1.0\nexplain analyze select * from aaa where num = 2 and flag = true;\n\n Bitmap Heap Scan on aaa  (cost=11536.74..20856.96 rows=10257\n width=5) (actual time=108.338..108.338 rows=0 loops=1)\n   ->  BitmapAnd  (cost=11536.74..11536.74 rows=10257 width=0)\n (actual time=108.226..108.226 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..1025.43\n rows=100000 width=0) (actual time=18.563..18.563 rows=100000\n loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..10505.93\n rows=1025666 width=0) (actual time=78.493..78.493 rows=1000000\n loops=1)\n\n Index Scan using i1 on aaa  (cost=0.44..44663.58 rows=10257 width=5)\n (actual time=51.264..51.264 rows=0 loops=1)\n\n Here I've used the filter num = 2, which produces rows=0 at\n BitmapAnd, and thus avoids a lot of work at \"Bitmap Heap Scan\" node,\n while still leaving about the same proportion in bitmap vs index -\n the bitmap is twice slower but twice less costly. It does not matter\n much which value to use for the filter, if it's other than num = 1.\n\n\n seq_page_cost = 0.0\n random_page_cost = 0.0\n explain analyze select * from aaa where num = 2 and flag = true;\n\n Bitmap Heap Scan on aaa  (cost=753.00..2003.00 rows=10257 width=5)\n (actual time=82.212..82.212 rows=0 loops=1)\n   ->  Bitmap Index Scan on i1  (cost=0.00..750.43 rows=100000\n width=0) (actual time=17.401..17.401 rows=100000 loops=1)\n\n Index Scan using i1 on aaa  (cost=0.44..1750.43 rows=10257 width=5)\n (actual time=49.766..49.766 rows=0 loops=1)\n\n The bitmap plan was reduced to use only one bitmap scan, and finally\n it costs more than the index plan. 
But I doubt that the settings\n seq_page_cost = random_page_cost = 0.0 should actually be used.\n Probably it should be instead something like 1.0/1.0 or 1.0/1.1, but\n other costs increased, to have more weight.\n\n\n # x4 tuple/operator costs - bitmap scan still a bit cheaper\n set seq_page_cost = 1.0;\n set random_page_cost = 1.0;\n set cpu_tuple_cost = 0.04;\n set cpu_index_tuple_cost = 0.02;\n set cpu_operator_cost = 0.01;\n\n Bitmap Heap Scan on aaa  (cost=36882.97..46587.82 rows=10257\n width=5) (actual time=106.045..106.045 rows=0 loops=1)\n   ->  BitmapAnd  (cost=36882.97..36882.97 rows=10257 width=0)\n (actual time=105.966..105.966 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..3276.74\n rows=100000 width=0) (actual time=15.977..15.977 rows=100000\n loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..33584.72\n rows=1025666 width=0) (actual time=79.208..79.208 rows=1000000\n loops=1)\n\n Index Scan using i1 on aaa  (cost=1.74..49914.89 rows=10257 width=5)\n (actual time=50.144..50.144 rows=0 loops=1)\n\n\n # x5 tuple/operator costs - switched to single bitmap index scan,\n but now it costs more than the index scan\n set seq_page_cost = 1.0;\n set random_page_cost = 1.0;\n set cpu_tuple_cost = 0.05;\n set cpu_index_tuple_cost = 0.025;\n set cpu_operator_cost = 0.0125;\n\n Bitmap Heap Scan on aaa  (cost=4040.00..54538.00 rows=10257 width=5)\n (actual time=82.338..82.338 rows=0 loops=1)\n   ->  Bitmap Index Scan on i1  (cost=0.00..4027.18 rows=100000\n width=0) (actual time=19.541..19.541 rows=100000 loops=1)\n\n Index Scan using i1 on aaa  (cost=2.17..51665.32 rows=10257 width=5)\n (actual time=49.545..49.545 rows=0 loops=1)\n\n\n I've also tried seq_page_cost = 1.0, \n random_page_cost = 1.1, but that would require more than x10\n increase in tuple/operator costs, to make bitmap more costly than\n index.\n\n\n\n\n\n\n\n\n\n For this test I'm using SSD and Windows (if that matters).\n On production we also use SSD, hence lower\n random_page_cost. But with the default\n random_page_cost=4.0, the difference in cost between the\n index scan plan and the bitmap scan plan is even bigger.\n\n\nSince it is all shared buffers hits, it doesn't matter\n if you have SSD for this particular test case.\n\n\n\n\n Agree. I've just tried to justify the value of random_page_cost,\n which is lower than like 2.0.\n\n\n\n\n\n \nCheers,\n\n\nJeff", "msg_date": "Sat, 2 Dec 2017 09:08:38 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Fri, Dec 1, 2017 at 11:08 PM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n>\n>\n> seq_page_cost = 0.0\n> random_page_cost = 0.0\n> explain analyze select * from aaa where num = 2 and flag = true;\n>\n> Bitmap Heap Scan on aaa (cost=753.00..2003.00 rows=10257 width=5) (actual\n> time=82.212..82.212 rows=0 loops=1)\n> -> Bitmap Index Scan on i1 (cost=0.00..750.43 rows=100000 width=0)\n> (actual time=17.401..17.401 rows=100000 loops=1)\n>\n> Index Scan using i1 on aaa (cost=0.44..1750.43 rows=10257 width=5)\n> (actual time=49.766..49.766 rows=0 loops=1)\n>\n> The bitmap plan was reduced to use only one bitmap scan, and finally it\n> costs more than the index plan.\n>\n\nRight, so there is a cpu costing problem (which could only be fixed by\nhacking postgresql and recompiling it), but it is much smaller of a problem\nthan the IO cost not being accurate due to the high hit rate. 
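For reference, the planner cost parameters being weighed against each other throughout this thread can be listed in one query. A small sketch; the values shown will of course vary per installation.

select name, setting, boot_val
from pg_settings
where name in ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
               'cpu_index_tuple_cost', 'cpu_operator_cost', 'effective_cache_size');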
Fixing the\nCPU costing problem is unlikely to make a difference to your real query.\nIf you set the page costs to zero, what happens to your real query?\n\n\n> But I doubt that the settings seq_page_cost = random_page_cost = 0.0\n> should actually be used.\n>\n\nWhy not? If your production server really has everything in memory during\nnormal operation, that is the correct course of action. If you ever\nrestart the server, then you could have some unpleasant time getting it\nback up to speed again, but pg_prewarm could help with that.\n\n\n> Probably it should be instead something like 1.0/1.0 or 1.0/1.1, but other\n> costs increased, to have more weight.\n>\n\nThis doesn't make any sense to me. Halving the page costs is\nmathematically the same as doubling all the other constants. But the first\nway of doing things says what you are doing, and the second way is an\nobfuscation of what you are doing.\n\n\n>\n> # x4 tuple/operator costs - bitmap scan still a bit cheaper\n> set seq_page_cost = 1.0;\n> set random_page_cost = 1.0;\n> set cpu_tuple_cost = 0.04;\n> set cpu_index_tuple_cost = 0.02;\n> set cpu_operator_cost = 0.01;\n>\n\nIf you really want to target the plan with the BitmapAnd, you should\nincrease cpu_index_tuple_cost and/or cpu_operator_cost but not increase\ncpu_tuple_cost. That is because the unselective bitmap index scan does\nnot incur any cpu_tuple_cost, but does incur index_tuple and operator\ncosts. Unfortunately all other index scans in the system will also be\nskewed by such a change if you make the change system-wide.\n\nIncidentally, the \"actual rows\" field of BitmapAnd is always zero. That\nfield is not implemented for that node type.\n\n\nWhy do you have an index on flag in the first place? What does the index\naccomplish, other than enticing the planner into bad plans? I don't know\nhow this translates back into your real query, but dropping that index\nshould be considered. Or replace both indexes with one on (num,flag).\n\nOr you can re-write the part of the WHERE clause in a way that it can't use\nan index, something like:\n\nand flag::text ='t'\n\nCheers,\n\nJeff\n\nOn Fri, Dec 1, 2017 at 11:08 PM, Vitaliy Garnashevich <[email protected]> wrote:\n\n\n\n seq_page_cost = 0.0\n random_page_cost = 0.0\n explain analyze select * from aaa where num = 2 and flag = true;\n\n Bitmap Heap Scan on aaa  (cost=753.00..2003.00 rows=10257 width=5)\n (actual time=82.212..82.212 rows=0 loops=1)\n   ->  Bitmap Index Scan on i1  (cost=0.00..750.43 rows=100000\n width=0) (actual time=17.401..17.401 rows=100000 loops=1)\n\n Index Scan using i1 on aaa  (cost=0.44..1750.43 rows=10257 width=5)\n (actual time=49.766..49.766 rows=0 loops=1)\n\n The bitmap plan was reduced to use only one bitmap scan, and finally\n it costs more than the index plan. Right, so there is a cpu costing problem (which could only be fixed by hacking postgresql and recompiling it), but it is much smaller of a problem than the IO cost not being accurate due to the high hit rate.  Fixing the CPU costing problem is unlikely to make a difference to your real query.  If you set the page costs to zero, what happens to your real query? But I doubt that the settings\n seq_page_cost = random_page_cost = 0.0 should actually be used.Why not?  If your production server really has everything in memory during normal operation, that is the correct course of action.  If you ever restart the server, then you could have some unpleasant time getting it back up to speed again, but pg_prewarm could help with that.   
\n Probably it should be instead something like 1.0/1.0 or 1.0/1.1, but\n other costs increased, to have more weight.This doesn't make any  sense to me.  Halving the page costs is mathematically the same as doubling all the other constants.  But the first way of doing things says what you are doing, and the second way is an obfuscation of what you are doing. \n\n # x4 tuple/operator costs - bitmap scan still a bit cheaper\n set seq_page_cost = 1.0;\n set random_page_cost = 1.0;\n set cpu_tuple_cost = 0.04;\n set cpu_index_tuple_cost = 0.02;\n set cpu_operator_cost = 0.01;If you really want to target the plan with the BitmapAnd, you should increase  cpu_index_tuple_cost and/or cpu_operator_cost but not increase cpu_tuple_cost.  That is because the  unselective bitmap index scan does not incur any cpu_tuple_cost, but does incur index_tuple and operator costs.  Unfortunately all other index scans in the system will also be skewed by such a change if you make the change system-wide.Incidentally, the \"actual rows\" field of BitmapAnd is always zero.  That field is not implemented for that node type.  Why do you have an index on flag in the first place?  What does the index accomplish, other than enticing the planner into bad plans?  I don't know how this translates back into your real query, but dropping that index should be considered.  Or replace both indexes with one on (num,flag).Or you can re-write the part of the WHERE clause in a way that it can't use an index, something like:and flag::text ='t'Cheers,Jeff", "msg_date": "Sat, 2 Dec 2017 13:17:47 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Fri, Dec 1, 2017 at 11:08 PM, Vitaliy Garnashevich <\n> [email protected]> wrote:\n>> # x4 tuple/operator costs - bitmap scan still a bit cheaper\n>> set seq_page_cost = 1.0;\n>> set random_page_cost = 1.0;\n>> set cpu_tuple_cost = 0.04;\n>> set cpu_index_tuple_cost = 0.02;\n>> set cpu_operator_cost = 0.01;\n\n> If you really want to target the plan with the BitmapAnd, you should\n> increase cpu_index_tuple_cost and/or cpu_operator_cost but not increase\n> cpu_tuple_cost. That is because the unselective bitmap index scan does\n> not incur any cpu_tuple_cost, but does incur index_tuple and operator\n> costs. Unfortunately all other index scans in the system will also be\n> skewed by such a change if you make the change system-wide.\n\nI think it'd be a serious error to screw around with your cost settings\non the basis of a single case in which the rowcount estimates are so\nfar off. It's really those estimates that are the problem AFAICS.\n\nThe core issue in this example is that, the way the test data is set up,\nthe \"flag = true\" condition actually adds no selectivity at all, because\nevery row with \"num = 1\" is certain to have \"flag = true\". If the planner\nrealized that, it would certainly not bother with BitmapAnd'ing the flag\nindex onto the results of the num index. 
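The perfect dependency Tom describes can be checked directly in the test data. A quick sketch against the aaa table as populated with id%100 and id%10=1; every num = 1 row should come back flagged, and no num = 2 row should.

select num,
       count(*) as total_rows,
       count(*) filter (where flag) as rows_with_flag
from aaa
where num in (1, 2)
group by num
order by num;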
But it doesn't know that those\ncolumns are correlated, so it supposes that adding the extra index will\ngive a 10x reduction in the number of heap rows that have to be visited\n(since it knows that only 1/10th of the rows have \"flag = true\").\n*That* is what causes the overly optimistic cost estimate for the\ntwo-index bitmapscan, and no amount of fiddling with the cost parameters\nwill make that better.\n\nI tried creating multiple-column statistics using the v10 facility for\nthat:\n\nregression=# create statistics s1 on num, flag from aaa;\nCREATE STATISTICS\nregression=# analyze aaa;\nANALYZE\n\nbut that changed the estimate not at all, which surprised me because\ndependency statistics are supposed to fix exactly this type of problem.\nI suspect there may be something in the extended-stats code that causes it\nnot to work right for boolean columns --- this wouldn't be excessively\nsurprising because of the way the planner messes around with converting\n\"flag = true\" to just \"flag\" and sometimes back again. But I've not\nlooked closer yet.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 02 Dec 2017 18:44:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Sat, Dec 2, 2017 at 3:44 PM, Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > On Fri, Dec 1, 2017 at 11:08 PM, Vitaliy Garnashevich <\n> > [email protected]> wrote:\n> >> # x4 tuple/operator costs - bitmap scan still a bit cheaper\n> >> set seq_page_cost = 1.0;\n> >> set random_page_cost = 1.0;\n> >> set cpu_tuple_cost = 0.04;\n> >> set cpu_index_tuple_cost = 0.02;\n> >> set cpu_operator_cost = 0.01;\n>\n> > If you really want to target the plan with the BitmapAnd, you should\n> > increase cpu_index_tuple_cost and/or cpu_operator_cost but not increase\n> > cpu_tuple_cost. That is because the unselective bitmap index scan does\n> > not incur any cpu_tuple_cost, but does incur index_tuple and operator\n> > costs. Unfortunately all other index scans in the system will also be\n> > skewed by such a change if you make the change system-wide.\n>\n> I think it'd be a serious error to screw around with your cost settings\n> on the basis of a single case in which the rowcount estimates are so\n> far off. It's really those estimates that are the problem AFAICS.\n>\n> The core issue in this example is that, the way the test data is set up,\n> the \"flag = true\" condition actually adds no selectivity at all, because\n> every row with \"num = 1\" is certain to have \"flag = true\". If the planner\n> realized that, it would certainly not bother with BitmapAnd'ing the flag\n> index onto the results of the num index. 
But it doesn't know that those\n> columns are correlated, so it supposes that adding the extra index will\n> give a 10x reduction in the number of heap rows that have to be visited\n> (since it knows that only 1/10th of the rows have \"flag = true\").\n> *That* is what causes the overly optimistic cost estimate for the\n> two-index bitmapscan, and no amount of fiddling with the cost parameters\n> will make that better.\n>\n\n\nBut he also tested with num=2 and num=39, which reverses the situation so\nthe bitmap is 100% selective rather than the 90% the planner thinks it will\nbe.\n\nBut it is still slower for him (I am having trouble replicating that exact\nbehavior), so building the bitmap to rule out 100% of the rows is\nempirically not worth it, I don't see how building it to rule out 90%, as\nthe planner things, would be any better.\n\n\n> I tried creating multiple-column statistics using the v10 facility for\n> that:\n>\n> regression=# create statistics s1 on num, flag from aaa;\n> CREATE STATISTICS\n> regression=# analyze aaa;\n> ANALYZE\n>\n> but that changed the estimate not at all, which surprised me because\n> dependency statistics are supposed to fix exactly this type of problem.\n> I suspect there may be something in the extended-stats code that causes it\n> not to work right for boolean columns --- this wouldn't be excessively\n> surprising because of the way the planner messes around with converting\n> \"flag = true\" to just \"flag\" and sometimes back again. But I've not\n> looked closer yet.\n>\n\nI think the non-extended stats code also has trouble with booleans.\npg_stats gives me a correlation of 0.8 or higher for the flag column.\n\nDue to that, when I disable bitmapscans and seqscans, I start getting slow\nindex scans on the wrong index, i2 rather than i1. I don't know why he\ndoesn't see that in his example.\n\nCheers,\n\nJeff\n\nOn Sat, Dec 2, 2017 at 3:44 PM, Tom Lane <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> On Fri, Dec 1, 2017 at 11:08 PM, Vitaliy Garnashevich <\n> [email protected]> wrote:\n>> # x4 tuple/operator costs - bitmap scan still a bit cheaper\n>> set seq_page_cost = 1.0;\n>> set random_page_cost = 1.0;\n>> set cpu_tuple_cost = 0.04;\n>> set cpu_index_tuple_cost = 0.02;\n>> set cpu_operator_cost = 0.01;\n\n> If you really want to target the plan with the BitmapAnd, you should\n> increase  cpu_index_tuple_cost and/or cpu_operator_cost but not increase\n> cpu_tuple_cost.  That is because the  unselective bitmap index scan does\n> not incur any cpu_tuple_cost, but does incur index_tuple and operator\n> costs.  Unfortunately all other index scans in the system will also be\n> skewed by such a change if you make the change system-wide.\n\nI think it'd be a serious error to screw around with your cost settings\non the basis of a single case in which the rowcount estimates are so\nfar off.  It's really those estimates that are the problem AFAICS.\n\nThe core issue in this example is that, the way the test data is set up,\nthe \"flag = true\" condition actually adds no selectivity at all, because\nevery row with \"num = 1\" is certain to have \"flag = true\".  If the planner\nrealized that, it would certainly not bother with BitmapAnd'ing the flag\nindex onto the results of the num index.  
But it doesn't know that those\ncolumns are correlated, so it supposes that adding the extra index will\ngive a 10x reduction in the number of heap rows that have to be visited\n(since it knows that only 1/10th of the rows have \"flag = true\").\n*That* is what causes the overly optimistic cost estimate for the\ntwo-index bitmapscan, and no amount of fiddling with the cost parameters\nwill make that better.But he also tested with num=2 and num=39, which reverses the situation so the bitmap is 100% selective rather than the 90% the planner thinks it will be.But it is still slower for him (I am having trouble replicating that exact behavior), so building the bitmap to rule out 100% of the rows is empirically not worth it, I don't see how building it to rule out 90%, as the planner things, would be any better.  \n\nI tried creating multiple-column statistics using the v10 facility for\nthat:\n\nregression=# create statistics s1 on num, flag from aaa;\nCREATE STATISTICS\nregression=# analyze aaa;\nANALYZE\n\nbut that changed the estimate not at all, which surprised me because\ndependency statistics are supposed to fix exactly this type of problem.\nI suspect there may be something in the extended-stats code that causes it\nnot to work right for boolean columns --- this wouldn't be excessively\nsurprising because of the way the planner messes around with converting\n\"flag = true\" to just \"flag\" and sometimes back again.  But I've not\nlooked closer yet.I think the non-extended stats code also has trouble with booleans.  pg_stats gives me a correlation  of 0.8 or higher for the flag column.Due to that, when I disable bitmapscans and seqscans, I start getting slow index scans on the wrong index, i2 rather than i1.  I don't know why he doesn't see that in his example. Cheers,Jeff", "msg_date": "Sat, 2 Dec 2017 17:27:51 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Sat, Dec 02, 2017 at 05:27:51PM -0800, Jeff Janes wrote:\n> I think the non-extended stats code also has trouble with booleans.\n> pg_stats gives me a correlation of 0.8 or higher for the flag column.\n\nIt's not due to the boolean though; you see the same thing if you do:\nCREATE INDEX aaa_f ON aaa((flag::text));\nANALYZE aaa;\ncorrelation | 0.81193\n\nor:\nALTER TABLE aaa ADD flag2 int; UPDATE aaa SET flag2= flag::int\ncorrelation | 0.81193\n\nI think it's caused by having so few (2) values to correlate.\n\nmost_common_vals | {f,t}\nmost_common_freqs | {0.9014,0.0986}\ncorrelation | 0.822792\n\nIt thinks there's somewhat-high correlation since it gets a list of x and y\nvalues (integer positions by logical and physical sort order) and 90% of the x\nlist (logical value) are the same value ('t'), and the CTIDs are in order on\nthe new index, so 90% of the values are 100% correlated.\n\nIt improves (by which I mean here that it spits out a lower number) if it's not\na 90/10 split:\n\nCREATE TABLE aaa5 AS SELECT (id%100)::int num, (id%10>5)::bool flag FROM generate_series(1, 10000000) id;\nCREATE INDEX ON aaa5 (flag);\n\ntablename | aaa5\nattname | flag\ncorrelation | 0.522184\n\nJustin\n\n", "msg_date": "Sat, 2 Dec 2017 22:04:30 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? 
- boolean correlation" }, { "msg_contents": "On 02/12/2017 23:17, Jeff Janes wrote:\n> Right, so there is a cpu costing problem (which could only be fixed by \n> hacking postgresql and recompiling it), but it is much smaller of a \n> problem than the IO cost not being accurate due to the high hit rate. \n> Fixing the CPU costing problem is unlikely to make a difference to \n> your real query.  If you set the page costs to zero, what happens to \n> your real query?\nI can't reproduce the exact issue on the real database any more. The \nquery started to use the slow bitmap scan recently, and had been doing \nso for some time lately, but now it's switched back to use the index \nscan. The table involved in the query gets modified a lot. It has \nhundreds of millions of rows. Lots of new rows are appended to it every \nday, the oldest rows are sometimes removed. The table is analyzed at \nleast daily. It's possible that statistics was updated and that caused \nthe query to run differently. But I still would like to understand why \nthat issue happened, and how to properly fix it, in case the issue returns.\n>\n> But I doubt that the settings seq_page_cost = random_page_cost =\n> 0.0 should actually be used.\n>\n>\n> Why not?  If your production server really has everything in memory \n> during normal operation, that is the correct course of action.  If you \n> ever restart the server, then you could have some unpleasant time \n> getting it back up to speed again, but pg_prewarm could help with that.\nIn the real database, not everything is in memory. There are 200GB+ of \nRAM, but DB is 500GB+. The table involved in the query itself is 60GB+ \nof data and 100GB+ of indexes. I'm running the test case in a way where \nall reads are done from RAM, only to make it easier to reproduce and to \navoid unrelated effects.\n\nAs far as know, costs in Postgres were designed to be relative to \nseq_page_cost, which for that reason is usually defined as 1.0. Even if \neverything would be in RAM, accesses to the pages would still not have \nzero cost. Setting 0.0 just seems too extreme, as all other non-zero \ncosts would become infinitely bigger.\n> If you really want to target the plan with the BitmapAnd, you should \n> increase cpu_index_tuple_cost and/or cpu_operator_cost but not \n> increase cpu_tuple_cost.  That is because the  unselective bitmap \n> index scan does not incur any cpu_tuple_cost, but does incur \n> index_tuple and operator costs.  Unfortunately all other index scans \n> in the system will also be skewed by such a change if you make the \n> change system-wide.\nExactly. I'd like to understand why the worse plan is being chosen, and \n1) if it's fixable by tuning costs, to figure out the right settings \nwhich could be used in production, 2) if there is a bug in Postgres \noptimizer, then to bring some attention to it, so that it's eventually \nfixed in one of future releases, 3) if Postgres is supposed to work this \nway, then at least I (and people who ever read this thread) would \nunderstand it better.\n\nRegards,\nVitaliy\n\n\n\n\n\n\n\n\nOn 02/12/2017 23:17, Jeff Janes wrote:\n\n\n\n\nRight, so there is a cpu costing\n problem (which could only be fixed by hacking postgresql and\n recompiling it), but it is much smaller of a problem than\n the IO cost not being accurate due to the high hit rate. \n Fixing the CPU costing problem is unlikely to make a\n difference to your real query.  
If you set the page costs to\n zero, what happens to your real query?\n\n\n\n I can't reproduce the exact issue on the real database any more. The\n query started to use the slow bitmap scan recently, and had been\n doing so for some time lately, but now it's switched back to use the\n index scan. The table involved in the query gets modified a lot. It\n has hundreds of millions of rows. Lots of new rows are appended to\n it every day, the oldest rows are sometimes removed. The table is\n analyzed at least daily. It's possible that statistics was updated\n and that caused the query to run differently. But I still would like\n to understand why that issue happened, and how to properly fix it,\n in case the issue returns.\n\n\n\n\n \n\nBut I doubt that the\n settings seq_page_cost = random_page_cost = 0.0 should\n actually be used.\n\n\n\nWhy not?  If your production server really has\n everything in memory during normal operation, that is the\n correct course of action.  If you ever restart the server,\n then you could have some unpleasant time getting it back\n up to speed again, but pg_prewarm could help with that.  \n\n\n\n\n\n In the real database, not everything is in memory. There are 200GB+\n of RAM, but DB is 500GB+. The table involved in the query itself is\n 60GB+ of data and 100GB+ of indexes. I'm running the test case in a\n way where all reads are done from RAM, only to make it easier to\n reproduce and to avoid unrelated effects.\n\n As far as know, costs in Postgres were designed to be relative to\n seq_page_cost, which for that reason is usually defined as 1.0. Even\n if everything would be in RAM, accesses to the pages would still not\n have zero cost. Setting 0.0 just seems too extreme, as all other\n non-zero costs would become infinitely bigger.\n\n\n\nIf you really want to target the plan\n with the BitmapAnd, you should increase \n cpu_index_tuple_cost and/or cpu_operator_cost but not\n increase cpu_tuple_cost.  That is because the  unselective\n bitmap index scan does not incur any cpu_tuple_cost, but\n does incur index_tuple and operator costs.  Unfortunately\n all other index scans in the system will also be skewed by\n such a change if you make the change system-wide.\n\n\n\n Exactly. I'd like to understand why the worse plan is being chosen,\n and 1) if it's fixable by tuning costs, to figure out the right\n settings which could be used in production, 2) if there is a bug in\n Postgres optimizer, then to bring some attention to it, so that it's\n eventually fixed in one of future releases, 3) if Postgres is\n supposed to work this way, then at least I (and people who ever read\n this thread) would understand it better.\n\n Regards,\n Vitaliy", "msg_date": "Sun, 3 Dec 2017 23:15:01 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On 03/12/2017 01:44, Tom Lane wrote:\n> I think it'd be a serious error to screw around with your cost settings\n> on the basis of a single case in which the rowcount estimates are so\n> far off. It's really those estimates that are the problem AFAICS.\n>\n> The core issue in this example is that, the way the test data is set up,\n> the \"flag = true\" condition actually adds no selectivity at all, because\n> every row with \"num = 1\" is certain to have \"flag = true\". If the planner\n> realized that, it would certainly not bother with BitmapAnd'ing the flag\n> index onto the results of the num index. 
But it doesn't know that those\n> columns are correlated, so it supposes that adding the extra index will\n> give a 10x reduction in the number of heap rows that have to be visited\n> (since it knows that only 1/10th of the rows have \"flag = true\").\n> *That* is what causes the overly optimistic cost estimate for the\n> two-index bitmapscan, and no amount of fiddling with the cost parameters\n> will make that better.\nHere I've tried to make a test which would not have correlation between \nthe two columns.\n\nshared_buffers = 512MB\neffective_cache_size = 512MB\nwork_mem = 100MB\n\nset seq_page_cost = 1.0;\nset random_page_cost = 1.5;\nset cpu_tuple_cost = 0.01;\nset cpu_index_tuple_cost = 0.005;\nset cpu_operator_cost = 0.0025;\n\ndrop table if exists aaa;\ncreate table aaa as select floor(random()*100)::int num, (random()*10 < \n1)::bool flag from generate_series(1, 10000000) id;\ncreate index i1 on aaa  (num);\ncreate index i2 on aaa  (flag);\nanalyze aaa;\n\nselect relname, reltuples::bigint, relpages::bigint, \n(reltuples/relpages)::bigint tpp from pg_class where relname \nin('aaa','i1','i2') order by relname;\n\"aaa\";10000033;44248;226\n\"i1\";10000033;27422;365\n\"i2\";10000033;27422;365\n\nselect flag, count(*) from aaa group by flag order by flag;\nf;9000661\nt;999339\n\nselect num, count(*) from aaa group by num order by num;\n0;99852\n1;99631\n2;99699\n3;100493\n...\n96;100345\n97;99322\n98;100013\n99;100030\n\nexplain (analyze,verbose,costs,buffers)\nselect * from aaa where num = 1 and flag = true;\n\nBitmap Heap Scan on public.aaa  (cost=12829.83..24729.85 rows=10340 \nwidth=5) (actual time=104.941..112.649 rows=9944 loops=1)\n   Output: num, flag\n   Recheck Cond: (aaa.num = 1)\n   Filter: aaa.flag\n   Heap Blocks: exact=8922\n   Buffers: shared hit=11932\n   ->  BitmapAnd  (cost=12829.83..12829.83 rows=10340 width=0) (actual \ntime=102.926..102.926 rows=0 loops=1)\n         Buffers: shared hit=3010\n         ->  Bitmap Index Scan on i1  (cost=0.00..1201.44 rows=103334 \nwidth=0) (actual time=15.459..15.459 rows=99631 loops=1)\n               Index Cond: (aaa.num = 1)\n               Buffers: shared hit=276\n         ->  Bitmap Index Scan on i2  (cost=0.00..11622.97 rows=1000671 \nwidth=0) (actual time=76.906..76.906 rows=999339 loops=1)\n               Index Cond: (aaa.flag = true)\n               Buffers: shared hit=2734\nPlanning time: 0.110 ms\nExecution time: 113.272 ms\n\nIndex Scan using i1 on public.aaa  (cost=0.44..66621.56 rows=10340 \nwidth=5) (actual time=0.027..47.075 rows=9944 loops=1)\n   Output: num, flag\n   Index Cond: (aaa.num = 1)\n   Filter: aaa.flag\n   Rows Removed by Filter: 89687\n   Buffers: shared hit=39949\nPlanning time: 0.104 ms\nExecution time: 47.351 ms\n\n>\n> I tried creating multiple-column statistics using the v10 facility for\n> that:\n>\n> regression=# create statistics s1 on num, flag from aaa;\n> CREATE STATISTICS\n> regression=# analyze aaa;\n> ANALYZE\n>\n> but that changed the estimate not at all, which surprised me because\n> dependency statistics are supposed to fix exactly this type of problem.\n> I suspect there may be something in the extended-stats code that causes it\n> not to work right for boolean columns --- this wouldn't be excessively\n> surprising because of the way the planner messes around with converting\n> \"flag = true\" to just \"flag\" and sometimes back again. 
But I've not\n> looked closer yet.\n>\n> \t\t\tregards, tom lane\n> .\n>\n\n\n", "msg_date": "Sun, 3 Dec 2017 23:22:52 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On 03/12/2017 03:27, Jeff Janes wrote:\n> Due to that, when I disable bitmapscans and seqscans, I start getting \n> slow index scans on the wrong index, i2 rather than i1.  I don't know \n> why he doesn't see that in his example.\nWhen I increase effective_cache_size to 1024MB, I start getting the plan \nwith the slower index i2, too.\n\n*Bitmap Heap Scan* on public.aaa  (cost=12600.90..*23688**.70* rows=9488 \nwidth=5) (actual time=107.529..*115.902* rows=9976 loops=1)\n   ->  BitmapAnd  (cost=12600.90..12600.90 rows=9488 width=0) (actual \ntime=105.133..105.133 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..1116.43 rows=96000 \nwidth=0) (actual time=16.313..16.313 rows=100508 loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..11479.47 rows=988338 \nwidth=0) (actual time=77.950..77.950 rows=1000200 loops=1)\n\n*Index Scan* using i2 on public.aaa  (cost=0.44..*48227.31* rows=9488 \nwidth=5) (actual time=0.020..*285.695* rows=9976 loops=1)\n\n*Seq Scan* on public.aaa  (cost=0.00..*169248.54* rows=9488 width=5) \n(actual time=0.024..*966.469* rows=9976 loops=1)\n\nThis way the estimates and the actual time get more sense. But then \nthere's the question - maybe it's i1 runs too fast, and is estimated \nincorrectly? Why that happens?\n\nHere are the complete plans with the two different kinds of index scans \nonce again:\n\nIndex Scan using i1 on public.aaa  (cost=0.44..66621.56 rows=10340 \nwidth=5) (actual time=0.027..47.075 rows=9944 loops=1)\n   Output: num, flag\n   Index Cond: (aaa.num = 1)\n   Filter: aaa.flag\n   Rows Removed by Filter: 89687\n   Buffers: shared hit=39949\nPlanning time: 0.104 ms\nExecution time: 47.351 ms\n\nIndex Scan using i2 on public.aaa  (cost=0.44..48227.31 rows=9488 \nwidth=5) (actual time=0.020..285.695 rows=9976 loops=1)\n   Output: num, flag\n   Index Cond: (aaa.flag = true)\n   Filter: (aaa.flag AND (aaa.num = 1))\n   Rows Removed by Filter: 990224\n   Buffers: shared hit=46984\nPlanning time: 0.098 ms\nExecution time: 286.081 ms\n\n\n// The test DB was populated with: create table aaa as select \nfloor(random()*100)::int num, (random()*10 < 1)::bool flag from \ngenerate_series(1, 10000000) id;\n\nRegards,\nVitaliy\n\n\n\n\n\n\n\nOn 03/12/2017 03:27, Jeff Janes wrote:\n\n\n\n\nDue to that, when I disable\n bitmapscans and seqscans, I start getting slow index scans\n on the wrong index, i2 rather than i1.  
I don't know why he\n doesn't see that in his example.\n\n\n\n When I increase effective_cache_size to 1024MB, I start getting the\n plan with the slower index i2, too.\n\nBitmap Heap Scan on public.aaa  (cost=12600.90..23688.70\n rows=9488 width=5) (actual time=107.529..115.902 rows=9976\n loops=1)\n   ->  BitmapAnd  (cost=12600.90..12600.90 rows=9488 width=0)\n (actual time=105.133..105.133 rows=0 loops=1)\n         ->  Bitmap Index Scan on i1  (cost=0.00..1116.43\n rows=96000 width=0) (actual time=16.313..16.313 rows=100508 loops=1)\n         ->  Bitmap Index Scan on i2  (cost=0.00..11479.47\n rows=988338 width=0) (actual time=77.950..77.950 rows=1000200\n loops=1)\n\nIndex Scan using i2 on public.aaa  (cost=0.44..48227.31\n rows=9488 width=5) (actual time=0.020..285.695 rows=9976\n loops=1)\n\nSeq Scan on public.aaa  (cost=0.00..169248.54\n rows=9488 width=5) (actual time=0.024..966.469 rows=9976\n loops=1)\n\n This way the estimates and the actual time get more sense. But then\n there's the question - maybe it's i1 runs too fast, and is estimated\n incorrectly? Why that happens?\n\n Here are the complete plans with the two different kinds of index\n scans once again:\n\n Index Scan using i1 on public.aaa  (cost=0.44..66621.56 rows=10340\n width=5) (actual time=0.027..47.075 rows=9944 loops=1)\n   Output: num, flag\n   Index Cond: (aaa.num = 1)\n   Filter: aaa.flag\n   Rows Removed by Filter: 89687\n   Buffers: shared hit=39949\n Planning time: 0.104 ms\n Execution time: 47.351 ms\n\n Index Scan using i2 on public.aaa  (cost=0.44..48227.31 rows=9488\n width=5) (actual time=0.020..285.695 rows=9976 loops=1)\n   Output: num, flag\n   Index Cond: (aaa.flag = true)\n   Filter: (aaa.flag AND (aaa.num = 1))\n   Rows Removed by Filter: 990224\n   Buffers: shared hit=46984\n Planning time: 0.098 ms\n Execution time: 286.081 ms\n\n\n // The test DB was populated with: create table aaa as select\n floor(random()*100)::int num, (random()*10 < 1)::bool flag from\n generate_series(1, 10000000) id;\n\n Regards,\n Vitaliy", "msg_date": "Mon, 4 Dec 2017 00:11:47 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "I wrote:\n> I tried creating multiple-column statistics using the v10 facility for\n> that:\n> regression=# create statistics s1 on num, flag from aaa;\n> CREATE STATISTICS\n> regression=# analyze aaa;\n> ANALYZE\n> but that changed the estimate not at all, which surprised me because\n> dependency statistics are supposed to fix exactly this type of problem.\n> I suspect there may be something in the extended-stats code that causes it\n> not to work right for boolean columns --- this wouldn't be excessively\n> surprising because of the way the planner messes around with converting\n> \"flag = true\" to just \"flag\" and sometimes back again. But I've not\n> looked closer yet.\n\nAfter looking, I found that indeed dependency_is_compatible_clause()\nrejects expressions like \"flag\" or \"NOT flag\", which it needn't since\nthose are fully equivalent to \"flag = true\" or \"flag = false\"\nrespectively. Moreover there's nothing else in\ndependencies_clauselist_selectivity that depends on the exact form of\nthe clause under test, only on the semantics that it's an equality\ncondition on some Var. 
Hence I propose the attached patch, which\nfixes the rowcount estimate for the example discussed in this thread:\n\ncreate table aaa as select (id%100)::int num, (id%10=1)::bool flag from \ngenerate_series(1, 10000000) id;\ncreate index i1 on aaa (num);\ncreate index i2 on aaa (flag);\ncreate statistics s1 on num, flag from aaa;\nanalyze aaa;\n\nexplain analyze select count(*) from aaa where num = 1 and flag = true;\n\nWithout patch:\n\n Aggregate (cost=43236.73..43236.74 rows=1 width=8) (actual time=349.365..349.3\n65 rows=1 loops=1)\n -> Bitmap Heap Scan on aaa (cost=20086.40..43212.94 rows=9514 width=0) (act\nual time=101.308..337.985 rows=100000 loops=1)\n Recheck Cond: (num = 1)\n Filter: flag\n Heap Blocks: exact=44248\n -> BitmapAnd (cost=20086.40..20086.40 rows=9514 width=0) (actual time\n=92.214..92.214 rows=0 loops=1)\n -> Bitmap Index Scan on i1 (cost=0.00..1776.43 rows=96000 width\n=0) (actual time=17.236..17.236 rows=100000 loops=1)\n Index Cond: (num = 1)\n -> Bitmap Index Scan on i2 (cost=0.00..18304.96 rows=991003 wid\nth=0) (actual time=72.168..72.168 rows=1000000 loops=1)\n Index Cond: (flag = true)\n Planning time: 0.254 ms\n Execution time: 350.796 ms\n\nWith patch:\n\n Aggregate (cost=43496.19..43496.20 rows=1 width=8) (actual time=359.195..359.1\n95 rows=1 loops=1)\n -> Bitmap Heap Scan on aaa (cost=20129.64..43256.19 rows=96000 width=0) (ac\ntual time=99.750..347.353 rows=100000 loops=1)\n Recheck Cond: (num = 1)\n Filter: flag\n Heap Blocks: exact=44248\n -> BitmapAnd (cost=20129.64..20129.64 rows=9514 width=0) (actual time\n=90.671..90.671 rows=0 loops=1)\n -> Bitmap Index Scan on i1 (cost=0.00..1776.43 rows=96000 width\n=0) (actual time=16.946..16.946 rows=100000 loops=1)\n Index Cond: (num = 1)\n -> Bitmap Index Scan on i2 (cost=0.00..18304.96 rows=991003 wid\nth=0) (actual time=70.898..70.898 rows=1000000 loops=1)\n Index Cond: (flag = true)\n Planning time: 0.218 ms\n Execution time: 360.608 ms\n\nThat's the right overall rowcount estimate for the scan, given the stats\nit's working from. There's apparently still something wrong with bitmap\ncosting, since it's still estimating this as cheaper than the single-index\ncase --- noting the bogus rowcount estimate for the BitmapAnd, I suspect\nthat bitmap costing is taking shortcuts rather than using\nclauselist_selectivity to estimate the overall selectivity of the bitmap\nconditions. But whatever is causing that, it's independent of this\ndeficiency.\n\nIn addition to the bugfix proper, I improved some comments, got rid of\na NumRelids() test that's redundant with the preceding bms_membership()\ntest, and fixed dependencies_clauselist_selectivity so that\nestimatedclauses actually is a pure output argument as stated by its\nAPI contract.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 03 Dec 2017 18:08:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" 
}, { "msg_contents": "On Sat, Dec 2, 2017 at 8:04 PM, Justin Pryzby <[email protected]> wrote:\n\n> On Sat, Dec 02, 2017 at 05:27:51PM -0800, Jeff Janes wrote:\n> > I think the non-extended stats code also has trouble with booleans.\n> > pg_stats gives me a correlation of 0.8 or higher for the flag column.\n>\n> It's not due to the boolean though; you see the same thing if you do:\n> CREATE INDEX aaa_f ON aaa((flag::text));\n> ANALYZE aaa;\n> correlation | 0.81193\n>\n> or:\n> ALTER TABLE aaa ADD flag2 int; UPDATE aaa SET flag2= flag::int\n> correlation | 0.81193\n>\n> I think it's caused by having so few (2) values to correlate.\n>\n> most_common_vals | {f,t}\n> most_common_freqs | {0.9014,0.0986}\n> correlation | 0.822792\n>\n> It thinks there's somewhat-high correlation since it gets a list of x and y\n> values (integer positions by logical and physical sort order) and 90% of\n> the x\n> list (logical value) are the same value ('t'), and the CTIDs are in order\n> on\n> the new index, so 90% of the values are 100% correlated.\n>\n\nBut there is no index involved (except in the case of the functional\nindex). The correlation of table columns to physical order of the table\ndoesn't depend on the existence of an index, or the physical order within\nan index.\n\nBut I do see that ties within the logical order of the column values are\nbroken to agree with the physical order. That is wrong, right? Is there\nany argument that this is desirable?\n\nIt looks like it could be fixed with a few extra double calcs per distinct\nvalue. Considering we already sorted the sample values using SQL-callable\ncollation dependent comparators, I doubt a few C-level double calcs is\ngoing to be meaningful.\n\nCheers,\n\nJeff\n\nOn Sat, Dec 2, 2017 at 8:04 PM, Justin Pryzby <[email protected]> wrote:On Sat, Dec 02, 2017 at 05:27:51PM -0800, Jeff Janes wrote:\n> I think the non-extended stats code also has trouble with booleans.\n> pg_stats gives me a correlation  of 0.8 or higher for the flag column.\n\nIt's not due to the boolean though; you see the same thing if you do:\nCREATE INDEX aaa_f ON aaa((flag::text));\nANALYZE aaa;\ncorrelation | 0.81193\n\nor:\nALTER TABLE aaa ADD flag2 int; UPDATE aaa SET flag2= flag::int\ncorrelation | 0.81193\n\nI think it's caused by having so few (2) values to correlate.\n\nmost_common_vals       | {f,t}\nmost_common_freqs      | {0.9014,0.0986}\ncorrelation            | 0.822792\n\nIt thinks there's somewhat-high correlation since it gets a list of x and y\nvalues (integer positions by logical and physical sort order) and 90% of the x\nlist (logical value) are the same value ('t'), and the CTIDs are in order on\nthe new index, so 90% of the values are 100% correlated.But there is no index involved (except in the case of the functional index).  The correlation of table columns to physical order of the table doesn't depend on the existence of an index, or the physical order within an index.But I do see that ties within the logical order of the column values are broken to agree with the physical order.  That is wrong, right?  Is there any argument that this is desirable?It looks like it could be fixed with a few extra double calcs per distinct value.  Considering we already sorted the sample values using SQL-callable collation dependent comparators, I doubt a few C-level double calcs is going to be meaningful. 
Cheers,Jeff", "msg_date": "Sun, 3 Dec 2017 15:21:56 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - boolean correlation" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Sat, Dec 2, 2017 at 8:04 PM, Justin Pryzby <[email protected]> wrote:\n>> It thinks there's somewhat-high correlation since it gets a list of x\n>> and y values (integer positions by logical and physical sort order) and\n>> 90% of the x list (logical value) are the same value ('t'), and the\n>> CTIDs are in order on the new index, so 90% of the values are 100%\n>> correlated.\n\n> But there is no index involved (except in the case of the functional\n> index). The correlation of table columns to physical order of the table\n> doesn't depend on the existence of an index, or the physical order within\n> an index.\n\n> But I do see that ties within the logical order of the column values are\n> broken to agree with the physical order. That is wrong, right? Is there\n> any argument that this is desirable?\n\nUh ... what do you propose doing instead? We'd have to do something with\nties, and it's not so obvious this way is wrong.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 03 Dec 2017 18:31:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - boolean correlation" }, { "msg_contents": "On Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:\n\nJeff Janes <[email protected]> writes:\n> On Sat, Dec 2, 2017 at 8:04 PM, Justin Pryzby <[email protected]>\nwrote:\n>> It thinks there's somewhat-high correlation since it gets a list of x\n>> and y values (integer positions by logical and physical sort order) and\n>> 90% of the x list (logical value) are the same value ('t'), and the\n>> CTIDs are in order on the new index, so 90% of the values are 100%\n>> correlated.\n\n> But there is no index involved (except in the case of the functional\n> index). The correlation of table columns to physical order of the table\n> doesn't depend on the existence of an index, or the physical order within\n> an index.\n\n> But I do see that ties within the logical order of the column values are\n> broken to agree with the physical order. That is wrong, right? Is there\n> any argument that this is desirable?\n\nUh ... what do you propose doing instead? We'd have to do something with\nties, and it's not so obvious this way is wrong.\n\n\nLet them be tied. If there are 10 distinct values, number the values 0 to\n9, and all rows of a given distinct values get the same number for the\nlogical order axis.\n\nCalling the correlation 0.8 when it is really 0.0 seems obviously wrong to\nme. Although if we switched btree to store duplicate values with tid as a\ntie breaker, then maybe it wouldn't be as obviously wrong.\n\nCheers,\n\nJeff\n\nOn Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> On Sat, Dec 2, 2017 at 8:04 PM, Justin Pryzby <[email protected]> wrote:\n>> It thinks there's somewhat-high correlation since it gets a list of x\n>> and y values (integer positions by logical and physical sort order) and\n>> 90% of the x list (logical value) are the same value ('t'), and the\n>> CTIDs are in order on the new index, so 90% of the values are 100%\n>> correlated.\n\n> But there is no index involved (except in the case of the functional\n> index).  
The correlation of table columns to physical order of the table\n> doesn't depend on the existence of an index, or the physical order within\n> an index.\n\n> But I do see that ties within the logical order of the column values are\n> broken to agree with the physical order.  That is wrong, right?  Is there\n> any argument that this is desirable?\n\nUh ... what do you propose doing instead?  We'd have to do something with\nties, and it's not so obvious this way is wrong.Let them be tied.  If there are 10 distinct values, number the values 0 to 9, and all rows of a given distinct values get the same number for the logical order axis.Calling the correlation 0.8 when it is really 0.0 seems obviously wrong to me.  Although if we switched btree to store duplicate values with tid as a tie breaker, then maybe it wouldn't be as obviously wrong.Cheers,Jeff", "msg_date": "Sun, 3 Dec 2017 16:27:23 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - boolean correlation" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:\n>> Jeff Janes <[email protected]> writes:\n>>> But I do see that ties within the logical order of the column values are\n>>> broken to agree with the physical order. That is wrong, right? Is there\n>>> any argument that this is desirable?\n\n>> Uh ... what do you propose doing instead? We'd have to do something with\n>> ties, and it's not so obvious this way is wrong.\n\n> Let them be tied. If there are 10 distinct values, number the values 0 to\n> 9, and all rows of a given distinct values get the same number for the\n> logical order axis.\n> Calling the correlation 0.8 when it is really 0.0 seems obviously wrong to\n> me. Although if we switched btree to store duplicate values with tid as a\n> tie breaker, then maybe it wouldn't be as obviously wrong.\n\nI thought some more about this. What we really want the correlation stat\nto do is help us estimate how randomly an index-ordered scan will access\nthe heap. If the values we've sampled are all unequal then there's no\nparticular issue. However, if we have some group of equal values, we\ndo not really know what order an indexscan will visit them in. The\nexisting correlation calculation is making the *most optimistic possible*\nassumption, that such a group will be visited exactly in heap order ---\nand that assumption isn't too defensible. IIRC, a freshly built b-tree\nwill behave that way, because the initial sort of a btree breaks ties\nusing heap TIDs; but we don't maintain that property during later\ninsertions. In any case, given that we do this calculation without regard\nto any specific index, we can't reasonably expect to model exactly what\nthe index will do. It would be best to adopt some middle-of-the-road\nassumption about what the heap visitation order will be for a set of\nduplicate values: not exactly heap order, but I think we should not use\na worst-case assumption either, since the btree may retain some amount\nof its initial ordering.\n\nBTW, I disagree that \"correlation = zero\" is the right answer for this\nparticular example. 
If the btree is freshly built, then an index-order\nscan would visit all the heap pages in sequence to fetch \"f\" rows, and\nthen visit them all in sequence again to fetch \"t\" rows, which is a whole\nlot better than the purely random access that zero correlation implies.\nSo I think 0.8 or so is actually a perfectly reasonable answer when the\nindex is fresh. The trouble is just that it'd behoove us to derate that\nanswer somewhat for the probability that the index isn't fresh.\n\nMy first thought for a concrete fix was to use the mean position of\na group of duplicates for the purposes of the correlation calculation,\nbut on reflection that's clearly wrong. We do have an idea, from the\ndata we have, whether the duplicates are close together in the heap\nor spread all over. Using only mean position would fail to distinguish\nthose cases, but really we'd better penalize the spread-all-over case.\nI'm not sure how to do that.\n\nOr we could leave this calculation alone and try to move towards keeping\nequal values in TID order in btrees. Not sure about the downsides of\nthat, though.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 05 Dec 2017 13:50:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - boolean correlation" }, { "msg_contents": "On 04/12/2017 00:11, Vitaliy Garnashevich wrote:\n> On 03/12/2017 03:27, Jeff Janes wrote:\n>> Due to that, when I disable bitmapscans and seqscans, I start getting \n>> slow index scans on the wrong index, i2 rather than i1.  I don't know \n>> why he doesn't see that in his example.\n> When I increase effective_cache_size to 1024MB, I start getting the \n> plan with the slower index i2, too.\n>\n> *Bitmap Heap Scan* on public.aaa  (cost=12600.90..*23688**.70* \n> rows=9488 width=5) (actual time=107.529..*115.902* rows=9976 loops=1)\n>   ->  BitmapAnd  (cost=12600.90..12600.90 rows=9488 width=0) (actual \n> time=105.133..105.133 rows=0 loops=1)\n>         ->  Bitmap Index Scan on i1  (cost=0.00..1116.43 rows=96000 \n> width=0) (actual time=16.313..16.313 rows=100508 loops=1)\n>         ->  Bitmap Index Scan on i2  (cost=0.00..11479.47 rows=988338 \n> width=0) (actual time=77.950..77.950 rows=1000200 loops=1)\n>\n> *Index Scan* using i2 on public.aaa  (cost=0.44..*48227.31* rows=9488 \n> width=5) (actual time=0.020..*285.695* rows=9976 loops=1)\n>\n> *Seq Scan* on public.aaa  (cost=0.00..*169248.54* rows=9488 width=5) \n> (actual time=0.024..*966.469* rows=9976 loops=1)\n>\n> This way the estimates and the actual time get more sense. But then \n> there's the question - maybe it's i1 runs too fast, and is estimated \n> incorrectly? Why that happens?\n\nI've tried to create a better test case:\n- Increase shared_buffers and effective_cache_size to fit whole \ndatabase, including indexes.\n- Use random(), to avoid correlation between the filtered values.\n- Make both columns of integer type, to avoid special cases with boolean \n(like the one happened with CREATE STATISTICS).\n- Flush OS disk cache and then try running the query several times, to \nget both cold-cache results and all-in-ram results.\n- There are several tests, with different frequency of the selected \nvalues in the two columns: [1/10, 1/10], [1/50, 1/10], [1/100, 1/10].\n- There is a split of cost by contribution of each of its components: \nseq_page_cost, random_page_cost, cpu_tuple_cost, cpu_index_tuple_cost, \ncpu_operator_cost. 
The EXPLAIN is run for each component, every time \nwith only one of the components set to non-zero.\n- The test was run on a Digitalocean VM: Ubuntu 16.04.3 LTS (GNU/Linux \n4.4.0-101-generic x86_64), 2 GB RAM,  2 core CPU, SSD; PostgreSQL 9.5.10.\n\n\nshared_buffers = 1024MB\neffective_cache_size = 1024MB\nwork_mem = 100MB\n\ncreate table aaa as select floor(random()*10)::int num, (random()*10 < \n1)::int flag from generate_series(1, 10000000) id;\ncreate table aaa as select floor(random()*50)::int num, (random()*10 < \n1)::int flag from generate_series(1, 10000000) id;\ncreate table aaa as select floor(random()*100)::int num, (random()*10 < \n1)::int flag from generate_series(1, 10000000) id;\n\ncreate index i1 on aaa  (num);\ncreate index i2 on aaa  (flag);\n\nset enable_bitmapscan = on; set enable_indexscan = off; set \nenable_seqscan = off;\nset enable_bitmapscan = off; set enable_indexscan = on; set \nenable_seqscan = off;\nset enable_bitmapscan = off; set enable_indexscan = off; set \nenable_seqscan = on;\n\nset seq_page_cost = 1.0; set random_page_cost = 1.0; set cpu_tuple_cost \n= 0.01; set cpu_index_tuple_cost = 0.005; set cpu_operator_cost = 0.0025;\n\nexplain (analyze,verbose,costs,buffers) select * from aaa where num = 1 \nand flag = 1;\n\n\nBitmap Heap Scan on public.aaa  (cost=20687.87..66456.91 rows=101403 \nwidth=8) (actual time=345.349..6061.834 rows=99600 loops=1)  read=45091\nBitmap Heap Scan on public.aaa  (cost=20687.87..66456.91 rows=101403 \nwidth=8) (actual time=593.915..991.403 rows=99600 loops=1)   hit=45091\nBitmap Heap Scan on public.aaa  (cost=20687.87..66456.91 rows=101403 \nwidth=8) (actual time=255.273..355.694 rows=99600 loops=1)   hit=45091\nBitmap Heap Scan on public.aaa  (cost=20687.87..66456.91 rows=101403 \nwidth=8) (actual time=284.768..385.505 rows=99600 loops=1)   hit=45091\n? +  ? +  1014.03 +  ? +  5595.52 = ?\n\nBitmap Heap Scan on public.aaa  (cost=12081.43..28452.09 rows=19644 \nwidth=8) (actual time=238.566..3115.445 rows=20114 loops=1)   read=19425\nBitmap Heap Scan on public.aaa  (cost=12081.43..28452.09 rows=19644 \nwidth=8) (actual time=314.590..382.207 rows=20114 loops=1)    hit=19425\nBitmap Heap Scan on public.aaa  (cost=12081.43..28452.09 rows=19644 \nwidth=8) (actual time=265.899..311.064 rows=20114 loops=1)    hit=19425\nBitmap Heap Scan on public.aaa  (cost=12081.43..28452.09 rows=19644 \nwidth=8) (actual time=209.470..237.697 rows=20114 loops=1)    hit=19425\n9689.92 +  ? +  196.44 +  ? +  ? = ?\n\nBitmap Heap Scan on public.aaa  (cost=11273.15..20482.50 rows=10090 \nwidth=8) (actual time=284.381..2019.717 rows=10114 loops=1)   read=12059\nBitmap Heap Scan on public.aaa  (cost=11273.15..20482.50 rows=10090 \nwidth=8) (actual time=153.445..180.770 rows=10114 loops=1)    hit=12059\nBitmap Heap Scan on public.aaa  (cost=11273.15..20482.50 rows=10090 \nwidth=8) (actual time=146.275..159.446 rows=10114 loops=1)    hit=12059\nBitmap Heap Scan on public.aaa  (cost=11273.15..20482.50 rows=10090 \nwidth=8) (actual time=140.973..153.998 rows=10114 loops=1)    hit=12059\n4098.28 +  ? +  100.90 +  ? +  ? 
= ?\n\n\nSeq Scan on public.aaa  (cost=0.00..194248.49 rows=101403 width=8) \n(actual time=0.126..2056.913 rows=99600 loops=1)               read=44248\nSeq Scan on public.aaa  (cost=0.00..194248.49 rows=101403 width=8) \n(actual time=0.045..1595.377 rows=99600 loops=1)               hit=32 \nread=44216\nSeq Scan on public.aaa  (cost=0.00..194248.49 rows=101403 width=8) \n(actual time=0.066..1392.700 rows=99600 loops=1)               hit=64 \nread=44184\nSeq Scan on public.aaa  (cost=0.00..194248.49 rows=101403 width=8) \n(actual time=0.069..1378.574 rows=99600 loops=1)               hit=96 \nread=44152\n44248.00 +  0.00 +  100000.33 +  0.00 +  50000.17 = 194248.5\n\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=19644 width=8) \n(actual time=0.646..1801.794 rows=20114 loops=1)                read=44248\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=19644 width=8) \n(actual time=0.385..1518.613 rows=20114 loops=1)                hit=32 \nread=44216\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=19644 width=8) \n(actual time=0.346..1369.021 rows=20114 loops=1)                hit=64 \nread=44184\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=19644 width=8) \n(actual time=0.597..1792.959 rows=20114 loops=1)                hit=96 \nread=44152\n44248.00 +  0.00 +  99999.85 +  0.00 +  49999.93 = 194247.78\n\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=10090 width=8) \n(actual time=0.700..2194.195 rows=10114 loops=1)                read=44248\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=10090 width=8) \n(actual time=0.145..1401.274 rows=10114 loops=1)                hit=32 \nread=44216\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=10090 width=8) \n(actual time=0.185..1602.002 rows=10114 loops=1)                hit=64 \nread=44184\nSeq Scan on public.aaa  (cost=0.00..194247.77 rows=10090 width=8) \n(actual time=0.184..1353.162 rows=10114 loops=1)                hit=96 \nread=44152\n44248.00 +  0.00 +  99999.85 +  0.00 +  49999.93 = 194247.78\n\n\nIndex Scan using i1 on public.aaa  (cost=0.43..67358.15 rows=101403 \nwidth=8) (actual time=0.400..1325.638 rows=99600 loops=1)     read=46983\nIndex Scan using i1 on public.aaa  (cost=0.43..67358.15 rows=101403 \nwidth=8) (actual time=0.020..479.713 rows=99600 loops=1)      hit=46983\nIndex Scan using i1 on public.aaa  (cost=0.43..67358.15 rows=101403 \nwidth=8) (actual time=0.024..642.947 rows=99600 loops=1)      hit=46983\nIndex Scan using i1 on public.aaa  (cost=0.43..67358.15 rows=101403 \nwidth=8) (actual time=0.038..756.045 rows=99600 loops=1)      hit=46983\n45.95 +  46638.42 +  10336.67 +  5168.34 +  5168.77 = 67358.15\nRows Removed by Filter: 900156\n\nIndex Scan using i1 on public.aaa  (cost=0.43..48809.34 rows=19644 \nwidth=8) (actual time=0.600..1059.269 rows=20114 loops=1)      read=44373\nIndex Scan using i1 on public.aaa  (cost=0.43..48809.34 rows=19644 \nwidth=8) (actual time=0.071..208.940 rows=20114 loops=1)       hit=44373\nIndex Scan using i1 on public.aaa  (cost=0.43..48809.34 rows=19644 \nwidth=8) (actual time=0.044..124.437 rows=20114 loops=1)       hit=44373\nIndex Scan using i1 on public.aaa  (cost=0.43..48809.34 rows=19644 \nwidth=8) (actual time=0.044..127.814 rows=20114 loops=1)       hit=44373\n0.23 +  44788.68 +  2010.00 +  1005.00 +  1005.43 = 48809.34\nRows Removed by Filter: 179792\n\nIndex Scan using i1 on public.aaa  (cost=0.43..46544.80 rows=10090 \nwidth=8) (actual time=0.647..1510.482 rows=10114 loops=1)      read=39928\nIndex Scan using i1 on public.aaa  (cost=0.43..46544.80 
rows=10090 \nwidth=8) (actual time=0.035..141.847 rows=10114 loops=1)       hit=39928\nIndex Scan using i1 on public.aaa  (cost=0.43..46544.80 rows=10090 \nwidth=8) (actual time=0.032..86.716 rows=10114 loops=1)        hit=39928\nIndex Scan using i1 on public.aaa  (cost=0.43..46544.80 rows=10090 \nwidth=8) (actual time=0.032..79.492 rows=10114 loops=1)        hit=39928\n0.01 +  44524.36 +  1010.00 +  505.00 +  505.44 = 46544.81\nRows Removed by Filter: 89762\n\n\nIndex Scan using i2 on public.aaa  (cost=0.43..39166.34 rows=101403 \nwidth=8) (actual time=1.337..1543.611 rows=99600 loops=1)     read=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39166.34 rows=101403 \nwidth=8) (actual time=0.027..410.623 rows=99600 loops=1)      hit=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39166.34 rows=101403 \nwidth=8) (actual time=0.025..377.529 rows=99600 loops=1)      hit=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39166.34 rows=101403 \nwidth=8) (actual time=0.025..377.554 rows=99600 loops=1)      hit=46985\n2979.08 +  16566.83 +  9810.00 +  4905.00 +  4905.44 = 39166.35\nRows Removed by Filter: 900835\n\nIndex Scan using i2 on public.aaa  (cost=0.43..39247.83 rows=19644 \nwidth=8) (actual time=0.783..2236.765 rows=20114 loops=1)      read=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39247.83 rows=19644 \nwidth=8) (actual time=0.198..777.279 rows=20114 loops=1)       hit=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39247.83 rows=19644 \nwidth=8) (actual time=0.196..437.177 rows=20114 loops=1)       hit=46985\nIndex Scan using i2 on public.aaa  (cost=0.43..39247.83 rows=19644 \nwidth=8) (actual time=0.075..346.481 rows=20114 loops=1)       hit=46985\n2949.05 +  16751.68 +  9773.33 +  4886.66 +  4887.10 = 39247.82\nRows Removed by Filter: 980350\n\nIndex Scan using i2 on public.aaa  (cost=0.43..40167.34 rows=10090 \nwidth=8) (actual time=0.619..1350.886 rows=10114 loops=1)      read=46987\nIndex Scan using i2 on public.aaa  (cost=0.43..40167.34 rows=10090 \nwidth=8) (actual time=0.069..448.975 rows=10114 loops=1)       hit=46987\nIndex Scan using i2 on public.aaa  (cost=0.43..40167.34 rows=10090 \nwidth=8) (actual time=0.048..383.066 rows=10114 loops=1)       hit=46987\nIndex Scan using i2 on public.aaa  (cost=0.43..40167.34 rows=10090 \nwidth=8) (actual time=0.050..387.874 rows=10114 loops=1)       hit=46987\n2974.39 +  17212.52 +  9990.00 +  4995.00 +  4995.44 = 40167.35\nRows Removed by Filter: 991389\n\n\nWhat seems odd to me is that in different kinds of tests (with different \nfrequency of column values):\n\ni1 Rows Removed by Filter = 900156, 179792, 89762 (decreased a lot)\ni1 buffers = 46983, 44373, 39928 (decreased, but not a lot)\ni1 best case time = 756.045, 127.814, 79.492 (decreased a lot, as well \nas probably average case too)\ni1 cost estimates = 67358.15, 48809.34, 46544.80 (did not decrease a lot)\n\ni2 Rows Removed by Filter = 900835, 980350, 991389\ni2 buffers = 46985, 46985, 46987\ni2 best case time = 377.554, 346.481, 387.874\ni2 cost estimates = 39166.34, 39247.83, 40167.34\n\nIt's odd that increase in actual execution time for \"i1\" was not \nreflected enough in cost estimates. The cost even didn't go below \"i2\" \ncosts.\n\nDetails about the test are attached.\n\nRegards,\nVitaliy", "msg_date": "Wed, 6 Dec 2017 09:06:52 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" 
}, { "msg_contents": "On Tue, Dec 05, 2017 at 01:50:11PM -0500, Tom Lane wrote:\n> Jeff Janes <[email protected]> writes:\n> > On Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:\n> >> Jeff Janes <[email protected]> writes:\n> >>> But I do see that ties within the logical order of the column values are\n> >>> broken to agree with the physical order. That is wrong, right? Is there\n> >>> any argument that this is desirable?\n> \n> >> Uh ... what do you propose doing instead? We'd have to do something with\n> >> ties, and it's not so obvious this way is wrong.\n> \n> > Let them be tied.\n...\n> I thought some more about this. What we really want the correlation stat\n> to do is help us estimate how randomly an index-ordered scan will access\n> the heap. If the values we've sampled are all unequal then there's no\n> particular issue. However, if we have some group of equal values, we\n> do not really know what order an indexscan will visit them in. The\n> existing correlation calculation is making the *most optimistic possible*\n> assumption, that such a group will be visited exactly in heap order ---\n> and that assumption isn't too defensible.\n\nI'm interested in discusstion regarding bitmap cost, since it would have helped\nour case discussed here ~18 months ago:\nhttps://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#[email protected]\n\n...but remember: in Vitaliy's case (as opposed to mine), the index scan is\n*faster* but being estimated at higher cost than bitmap (I have to keep\nreminding myself). So the rest of this discussion is about the\noverly-optimistic cost estimate of index scans, which moves in the opposite\ndirection for this reported problem. For the test cases I looked at, index\nscans were used when RPC=1 and redundant conditions were avoided, so I'm not\nsure if there's any remaining issue (but I haven't looked at the latest cases\nVitaliy sent).\n\n> In any case, given that we do this calculation without regard\n> to any specific index,\n\nOne solution is to compute stats (at least correlation) for all indices, not\njust expr inds. I did that earlier this year while throwing around/out ideas.\nhttps://www.postgresql.org/message-id/20170707234119.GN17566%40telsasoft.com\n\n> We do have an idea, from the data we have, whether the duplicates are close\n> together in the heap or spread all over.\n\nI think you just mean pg_stats.correlation for all values, not just duplicates\n(with the understanding that duplicates might be a large fraction of the\ntuples, and high weight in correlation).\n\nAnother issue I noted in an earlier thread is that as table sizes increase, the\nexisting correlation computation approaches 1 for correlated insertions, (like\n\"append-only\" timestamps clustered around now()), due to ANALYZE sampling a\nfraction of the table, and thereby representing only large-scale correlation,\nand, to an increasing degree, failing to represent small-scale variations\nbetween adjacent index TIDs, which has real cost (and for which the mitigation\nby cache effects probably decreases WRT table size, too). 
I think any solution\nneeds to handle this somehow.\n\nGenerated data demonstrating this (I reused this query so it's more complicated\nthan it needs to be):\n\n[pryzbyj@database ~]$ time for sz in 9999{,9{,9{,9{,9}}}} ; do psql postgres -tc \"DROP TABLE IF EXISTS t; CREATE TABLE t(i float, j int); CREATE INDEX ON t(i);INSERT INTO t SELECT i/99999.0+pow(2,(-random())) FROM generate_series(1,$sz) i ORDER BY i; ANALYZE t; SELECT $sz, correlation, most_common_freqs[1] FROM pg_stats WHERE attname='i' AND tablename='t'\"; done\n\n 9999 | 0.187146 | \n 99999 | 0.900629 | \n 999999 | 0.998772 | \n 9999999 | 0.999987 | \n\nTrying to keep it all in my own head: For sufficiently large number of pages,\nbitmap scan should be preferred to idx scan due to reduced random-page-cost\noutweighing its overhead in CPU cost. Probably by penalizing index scans, not\ndiscounting bitmap scans. Conceivably a correlation adjustment can be\nconditionalized or weighted based on index_pages_fetched() ...\n\tx = ln (x/999999);\n\tif (x>1) correlation/=x;\n\nI think one could look at the fraction of duplicated index keys expected to be\nreturned: if we expect to return 1000 tuples, with 200 duplicates from MCV,\ncost_index would multiply correlation by (1 - 200/1000), meaning to use\nsomething closer to max_IO_cost rather than min_IO_cost. I imagine it'd be\npossible to limit to only those MCVs which pass quals - if none pass, then\nthere may be few tuples returned, so apply no correction to (additionally)\npenalize index scan.\n\nIn my tests, at one point I implemented idx_corr_fudge(), returning a value\nlike \"fragmentation\" from pgstatindex (which I'm sure is where I got the phrase\nwhen reporting the problem). That only uses the leaf nodes' \"next\" pointer,\nand not the individual tuples, which probably works if there's a sufficiently\nnumber of repeated keys.\n\nI think that's all for now..\n\nJustin\n\n", "msg_date": "Wed, 6 Dec 2017 15:46:52 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - overestimated correlation and\n cost_index" }, { "msg_contents": "\n> What seems odd to me is that in different kinds of tests (with \n> different frequency of column values):\n>\n> i1 Rows Removed by Filter = 900156, 179792, 89762 (decreased a lot)\n> i1 buffers = 46983, 44373, 39928 (decreased, but not a lot)\n> i1 best case time = 756.045, 127.814, 79.492 (decreased a lot, as well \n> as probably average case too)\n> i1 cost estimates = 67358.15, 48809.34, 46544.80 (did not decrease a lot)\n>\n> i2 Rows Removed by Filter = 900835, 980350, 991389\n> i2 buffers = 46985, 46985, 46987\n> i2 best case time = 377.554, 346.481, 387.874\n> i2 cost estimates = 39166.34, 39247.83, 40167.34\n>\n> It's odd that increase in actual execution time for \"i1\" was not \n> reflected enough in cost estimates. 
The cost even didn't go below \"i2\" \n> costs.\n\nI've added some logging, in order to get the actual numbers which were \nused for estimation.\n\n--drop table if exists aaa;\n--create table aaa as select floor(random()*100)::int num, (random()*10 \n< 1)::int flag from generate_series(1, 10000000) id;\n--analyze aaa;\n\n--set enable_bitmapscan = off; set enable_indexscan = on;  set \nenable_seqscan = off;\n--set seq_page_cost = 1.0; set random_page_cost = 1.0; set \ncpu_tuple_cost = 0.01; set cpu_index_tuple_cost = 0.005; set \ncpu_operator_cost = 0.0025;\n\n--create index i1 on aaa  (num);\n--drop index if exists i2;\n--explain (analyze,verbose,costs,buffers) select * from aaa where num = \n1 and flag = 1;\n\nIndex Scan using i1 on public.aaa  (cost=0.43..46697.59 rows=10641 \nwidth=8) (actual time=0.047..153.521 rows=9826 loops=1)\n   Rows Removed by Filter: 89948\n\n--drop index if exists i1;\n--create index i2 on aaa  (flag);\n--explain (analyze,verbose,costs,buffers) select * from aaa where num = \n1 and flag = 1;\n\nIndex Scan using i2 on public.aaa  (cost=0.43..39583.11 rows=10641 \nwidth=8) (actual time=0.098..351.454 rows=9826 loops=1)\n   Rows Removed by Filter: 990249\n\n\nLOG:  cost_index:\n         seq_page_cost=1.00, random_page_cost=1.00, \ncpu_tuple_cost=0.0100, cpu_index_tuple_cost=0.0050, \ncpu_operator_cost=0.0025, effective_cache_size=131072\n         indexStartupCost=0.43, indexTotalCost=1103.94, \nindexSelectivity=0.01076667, indexCorrelation=0.00208220\n         baserel->tuples=10000033.00, baserel->pages=44248.00, \nbaserel->allvisfrac=0.00000000\n         tuples_fetched=107667.00, pages_fetched=477.00\n         max_IO_cost=44248.0000, min_IO_cost=477.0000, csquared=0.0000\n         qpqual_cost.startup=0.0000, qpqual_cost.per_tuple=0.0025, \ncpu_per_tuple=0.0125\n         spc_seq_page_cost=1.00, spc_random_page_cost=1.00\n         startup_cost=0.43, total_cost=46697.59\n\nLOG:  cost_index:\n         seq_page_cost=1.00, random_page_cost=1.00, \ncpu_tuple_cost=0.0100, cpu_index_tuple_cost=0.0050, \ncpu_operator_cost=0.0025, effective_cache_size=131072\n         indexStartupCost=0.43, indexTotalCost=10123.93, \nindexSelectivity=0.09883333, indexCorrelation=0.82505685\n         baserel->tuples=10000000.00, baserel->pages=44248.00, \nbaserel->allvisfrac=0.00000000\n         tuples_fetched=988333.00, pages_fetched=4374.00\n         max_IO_cost=44248.0000, min_IO_cost=4374.0000, csquared=0.6807\n         qpqual_cost.startup=0.0000, qpqual_cost.per_tuple=0.0025, \ncpu_per_tuple=0.0125\n         spc_seq_page_cost=1.00, spc_random_page_cost=1.00\n         startup_cost=0.43, total_cost=39583.11\n\n\nHere is a break down of the total_cost into components, for i1 query and \nfor i2 query (some rounding was removed from the formula for brevity):\n\npath->path.total_cost =\n   (indexTotalCost + qpqual_cost.startup) +\n   (max_IO_cost + csquared * (min_IO_cost - max_IO_cost)) +\n   (cpu_tuple_cost + qpqual_cost.per_tuple) * (indexSelectivity * \nbaserel->tuples);\npath->path.total_cost =\n   1103.94 + 0.0000 +                                // 1103.94 +\n   44248.0000 + 0.0000 * (477.0000 - 44248.0000) +   // 44248.00 +\n   (0.0100 + 0.0025) * (0.01076667 * 10000033.00)    // 1345.84\n   = 46697.78;                                       // = 46697.78;\n\npath->path.total_cost =\n   (indexTotalCost + qpqual_cost.startup) +\n   (max_IO_cost + csquared * (min_IO_cost - max_IO_cost)) +\n   (cpu_tuple_cost + qpqual_cost.per_tuple) * (indexSelectivity * 
\nbaserel->tuples);\npath->path.total_cost =\n   10123.93 + 0.0000 +                               // 10123.93 +\n   44248.0000 + 0.6807 * (4374.0000 - 44248.0000) +  // 17105.77 +\n   (0.0100 + 0.0025) * (0.09883333 * 10000000.00)    // 12354.17\n   = 39583.86;                                       // = 39583.86;\n\n\nPS.\nThe code used for logging:\n/postgresql-9.3.1/src/backend/optimizer/path/costsize.c : cost_index()\n\n     ereport(LOG,\n             (errmsg(\"cost_index: \\n\"\n                     \"seq_page_cost=%.2f, random_page_cost=%.2f, \ncpu_tuple_cost=%.4f, cpu_index_tuple_cost=%.4f, cpu_operator_cost=%.4f, \neffective_cache_size=%.0f\\n\"\n                     \"indexStartupCost=%.2f, indexTotalCost=%.2f, \nindexSelectivity=%.8f, indexCorrelation=%.8f\\n\"\n                     \"baserel->tuples=%.2f, baserel->pages=%.2f, \nbaserel->allvisfrac=%.8f\\n\"\n                     \"tuples_fetched=%.2f, pages_fetched=%.2f\\n\"\n                     \"max_IO_cost=%.4f, min_IO_cost=%.4f, csquared=%.4f\\n\"\n                     \"qpqual_cost.startup=%.4f, \nqpqual_cost.per_tuple=%.4f, cpu_per_tuple=%.4f\\n\"\n                     \"spc_seq_page_cost=%.2f, spc_random_page_cost=%.2f\\n\"\n                     \"startup_cost=%.2f, total_cost=%.2f\\n\",\n\n                     seq_page_cost, random_page_cost, cpu_tuple_cost, \ncpu_index_tuple_cost, cpu_operator_cost, (double)effective_cache_size,\n                     indexStartupCost, indexTotalCost, indexSelectivity, \nindexCorrelation,\n                     baserel->tuples, (double) baserel->pages, \nbaserel->allvisfrac,\n                     tuples_fetched, pages_fetched,\n                     max_IO_cost, min_IO_cost, csquared,\n                     qpqual_cost.startup, qpqual_cost.per_tuple, \ncpu_per_tuple,\n                     spc_seq_page_cost, spc_random_page_cost,\n                     startup_cost, startup_cost + run_cost\n                     )));\n\nRegards,\nVitaliy\n\n", "msg_date": "Thu, 7 Dec 2017 02:17:13 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Sun, Dec 3, 2017 at 1:15 PM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n> On 02/12/2017 23:17, Jeff Janes wrote:\n>\n> Right, so there is a cpu costing problem (which could only be fixed by\n> hacking postgresql and recompiling it), but it is much smaller of a problem\n> than the IO cost not being accurate due to the high hit rate. Fixing the\n> CPU costing problem is unlikely to make a difference to your real query.\n> If you set the page costs to zero, what happens to your real query?\n>\n> I can't reproduce the exact issue on the real database any more. The query\n> started to use the slow bitmap scan recently, and had been doing so for\n> some time lately, but now it's switched back to use the index scan. The\n> table involved in the query gets modified a lot. It has hundreds of\n> millions of rows. Lots of new rows are appended to it every day, the oldest\n> rows are sometimes removed. The table is analyzed at least daily. It's\n> possible that statistics was updated and that caused the query to run\n> differently. 
But I still would like to understand why that issue happened,\n> and how to properly fix it, in case the issue returns.\n>\n\nWhile your test case displays some cost estimation issues, there is really\nno reason to think that they are the same issues your real query shows.\nParticularly since you said the difference was a factor of 30 in the real\ncase, rather than 3. Any chance you can show EXPLAIN ANALYZE output for\nthe real query, but when it is acting up and when it is not? Something in\nthe plans might stand out to us as the obvious problem. On the other hand,\nmaybe nothing will stand out without having a replicable test case. The\nonly way to know is to try.\n\n\n>\n>\n>\n>> But I doubt that the settings seq_page_cost = random_page_cost = 0.0\n>> should actually be used.\n>>\n>\n> Why not? If your production server really has everything in memory during\n> normal operation, that is the correct course of action. If you ever\n> restart the server, then you could have some unpleasant time getting it\n> back up to speed again, but pg_prewarm could help with that.\n>\n> In the real database, not everything is in memory. There are 200GB+ of\n> RAM, but DB is 500GB+. The table involved in the query itself is 60GB+ of\n> data and 100GB+ of indexes. I'm running the test case in a way where all\n> reads are done from RAM, only to make it easier to reproduce and to avoid\n> unrelated effects.\n>\n\nIs everything that the particular query in questions needs in memory, even\nif other queries need things from disk? Or does the problematic query also\nneed things from disk? If the query does need to read things from disk,\nthe bitmap actually should be faster. Which reinforces the idea that maybe\nthe issue brought up by your test case is not the same as the issue brought\nup by your real case, even if they both point in the same direction.\n\n\n> As far as know, costs in Postgres were designed to be relative to\n> seq_page_cost, which for that reason is usually defined as 1.0. Even if\n> everything would be in RAM, accesses to the pages would still not have zero\n> cost. Setting 0.0 just seems too extreme, as all other non-zero costs would\n> become infinitely bigger.\n>\n\nWhen exploring things, 0.0 certain helps to simplify things. Yes, 0.05 or\nsomething similar might be better for a completely cached database. The\nproblem is that it is very context dependent. Reading a page from\nshared_buffers when there is no contention from other processes for the\nsame page is probably less than 0.01. If it is not in shared_buffers but\nis in effective_cache_size, it is probably a few multiples of 0.01. If\nthere is contention either for that specific page, or for available buffers\ninto which to read pages, then it could be substantially higher yet.\nHigher, none of those are things the planner is aware of.\n\nIf you really want to target the plan with the BitmapAnd, you should\n> increase cpu_index_tuple_cost and/or cpu_operator_cost but not increase\n> cpu_tuple_cost. That is because the unselective bitmap index scan does\n> not incur any cpu_tuple_cost, but does incur index_tuple and operator\n> costs. Unfortunately all other index scans in the system will also be\n> skewed by such a change if you make the change system-wide.\n>\n> Exactly. 
I'd like to understand why the worse plan is being chosen, and 1)\n> if it's fixable by tuning costs, to figure out the right settings which\n> could be used in production, 2) if there is a bug in Postgres optimizer,\n> then to bring some attention to it, so that it's eventually fixed in one of\n> future releases, 3) if Postgres is supposed to work this way, then at least\n> I (and people who ever read this thread) would understand it better.\n>\n\nI would argue that it is planner \"bug\", (quotes because it doesn't give\nwrong answers, just sub-optimal plans) but one that is very hard to pin\ndown, and also depends on the hardware you are running on.  Also, people\nhave made some optimizations to the machinery behind the bitmap code\nrecently, as well as the costing of the bitmap code, so if it is bug, the\nsize of it is changing with the version you are using.  If your aim is to\nimprove the planner (rather than simply tuning the planner that currently\nexists) then you should probably 1) make your test case use random number\ngenerators, rather than modulus, to avoid cross-column correlation and\nother such issues, 2) run it against 11dev code, which is where\nimprovements to PostgreSQL are targeted, rather than against production\nversions, and 3) post to pgsql-hackers, rather than performance.\n\n
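A minimal sketch of 1), with made-up distributions (this is essentially what the random()-based test cases elsewhere in this thread do):\n\n-- columns drawn independently from random(), so they are not correlated\n-- with each other or with the physical row order (unlike id%N columns)\ncreate table aaa as\n  select floor(random()*100)::int as num,\n         (random() < 0.1)::int as flag\n  from generate_series(1, 10000000) id;\ncreate index i1 on aaa (num);\ncreate index i2 on aaa (flag);\nanalyze aaa;\n\n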
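And to put rough numbers on the earlier point about a completely cached database (illustrative guesses only, not recommendations):\n\n-- closer to the cost of a buffered read than the 0.0 extreme or the\n-- disk-oriented defaults (1.0 / 4.0)\nset seq_page_cost = 0.05;\nset random_page_cost = 0.05;\n-- raising these two (but not cpu_tuple_cost) makes the unselective\n-- bitmap index scans behind the BitmapAnd look more expensive\nset cpu_index_tuple_cost = 0.01;   -- default 0.005\nset cpu_operator_cost = 0.005;     -- default 0.0025\nexplain (analyze, buffers) select * from aaa where num = 1 and flag = 1;\n\n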
Cheers,\n\nJeff\n\n", "msg_date": "Wed, 6 Dec 2017 19:48:32 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Tue, Dec 5, 2017 at 10:50 AM, Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > On Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:\n> >> Jeff Janes <[email protected]> writes:\n> >>> But I do see that ties within the logical order of the column values\n> are\n> >>> broken to agree with the physical order. That is wrong, right? Is\n> there\n> >>> any argument that this is desirable?\n>\n> >> Uh ... what do you propose doing instead? We'd have to do something\n> with\n> >> ties, and it's not so obvious this way is wrong.\n>\n> > Let them be tied. If there are 10 distinct values, number the values 0\n> to\n> > 9, and all rows of a given distinct values get the same number for the\n> > logical order axis.\n> > Calling the correlation 0.8 when it is really 0.0 seems obviously wrong\n> to\n> > me. Although if we switched btree to store duplicate values with tid as\n> a\n> > tie breaker, then maybe it wouldn't be as obviously wrong.\n>\n> I thought some more about this. What we really want the correlation stat\n> to do is help us estimate how randomly an index-ordered scan will access\n> the heap.\n\n\nThe correlation is used in another place, estimating how much of the table\nwe will visit in the first place. If the correlation is very high, then\nscanning 10% of the index leaf pages means we will visit 10% of the table.\nIf the correlation is low, then we use Mackert and Lohman, and (in the case\nof visiting 10% of the index) predict we will visit most of the table.\nAssuming effective_cache_size is high, we will visit most of the table just\nonce, but still in a random order, because subsequent visits for the same\nquery will be found in the cache. Rather than visiting the various pages\nrepeatedly and not finding them in cache each time.\n\nIn addition to estimating how much of the table we visit, we also estimate\nhow \"sequential like\" those visits are. Which is the use that you\ndescribe. Ideally for that use case, we would know for each distinct\nvalue, how correlated the tids are with the leaf page ordering. If the\nindex is freshly built, that is very high. We visit 1/10 of the index,\nwhich causes us to visit 100% of the table but in perfect order, plucking\n1/10 of the tuples from each table page.\n\nBut visiting 100% of the table in physical order in order to pluck out 10%\nof the tuples from each page is quite different than visiting 10% of the\ntable pages in physical order to pluck out 100% of the tuples from those\npages and 0% from the pages not visited.\n\n...\n\nBTW, I disagree that \"correlation = zero\" is the right answer for this\n> particular example. 
If the btree is freshly built, then an index-order\n> scan would visit all the heap pages in sequence to fetch \"f\" rows, and\n> then visit them all in sequence again to fetch \"t\" rows, which is a whole\n> lot better than the purely random access that zero correlation implies.\n> So I think 0.8 or so is actually a perfectly reasonable answer when the\n> index is fresh. The trouble is just that it'd behoove us to derate that\n> answer somewhat for the probability that the index isn't fresh.\n>\n\nBut, for the case of \"how much of the table do we visit at all\",\ncorrelation = zero is the right answer, even if it isn't the right answer\nfor \"how sequentially do we visit whatever we visit\"\n\n\n\n> My first thought for a concrete fix was to use the mean position of\n> a group of duplicates for the purposes of the correlation calculation,\n> but on reflection that's clearly wrong. We do have an idea, from the\n> data we have, whether the duplicates are close together in the heap\n> or spread all over. Using only mean position would fail to distinguish\n> those cases, but really we'd better penalize the spread-all-over case.\n> I'm not sure how to do that.\n>\n\nDeparting from correlations, we could also try to estimate \"How many\ndifferent table pages does each index leaf page reference\". This could\ncapture functional dependencies which are strong, but not in the form of\nlinear correlations. (The current extended statistics only captures\ndependencies between user columns, not between one user column and one\nsystem column such as table slot)\n\nFor whatever its worth, here is my \"let ties be ties\" patch.\n\nIt breaks two regression tests due to plan changes, and both are cases\nwhere maybe the plan ought to change for the very reason being discussed.\nIf I just put random gibberish into the correlation field, more regression\ntests fail, so I think my implementation is not too far broken.\n\nThe accumulations into corr_ysum and corr_y2sum could trivially be pushed\ndown into the \"if\", and corr_xysum could as well with a little algebra.\nBut that seems like premature optimization for a proof-of-concept patch.\n\n\nCheers,\n\nJeff", "msg_date": "Wed, 6 Dec 2017 20:51:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - boolean correlation" }, { "msg_contents": "On Tue, Dec 5, 2017 at 11:06 PM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n\nThis is very cool, thanks.\n\n\n> I've tried to create a better test case:\n> - Increase shared_buffers and effective_cache_size to fit whole database,\n> including indexes.\n> - Use random(), to avoid correlation between the filtered values.\n> - Make both columns of integer type, to avoid special cases with boolean\n> (like the one happened with CREATE STATISTICS).\n> - Flush OS disk cache and then try running the query several times, to get\n> both cold-cache results and all-in-ram results.\n> - There are several tests, with different frequency of the selected values\n> in the two columns: [1/10, 1/10], [1/50, 1/10], [1/100, 1/10].\n> - There is a split of cost by contribution of each of its components:\n> seq_page_cost, random_page_cost, cpu_tuple_cost, cpu_index_tuple_cost,\n> cpu_operator_cost. 
The EXPLAIN is run for each component, every time with\n> only one of the components set to non-zero.\n>\n\nWhere you have question marks, that means you could not force it into the\nplan you wanted with all-but-one settings being zero?\n\n\n\n> - The test was run on a Digitalocean VM: Ubuntu 16.04.3 LTS (GNU/Linux\n> 4.4.0-101-generic x86_64), 2 GB RAM, 2 core CPU, SSD; PostgreSQL 9.5.10.\n>\n>\n> shared_buffers = 1024MB\n> effective_cache_size = 1024MB\n>\n\nI would set this even higher.\n\n\n\n>\n> work_mem = 100MB\n>\n> create table aaa as select floor(random()*10)::int num, (random()*10 <\n> 1)::int flag from generate_series(1, 10000000) id;\n> create table aaa as select floor(random()*50)::int num, (random()*10 <\n> 1)::int flag from generate_series(1, 10000000) id;\n> create table aaa as select floor(random()*100)::int num, (random()*10 <\n> 1)::int flag from generate_series(1, 10000000) id;\n>\n> create index i1 on aaa (num);\n> create index i2 on aaa (flag);\n>\n> set enable_bitmapscan = on; set enable_indexscan = off; set\n> enable_seqscan = off;\n> set enable_bitmapscan = off; set enable_indexscan = on; set\n> enable_seqscan = off;\n> set enable_bitmapscan = off; set enable_indexscan = off; set\n> enable_seqscan = on;\n>\n> set seq_page_cost = 1.0; set random_page_cost = 1.0; set cpu_tuple_cost =\n> 0.01; set cpu_index_tuple_cost = 0.005; set cpu_operator_cost = 0.0025;\n>\n> explain (analyze,verbose,costs,buffers) select * from aaa where num = 1\n> and flag = 1;\n>\n\nOne thing to try is to use explain (analyze, timing off), and then get the\ntotal execution time from the summary line at the end of the explain,\nrather than from \"actual time\" fields. Collecting the times of each\nindividual step in the execution can impose a lot of overhead, and some\nplans have more if this artificial overhead than others. It might change\nthe results, or it might not. I know that sorts and hash joins are very\nsensitive to this, but I don't know about bitmap scans.\n\n....\n\nWhat seems odd to me is that in different kinds of tests (with different\n> frequency of column values):\n>\n> i1 Rows Removed by Filter = 900156, 179792, 89762 (decreased a lot)\n> i1 buffers = 46983, 44373, 39928 (decreased, but not a lot)\n>\n\nTo filter out 89762 tuples, you first have to look them up in the table,\nand since they are randomly scattered that means you hit nearly every page\nin the table at least once. In fact, I don't understand how the empirical\nnumber of buffers hits can be only 46983 in the first case, when it has to\nvisit 1,000,000 rows (and reject 90% of them). I'm guessing that it is\nbecause your newly created index is sorted by ctid order within a given\nindex value, and that the scan holds a pin on the table page between\nvisits, and so doesn't count as a hit if it already holds the pin.\n\nYou could try to create an empty table, create the indexes, then populate\nthe table with your random select query, to see if that changes the buffer\nhit count. (Note that this wouldn't change the cost estimates much even it\ndoes change the measured number of buffer hits, because of\neffective_cache_size. It knows you will be hitting ~47,000 pages ~25 times\neach, and so only charges you for the first time each one is hit.)\n\n\n> i1 best case time = 756.045, 127.814, 79.492 (decreased a lot, as well as\n> probably average case too)\n> i1 cost estimates = 67358.15, 48809.34, 46544.80 (did not decrease a lot)\n>\n\nRight. Your best case times are when the data is completely in cache. 
But\nyour cost estimates are dominated by *_page_cost, which are irrelevant when\nthe data is entirely in cache. You are telling it to estimate the\nworse-case costs, and it is doing that pretty well (within this one plan).\n\n\n>\n> i2 Rows Removed by Filter = 900835, 980350, 991389\n> i2 buffers = 46985, 46985, 46987\n> i2 best case time = 377.554, 346.481, 387.874\n> i2 cost estimates = 39166.34, 39247.83, 40167.34\n>\n> It's odd that increase in actual execution time for \"i1\" was not reflected\n> enough in cost estimates.\n>\n\nNo, that's entirely expected given your settings. As long as you are\ncharging disk-read costs for reading data from RAM, you will never get\nrealistic cost estimates. Remember, you aren't trying to tune your\nproduction server here, you are trying to get a test case that can be\ndissected.\n\nPerhaps you think that effective_cache_size is supposed to fix this for\nyou. But it only accounts for blocks hit repeatedly within the same\nquery. It has no idea that you are running a bunch of other queries on the\nsame data back to back, and so it will likely find that data already in\nmemory from one query to the next. That knowledge (currently) has to be\nbaked into your *_page_cost setting. There is also no way to say that data\nfor one table is more likely to be found in cache than data for another\ntable is.\n\n\nThe cost even didn't go below \"i2\" costs.\n>\n\nThat is partially because i2 benefits from a spuriously large correlation\ndue to an artifact of how stats are computed. See another thread that has\nspun off of this one.\n\nIf you want to see the effect of this on your cost estimates, you can do\nsomething like:\n\nupdate pg_statistic set stanumbers2='{0}' where starelid='aaa'::regclass\nand staattnum=2;\n\n
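Spelled out a little more (the attribute and slot numbers are specific to this test table, and the poke only lasts until the next ANALYZE):\n\n-- what the planner currently believes about the two columns\nselect attname, n_distinct, correlation from pg_stats where tablename = 'aaa';\n\n-- zero the slot that happens to hold flag's correlation here, then\n-- recheck the index scan estimate\nupdate pg_statistic set stanumbers2 = '{0}'\nwhere starelid = 'aaa'::regclass and staattnum = 2;\nexplain select * from aaa where num = 1 and flag = 1;\n\n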
Cheers,\n\nJeff\n\n", "msg_date": "Mon, 11 Dec 2017 23:29:05 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted?" }, { "msg_contents": "On Wed, Dec 6, 2017 at 1:46 PM, Justin Pryzby <[email protected]> wrote:\n\n> On Tue, Dec 05, 2017 at 01:50:11PM -0500, Tom Lane wrote:\n> > Jeff Janes <[email protected]> writes:\n> > > On Dec 3, 2017 15:31, \"Tom Lane\" <[email protected]> wrote:\n> > >> Jeff Janes <[email protected]> writes:\n> > >>> But I do see that ties within the logical order of the column values\n> are\n> > >>> broken to agree with the physical order. That is wrong, right? Is\n> there\n> > >>> any argument that this is desirable?\n> >\n> > >> Uh ... what do you propose doing instead? We'd have to do something\n> with\n> > >> ties, and it's not so obvious this way is wrong.\n> >\n> > > Let them be tied.\n> ...\n> > I thought some more about this. What we really want the correlation stat\n> > to do is help us estimate how randomly an index-ordered scan will access\n> > the heap. If the values we've sampled are all unequal then there's no\n> > particular issue. However, if we have some group of equal values, we\n> > do not really know what order an indexscan will visit them in. The\n> > existing correlation calculation is making the *most optimistic possible*\n> > assumption, that such a group will be visited exactly in heap order ---\n> > and that assumption isn't too defensible.\n>\n> I'm interested in discusstion regarding bitmap cost, since it would have\n> helped\n> our case discussed here ~18 months ago:\n> https://www.postgresql.org/message-id/flat/20160524173914.GA11880%\n> 40telsasoft.com#[email protected]\n>\n> ...but remember: in Vitaliy's case (as opposed to mine), the index scan is\n> *faster* but being estimated at higher cost than bitmap (I have to keep\n> reminding myself). So the rest of this discussion is about the\n> overly-optimistic cost estimate of index scans, which moves in the opposite\n> direction for this reported problem. 
For the test cases I looked at, index\n> scans were used when RPC=1 and redundant conditions were avoided, so I'm\n> not\n> sure if there's any remaining issue (but I haven't looked at the latest\n> cases\n> Vitaliy sent).\n>\n> > In any case, given that we do this calculation without regard\n> > to any specific index,\n>\n> One solution is to compute stats (at least correlation) for all indices,\n> not\n> just expr inds. I did that earlier this year while throwing around/out\n> ideas.\n> https://www.postgresql.org/message-id/20170707234119.\n> GN17566%40telsasoft.com\n\n\nWhen is the correlation of a column which is not the leading column of a\nbtree index or in a brin index ever used? If we did compute index-specific\ncorrelations, we could maybe just drop pure-column correlations.\n\n\n>\n> > We do have an idea, from the data we have, whether the duplicates are\n> close\n> > together in the heap or spread all over.\n>\n> I think you just mean pg_stats.correlation for all values, not just\n> duplicates\n> (with the understanding that duplicates might be a large fraction of the\n> tuples, and high weight in correlation).\n>\n> Another issue I noted in an earlier thread is that as table sizes\n> increase, the\n> existing correlation computation approaches 1 for correlated insertions,\n> (like\n> \"append-only\" timestamps clustered around now()), due to ANALYZE sampling a\n> fraction of the table, and thereby representing only large-scale\n> correlation,\n>\n\nThat isn't due to sampling. That is due to the definition of linear\ncorrelation. Large scale is what it is about.\n\n\n> Generated data demonstrating this (I reused this query so it's more\n> complicated\n> than it needs to be):\n>\n> [pryzbyj@database ~]$ time for sz in 9999{,9{,9{,9{,9}}}} ; do psql\n> postgres -tc \"DROP TABLE IF EXISTS t; CREATE TABLE t(i float, j int);\n> CREATE INDEX ON t(i);INSERT INTO t SELECT i/99999.0+pow(2,(-random())) FROM\n> generate_series(1,$sz) i ORDER BY i; ANALYZE t; SELECT $sz, correlation,\n> most_common_freqs[1] FROM pg_stats WHERE attname='i' AND tablename='t'\";\n> done\n>\n> 9999 | 0.187146 |\n> 99999 | 0.900629 |\n> 999999 | 0.998772 |\n> 9999999 | 0.999987 |\n>\n\nBecause the amount of jitter introduced is constant WRT $sz, but the range\nof i/99999.0 increases with $sz, the correlation actually does increase; it\nis not a sampling effect.\n\nTrying to keep it all in my own head: For sufficiently large number of\n> pages,\n> bitmap scan should be preferred to idx scan due to reduced random-page-cost\n> outweighing its overhead in CPU cost.\n\n\nBut CPU cost is probably not why it is losing anyway.\n\nIndex scans get a double bonus from high correlation. It assumes that only\na small fraction of the table will be visited. And then it assumes that\nthe visits it does make will be largely sequential. I think that you are\nsaying that for a large enough table, that last assumption is wrong, that\nthe residual amount of non-correlation is enough to make the table reads\nmore random than sequential. Maybe. Do you have a test case that\ndemonstrates this? If so, how big do we need to go, and can you see the\nproblem on SSD as well as HDD?\n\nThe thing is, the bitmap scan gets cheated out of one of these bonuses. 
It\ngets no credit for visiting only a small part of the table when the\ncorrelation is high, but does get credit for being mostly sequential in\nwhatever visits it does make (even if the correlation is low, because of\ncourse the bitmap perfects the correlation).\n\nI think it would be easier to give bitmap scans their due rather than try\nto knock down index scans. But of course a synthetic test case would go a\nlong way to either one.\n\n\nProbably by penalizing index scans, not\n> discounting bitmap scans. Conceivably a correlation adjustment can be\n> conditionalized or weighted based on index_pages_fetched() ...\n> x = ln (x/999999);\n> if (x>1) correlation/=x;\n>\n\nI think we should go with something with some statistical principle behind\nit if we want to do that. There is the notion of \"restricted range\" in\ncorrelations, that if there is a high correlation over the full range of\ndata, but you zoom into only a small fraction of that range, the\ncorrelation you see over that restricted range will be much less than the\nfull correlation is. I don't know that field well enough to give a formula\noff the top of my head, but I think it will be based on the fraction of the\nkey space which is being scanned, and (1-RSQ), rather than an arbitrary\nnumber like 999999.\n\n
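For a rough flavor of it, the textbook restriction-of-range identity, with the scanned fraction of the key space standing in for the ratio of standard deviations (almost certainly an oversimplification for this use):\n\nselect r, u, round(r*u / sqrt(1 - r^2 + (r*u)^2), 3) as restricted_r\nfrom (values (0.999987, 0.1), (0.999987, 0.01), (0.999987, 0.001),\n             (0.9, 0.1)) as t(r, u);\n\nWith the 0.999987 from the generated data quoted above, that still leaves roughly 0.89 for a 1% slice of the key range, but it drops to about 0.19 at 0.1%.\n\n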
I did that earlier this year while throwing around/out ideas.\nhttps://www.postgresql.org/message-id/20170707234119.GN17566%40telsasoft.comWhen is the correlation of a column which is not the leading column of a btree index or in a brin index ever used?  If we did compute index-specific correlations, we could maybe just drop pure-column correlations.\n\n> We do have an idea, from the data we have, whether the duplicates are close\n> together in the heap or spread all over.\n\nI think you just mean pg_stats.correlation for all values, not just duplicates\n(with the understanding that duplicates might be a large fraction of the\ntuples, and high weight in correlation).\n\nAnother issue I noted in an earlier thread is that as table sizes increase, the\nexisting correlation computation approaches 1 for correlated insertions, (like\n\"append-only\" timestamps clustered around now()), due to ANALYZE sampling a\nfraction of the table, and thereby representing only large-scale correlation,That isn't due to sampling.  That is due to the definition of linear correlation.  Large scale is what it is about.\nGenerated data demonstrating this (I reused this query so it's more complicated\nthan it needs to be):\n\n[pryzbyj@database ~]$ time for sz in 9999{,9{,9{,9{,9}}}} ; do psql postgres -tc \"DROP TABLE IF EXISTS t; CREATE TABLE t(i float, j int); CREATE INDEX ON t(i);INSERT INTO t SELECT i/99999.0+pow(2,(-random())) FROM generate_series(1,$sz) i ORDER BY i; ANALYZE t; SELECT $sz, correlation, most_common_freqs[1] FROM pg_stats WHERE attname='i' AND tablename='t'\"; done\n\n     9999 |    0.187146 |\n    99999 |    0.900629 |\n   999999 |    0.998772 |\n  9999999 |    0.999987 |Because the amount of jitter introduced is constant WRT $sz, but the range of i/99999.0 increases with $sz, the correlation actually does increase; it is not a sampling effect.Trying to keep it all in my own head: For sufficiently large number of pages,\nbitmap scan should be preferred to idx scan due to reduced random-page-cost\noutweighing its overhead in CPU cost. But CPU cost is probably not why it is losing anyway.Index scans get a double bonus from high correlation.  It assumes that only a small fraction of the table will be visited.  And then it assumes that the visits it does make will be largely sequential.  I think that you are saying that for a large enough table, that last assumption is wrong, that the residual amount of non-correlation is enough to make the table reads more random than sequential.  Maybe.  Do you have a test case that demonstrates this?  If so, how big do we need to go, and can you see the problem on SSD as well as HDD?The thing is, the bitmap scan gets cheated out of one of these bonuses.  It gets no credit for visiting only a small part of the table when the correlation is high, but does get credit for being mostly sequential in whatever visits it does make (even if the correlation is low, because of course the bitmap perfects the correlation).I think it would be easier to give bitmap scans their due rather than try to knock down index scans.  But of course a synthetic test case would go a long way to either one.  Probably by penalizing index scans, not\ndiscounting bitmap scans.  Conceivably a correlation adjustment can be\nconditionalized or weighted based on index_pages_fetched() ...\n        x = ln (x/999999);\n        if (x>1) correlation/=x;I think we should go with something with some statistical principle behind it if we want to do that.  
There is the notion of \"restricted range\" in correlations, that if there is a high correlation over the full range of data, but you zoom into only a small fraction of that range, the correlation you see over that restricted range will be much less than the full correlation is.  I don't know that field well enough to give a formula off the top of my head, but I think it will be based on the fraction of the key space which is being scanned, and (1-RSQ), rather than an arbitrary number like 999999. Cheers,Jeff", "msg_date": "Tue, 12 Dec 2017 01:29:48 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - overestimated correlation and\n cost_index" }, { "msg_contents": "On Tue, Dec 12, 2017 at 01:29:48AM -0800, Jeff Janes wrote:\n> On Wed, Dec 6, 2017 at 1:46 PM, Justin Pryzby <[email protected]> wrote:\n> > On Tue, Dec 05, 2017 at 01:50:11PM -0500, Tom Lane wrote:\n\n> > > In any case, given that we do this calculation without regard\n> > > to any specific index,\n> >\n> > One solution is to compute stats (at least correlation) for all indices,\n> > not\n> > just expr inds. I did that earlier this year while throwing around/out\n> > ideas.\n> > https://www.postgresql.org/message-id/20170707234119.\n> > GN17566%40telsasoft.com\n> \n> When is the correlation of a column which is not the leading column of a\n> btree index or in a brin index ever used? If we did compute index-specific\n> correlations, we could maybe just drop pure-column correlations.\n\nYes I think so - correlation is collected for every column, but only used for\nindices.\n\nI also have a comment to myself in that patch to force attstattarget=0 for\nnon-expr indices, to avoid keeping MCV/histogram which duplicates that of their\ncolumn.\n\n> Trying to keep it all in my own head: For sufficiently large number of\n> > pages,\n> > bitmap scan should be preferred to idx scan due to reduced random-page-cost\n> > outweighing its overhead in CPU cost.\n> \n> \n> But CPU cost is probably not why it is losing anyway.\n> \n> Index scans get a double bonus from high correlation. It assumes that only\n> a small fraction of the table will be visited. And then it assumes that\n> the visits it does make will be largely sequential. I think that you are\n> saying that for a large enough table, that last assumption is wrong, that\n> the residual amount of non-correlation is enough to make the table reads\n> more random than sequential. Maybe. Do you have a test case that\n> demonstrates this? If so, how big do we need to go, and can you see the\n> problem on SSD as well as HDD?\n\nRight: The \"residual\"/fine-scale variations (those which are not adequately\nrepresented by correlation metric) are/may be non-sequential, so don't get good\nreadahead. \n\nThe original issue was with an 75GB table (an inheritence child) and an\nanalytic query previously taking ~30min at that point taking 4-5 hours due to\nrandom seeks (from duplicate values in a timestamp column with 1second\nresolution). There would've been very little if any of the previous day's\ntable cached: the child table being queried (by way of its parent) had size\nroughly same as the server's RAM, and would've been loaded over the course of\nthe preceding 6-30hours, and not frequently accessed. 
It may be that there's a\nsharp change once cache no longer effectively mitigates the random heap reads.\n\nSSD: good question.\n\nHere's an rackspace VM with PG9.6.6, 2GB shared_buffers, 8GB RAM (~4GB of which\nis being used as OS page cache), and 32GB SSD (with random_page_cost=1). The\nserver is in use by our application.\n\nI believe you could scale up the size of the table to see this behavior with\nany cache size. 0.0001 controls the \"jitter\", with smaller values being more\njittery..\n\npostgres=# CREATE TABLE t(i int,j int) TABLESPACE tmp; CREATE INDEX ON t(i); INSERT INTO t SELECT (0.0001*a+9*(random()-0.5))::int FROM generate_series(1,99999999) a; VACUUM ANALYZE t;\n public | t | table | pryzbyj | 3458 MB |\nrelpages | 442478\n\nFor comparison purposes/baseline; here's a scan on an SEPARATE index freshly\nbuilt AFTER insertions:\n\npostgres=# explain(analyze,buffers) SELECT COUNT(j) FROM t WHERE i BETWEEN 0 AND 4000;\nFirst invocation:\n#1 -> Index Scan using t_i_idx1 on t (cost=0.57..1413352.60 rows=39933001 width=4) (actual time=25.660..52575.127 rows=39996029 loops=1)\n Buffers: shared hit=1578644 read=286489 written=1084\nSubsequent invocations with (extra) effect from OS cache:\n#2 -> Index Scan using t_i_idx1 on t (cost=0.57..1413352.60 rows=39933001 width=4) (actual time=61.054..37646.556 rows=39996029 loops=1)\n Buffers: shared hit=1578644 read=286489 written=2223\n#3 -> Index Scan using t_i_idx1 on t (cost=0.57..1413352.60 rows=39933001 width=4) (actual time=9.344..31265.398 rows=39996029 loops=1)\n Buffers: shared hit=1578644 read=286489 written=1192\n\nDropping that index, and scanning a different range on the non-fresh index:\n\npostgres=# explain(analyze,buffers) SELECT COUNT(j) FROM t WHERE i BETWEEN 4000 AND 8000;\n#1 -> Index Scan using t_i_idx on t (cost=0.57..1546440.47 rows=40298277 width=4) (actual time=95.815..139152.147 rows=40009853 loops=1)\n Buffers: shared hit=1948069 read=316536 written=3411\nRerunning with cache effects:\n#2 -> Index Scan using t_i_idx on t (cost=0.57..1546440.47 rows=40298277 width=4) (actual time=203.590..87547.287 rows=40009853 loops=1)\n Buffers: shared hit=1948069 read=316536 written=5712\n#3 -> Index Scan using t_i_idx on t (cost=0.57..1546440.47 rows=40298277 width=4) (actual time=164.504..83768.890 rows=40009853 loops=1)\n Buffers: shared hit=1948069 read=316536 written=1979\n\nCompare to seq scan:\n -> Seq Scan on t (cost=0.00..1942478.00 rows=40298277 width=4) (actual time=1173.162..20980.069 rows=40009853 loops=1)\n Buffers: shared hit=47341 read=395137\n\nBitmap:\n -> Bitmap Heap Scan on t (cost=975197.91..2022150.06 rows=40298277 width=4) (actual time=24396.270..39304.813 rows=40009853 loops=1)\n Buffers: shared read=316536 written=1431\n\n\nThe index scan reads 2.3e6 pages, compared to 4e5 pages (seq) and 3e5 pages\n(bitmap). And idx scans were 4-7x slower than seq scan. Was the index scan\nactually that badly affected by CPU cost of revisiting pages (rather than IO\ncosts)? Or did the OS actually fail to cache the 3e5 pages \"read\"? That would\nbe consistent with running almost 2x faster on the 3rd invocation.\n\nThe \"hits\" are largely from pages being revisited and recently accessed. Are\nthe misses (reads) mostly from pages being revisited after already falling out\nof cache ? Or mostly initial access ? 
Or ??\n\nIf the index scan is really paying an high IO cost for rereads and not\nprimarily a CPU cost, this seems to be something like the \"correlated index\nscan\" variant of traditional failure to effectively cache doing seq scan on a\nsufficiently large table using a MRU buffer - the cache is ineffective for\nadequately mitigating IO costs when the (re)reads have sufficient \"spread\".\n\npostgres=# SELECT tablename, attname, correlation FROM pg_stats WHERE tablename='t';\ntablename | t\nattname | i\ncorrelation | 1\n\nI still want to say that's unreasonble due to (I think) high fraction of\nnonrandom reads and associated absense of readahead.\n\n> I think it would be easier to give bitmap scans their due rather than try\n> to knock down index scans. But of course a synthetic test case would go a\n> long way to either one.\n\nAs Tom said: index scans involving repeated keys are assuming best-case\nsequential reads for a given computed correlation. I'd be happy if they were\ncosted to avoid that assumption, and instead used some \"middle of the road\"\ninterpretation (probably based on correlation and MCV fraction?), but, in the\nalternate, need to distinguish the cost_index cases, rather than adjusting\nbitmap. This is what led me to play around with stats computation for all\nindices.\n\nJustin\n\n", "msg_date": "Fri, 15 Dec 2017 14:54:06 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - overestimated correlation and\n cost_index" }, { "msg_contents": "On Fri, Dec 15, 2017 at 02:54:06PM -0600, Justin Pryzby wrote:\n> SSD: good question.\n> \n> Here's an rackspace VM with PG9.6.6, 2GB shared_buffers, 8GB RAM (~4GB of which\n> is being used as OS page cache), and 32GB SSD (with random_page_cost=1). The\n> server is in use by our application.\n> \n> I believe you could scale up the size of the table to see this behavior with\n> any cache size. 0.0001 controls the \"jitter\", with smaller values being more\n> jittery..\n> \n> postgres=# CREATE TABLE t(i int,j int) TABLESPACE tmp; CREATE INDEX ON t(i); INSERT INTO t SELECT (0.0001*a+9*(random()-0.5))::int FROM generate_series(1,99999999) a; VACUUM ANALYZE t;\n> public | t | table | pryzbyj | 3458 MB |\n> relpages | 442478\n\nI realized I've made a mistake here; the table is on SSD but not its index...\nSo all this cost is apparently coming from the index and not the heap.\n\n -> Bitmap Heap Scan on t (cost=855041.91..1901994.06 rows=40298277 width=4) (actual time=14202.624..27754.982 rows=40009853 loops=1)\n -> Bitmap Index Scan on t_i_idx1 (cost=0.00..844967.34 rows=40298277 width=0) (actual time=14145.877..14145.877 rows=40009853 loops=1)\n\nLet me get back to you about that.\n\nJustin\n\n", "msg_date": "Sat, 16 Dec 2017 13:18:38 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - overestimated correlation and\n cost_index" }, { "msg_contents": "On Fri, Dec 15, 2017 at 02:54:06PM -0600, Justin Pryzby wrote:\n> SSD: good question.\n> \n> Here's an rackspace VM with PG9.6.6, 2GB shared_buffers, 8GB RAM (~4GB of which\n> is being used as OS page cache), and 32GB SSD (with random_page_cost=1). The\n> server is in use by our application.\n> \n> I believe you could scale up the size of the table to see this behavior with\n> any cache size. 
0.0001 controls the \"jitter\", with smaller values being more\n> jittery..\n\nOn Sat, Dec 16, 2017 at 01:18:38PM -0600, Justin Pryzby wrote:\n> I realized I've made a mistake here; the table is on SSD but not its index...\n> So all this cost is apparently coming from the index and not the heap.\n> \n> -> Bitmap Heap Scan on t (cost=855041.91..1901994.06 rows=40298277 width=4) (actual time=14202.624..27754.982 rows=40009853 loops=1)\n> -> Bitmap Index Scan on t_i_idx1 (cost=0.00..844967.34 rows=40298277 width=0) (actual time=14145.877..14145.877 rows=40009853 loops=1)\n\nI'm rerunning with this:\n\npostgres=# CREATE TABLE t(i int,j int) TABLESPACE tmp; CREATE INDEX ON t(i) TABLESPACE tmp; INSERT INTO t SELECT (0.0001*a+9*(random()-0.5))::int FROM generate_series(1,99999999) a; VACUUM ANALYZE t; CREATE INDEX ON t(i) TABLESPACE tmp;\n\nThat doesn't seem to invalidate my conclusions regarding the test data.\n\nThe non-fresh index:\n#1 -> Index Scan using t_i_idx on t (cost=0.57..1103588.59 rows=39536704 width=4) (actual time=2.295..60094.704 rows=40009646 loops=1)\nRerun:\n#2 -> Index Scan using t_i_idx on t (cost=0.57..1103588.59 rows=39536704 width=4) (actual time=1.671..54209.037 rows=40009646 loops=1)\n#3 -> Index Scan using t_i_idx on t (cost=0.57..1103588.59 rows=39536704 width=4) (actual time=1.743..46436.538 rows=40009646 loops=1)\n\nScan fresh index:\n -> Index Scan using t_i_idx1 on t (cost=0.57..1074105.46 rows=39536704 width=4) (actual time=1.715..16119.720 rows=40009646 loops=1)\n\nbitmap scan on non-fresh idx:\n -> Bitmap Heap Scan on t (cost=543141.78..1578670.34 rows=39536704 width=4) (actual time=4397.767..9137.541 rows=40009646 loops=1)\n Buffers: shared hit=91235 read=225314\n -> Bitmap Index Scan on t_i_idx (cost=0.00..533257.61 rows=39536704 width=0) (actual time=4346.556..4346.556 rows=40009646 loops=1)\n Buffers: shared read=139118\n\nseq scan:\n -> Seq Scan on t (cost=0.00..1942478.00 rows=39536704 width=4) (actual time=6093.269..17880.164 rows=40009646 loops=1)\n\nI also tried an idx only scan (note COUNT i vs j / \"eye\" vs \"jay\"), which I\nthink should be like an index scan without heap costs:\n\npostgres=# SET max_parallel_workers_per_gather=0;SET enable_bitmapscan=off;SET enable_indexscan=on; begin; DROP INDEX t_i_idx1; explain(analyze,buffers) SELECT COUNT(i) FROM t WHERE i BETWEEN 4000 AND 8000; rollback;\n -> Index Only Scan using t_i_idx on t (cost=0.57..928624.65 rows=39536704 width=4) (actual time=0.515..12646.676 rows=40009646 loops=1)\n Buffers: shared hit=276 read=139118\n\nHowever, in this test, random reads on the INDEX are still causing a large\nfraction of the query time. 
When cached by the OS, this is much faster.\nCompare:\n\n#1 -> Bitmap Heap Scan on t (cost=543141.78..1578670.34 rows=39536704 width=4) (actual time=25498.978..41418.870 rows=40009646 loops=1)\n Buffers: shared read=316549 written=497\n -> Bitmap Index Scan on t_i_idx (cost=0.00..533257.61 rows=39536704 width=0) (actual time=25435.865..25435.865 rows=40009646 loops=1)\n Buffers: shared read=139118 written=2\n\n#2 -> Bitmap Heap Scan on t (cost=543141.78..1578670.34 rows=39536704 width=4) (actual time=5863.003..17531.860 rows=40009646 loops=1)\n Buffers: shared read=316549 written=31\n -> Bitmap Index Scan on t_i_idx (cost=0.00..533257.61 rows=39536704 width=0) (actual time=5799.400..5799.400 rows=40009646 loops=1)\n Buffers: shared read=139118 written=31\n\nNote that for the test data, the index is a large fraction of the table data\n(since the only non-indexed column is nullfrac=1):\n public | t | table | pryzbyj | 3458 MB | \n public | t_i_idx | index | pryzbyj | t | 2725 MB | \n public | t_i_idx1 | index | pryzbyj | t | 2142 MB | \n(that could be 10% smaller with fillfactor=100)\n\nI think the test case are reasonably reproducing the original issue. Note that\nthe 2nd invocation of the bitmap scan scanned the index in 5.8sec and the heap\nin 11sec, but the 2nd invocation of the index scan took 54sec, of which I\ngather ~6sec was from the index. So there's still 48sec spent accessing the\nheap randomly, rather than 11sec sequentially.\n\nI'm also playing with the tables which were the source of the original problem,\nfor which index reads in bitmap scan do not appear to be a large fraction of\nthe query time, probably because the index are 1-2% of the table size rather\nthan 60-70%. I'll mail about that separately.\n\nJustin\n\n", "msg_date": "Sat, 16 Dec 2017 20:37:01 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap scan is undercosted? - overestimated correlation and\n cost_index" }, { "msg_contents": "Sorry for delay with response, I had to switch to other tasks and didn't \nhave time to run proper tests and write some meaningful response.\n\nRecently,  a similar issue happened with another our database, so I \ndecided to write an update.\n\nBitmap scan was preferred to index scan by the planner, but bitmap scan \nwas running worse in practice. 
Here are the relevant pieces of a much \nbigger query plan:\n\n  ->  Bitmap Heap Scan on cmdb_program_daily_usage \ncmdb_program_daily_usage_6  (cost=6707.08..6879.35 rows=32 width=20) \n(actual time=39.994..40.019 rows=12 loops=336)\n        Recheck Cond: ((used_from = cmdb_ci_computer_12.id) AND \n(usage_date >= '2018-02-02'::date) AND (usage_date <= '2018-02-12'::date))\n        Filter: (((NOT thin_client) OR (thin_client IS NULL)) AND \n(program_instance IS NOT NULL) AND (minutes_in_use > 0))\n        Rows Removed by Filter: 69\n        Heap Blocks: exact=2995\n        Buffers: shared hit=563448\n        ->  BitmapAnd  (cost=6707.08..6707.08 rows=154 width=0) (actual \ntime=39.978..39.978 rows=0 loops=336)\n              Buffers: shared hit=560453\n              ->  Bitmap Index Scan on idx_fk_5317241949468942  \n(cost=0.00..133.87 rows=12641 width=0) (actual time=0.373..0.373 \nrows=4780 loops=336)\n                    Index Cond: (used_from = cmdb_ci_computer_12.id)\n                    Buffers: shared hit=5765\n              ->  Bitmap Index Scan on idx_263911642415136  \n(cost=0.00..6572.94 rows=504668 width=0) (actual time=40.873..40.873 \nrows=540327 loops=324)\n                    Index Cond: ((usage_date >= '2018-02-02'::date) AND \n(usage_date <= '2018-02-12'::date))\n                    Buffers: shared hit=554688\n\n  ->  Index Scan using idx_fk_5317241949468942 on \ncmdb_program_daily_usage cmdb_program_daily_usage_6 (cost=0.56..24322.97 \nrows=35 width=20) (actual time=1.211..2.196 rows=14 loops=338)\n        Index Cond: (used_from = cmdb_ci_computer_12.id)\n        Filter: (((NOT thin_client) OR (thin_client IS NULL)) AND \n(program_instance IS NOT NULL) AND (minutes_in_use > 0) AND (usage_date \n >= '2018-02-02'::date) AND (usage_date <= '2018-02-12'::date))\n        Rows Removed by Filter: 4786\n        Buffers: shared hit=289812\n\nThe difference in run time does not look very huge, but when it's a part \nof a loop, that could mean difference between minutes and hours.\n\nAfter running some tests, here are the conclusions we've made:\n\n- When running with cold cache, and data is being read from disk, then \nthe planner estimates look adequate. Bitmap scan has better costs, and \nindeed it performs better in that case.\n\n- When running with hot cache, and most of data is already in RAM, then \nindex scan starts to outperform bitmap scan. Unfortunately the planner \ncannot account for the cache very well, and can't switch the plan. \nBecause even if the planner would ever learn to account for the current \ncontent of shared buffers, it still can't know much about the content of \nfilesystem cache.\n\n- Tests showed that the costs are dominated by random_page_cost, but \nthere is still potential to change the total plan cost, if \"cpu_*\" costs \nwould be less distant from \"*_page_cost\".\n\n- In our case the data is likely to be in cache, so we decided to change \ncost settings: seq_page_cost 1.0 -> 0.5; random_page_cost 1.1 -> 0.6\n\nRegards,\nVitaliy\n\n\n", "msg_date": "Sat, 24 Feb 2018 10:45:14 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap scan is undercosted?" } ]
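A minimal sketch of how the cost-setting change described in the last message can be tried out and, once validated, persisted. The numeric values are simply the ones quoted in the thread, not general recommendations; "appdb" and the id literal 12345 are placeholders for the reader's own database and a real cmdb_ci_computer id:

-- Session-level experiment with the hot-cache-friendly costs:
SET seq_page_cost = 0.5;
SET random_page_cost = 0.6;
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM cmdb_program_daily_usage
  WHERE used_from = 12345
    AND usage_date BETWEEN DATE '2018-02-02' AND DATE '2018-02-12';

-- To compare the two strategies on the same data, temporarily disable one of them
-- and rerun the EXPLAIN above:
SET enable_bitmapscan = off;    -- planner falls back to a plain index scan where possible
RESET enable_bitmapscan;

-- Once the values prove out, make them the default for this database:
ALTER DATABASE appdb SET seq_page_cost = 0.5;
ALTER DATABASE appdb SET random_page_cost = 0.6;

Settings changed with SET only last for the session, so the EXPLAIN comparison can be repeated safely before anything is made permanent.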
[ { "msg_contents": "Hi,\nI have a big function that includes many truncates on different tables. In\nthe documentation is is written that truncates creates a new file and\nresign the old filenode to the new filenode and the old file (old data of\nthe table) is deleted in commit.\n\nIn order to execute my function I run psql -d 'aa' -U 'bb' -C \"select\nfunction()\";\n\nI have a few questions about it ?\n\n1.When I start the function, it means that the whole function is one big\ntransaction right ?\n2..Because the truncates I preform are part of a transaction it means that\nonly at the end of the transaction that space will be freed ? Which mean\nonly when the function is finished?\n3..Does running vacuum at the end of the function on the tables that were\ntruncated and then populated with data will have any impact or is it better\njust to analyze them ?\n\n\nThanks.\n\nHi,I have a big function that includes many truncates on different tables. In the documentation is is written that truncates creates a new file and resign the old filenode to the new filenode and the old file (old data of the table) is deleted in commit. In order to execute my function I run psql -d 'aa' -U 'bb' -C \"select function()\";I have a few questions about it ?1.When I start the function, it means that the whole function is one big transaction right ?2..Because the truncates I preform are part of a transaction it means that only at the end of the transaction that space will be freed ? Which mean only when the function is finished?3..Does running vacuum at the end of the function on the tables that were truncated and then populated with data will have any impact or is it better just to analyze them ?Thanks.", "msg_date": "Tue, 5 Dec 2017 16:03:11 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum after truncate" }, { "msg_contents": "Mariel Cherkassky wrote:\n> Hi,\n> I have a big function that includes many truncates on different tables.\n> In the documentation is is written that truncates creates a new file\n> and resign the old filenode to the new filenode and the old file\n> (old data of the table) is deleted in commit. \n> \n> In order to execute my function I run psql -d 'aa' -U 'bb' -C \"select function()\";\n> \n> I have a few questions about it ?\n> \n> 1.When I start the function, it means that the whole function is one big transaction right ?\n\nRight.\n\n> 2..Because the truncates I preform are part of a transaction it means that only at the end\n> of the transaction that space will be freed ? Which mean only when the function is finished?\n\nExactly. The old file has to be retained, because there could be a ROLLBACK.\n\n> 3..Does running vacuum at the end of the function on the tables that were truncated and\n> then populated with data will have any impact or is it better just to analyze them ?\n\nFor up-to-date statistics, ANALYZE is enough.\nIf you want to set hint bits so that the first reader doesn't have to do it,\nVACUUM will help. But that is not necessary.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Tue, 05 Dec 2017 15:40:54 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum after truncate" } ]
[ { "msg_contents": "Hi, I think something changed recently in my development environment as \nI don't recall deletes being so slow before.\n\nI've created a new dump and restored to a new database, ran VACUUM FULL \nANALYSE and a simple delete takes forever as you can see here:\n\n\nexplain analyze delete from field_values where transaction_id=226;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Delete on field_values  (cost=0.43..257.93 rows=481 width=6) (actual \ntime=367375.805..367375.805 rows=0 loops=1)\n    ->  Index Scan using index_field_values_on_transaction_id on \nfield_values  (cost=0.43..257.93 rows=481 width=6) (actual \ntime=0.223..4.216 rows=651 loops=1)\n          Index Cond: (transaction_id = 226)\n  Planning time: 0.234 ms\n  Execution time: 367375.882 ms\n(5 registros)\n\nTime: 367377,085 ms (06:07,377)\n\n\nAny ideas on what could be causing this? Could it be an issue with my \nhard drive?\n\nThere aren't that many records to delete from the other tables \nreferencing field_values. I've done this sort of operation earlier this \nyear and it was quite fast. Any clues?\n\nThanks in advance,\n\nRodrigo.\n\n\n", "msg_date": "Tue, 5 Dec 2017 14:21:38 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Rodrigo Rosenfeld Rosas <[email protected]> writes:\n> Hi, I think something changed recently in my development environment as \n> I don't recall deletes being so slow before.\n> I've created a new dump and restored to a new database, ran VACUUM FULL \n> ANALYSE and a simple delete takes forever as you can see here:\n\nThe usual suspect for this is not having an index on some FK referencing\ncolumn, thus forcing the FK check trigger to seq-scan the entire\nreferencing table for each referenced row that is to be deleted.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 05 Dec 2017 11:27:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Rodrigo Rosenfeld Rosas wrote:\n\n> explain analyze delete from field_values where transaction_id=226;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> �Delete on field_values� (cost=0.43..257.93 rows=481 width=6) (actual\n> time=367375.805..367375.805 rows=0 loops=1)\n> �� ->� Index Scan using index_field_values_on_transaction_id on\n> field_values� (cost=0.43..257.93 rows=481 width=6) (actual time=0.223..4.216\n> rows=651 loops=1)\n> �������� Index Cond: (transaction_id = 226)\n> �Planning time: 0.234 ms\n> �Execution time: 367375.882 ms\n> (5 registros)\n> \n> Time: 367377,085 ms (06:07,377)\n\nNormally this is because you lack indexes on the referencing columns, so\nthe query that scans the table to find the referencing rows is a\nseqscan.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 5 Dec 2017 13:43:28 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Em 05-12-2017 14:27, Tom Lane escreveu:\n> Rodrigo Rosenfeld Rosas <[email 
protected]> writes:\n>> Hi, I think something changed recently in my development environment as\n>> I don't recall deletes being so slow before.\n>> I've created a new dump and restored to a new database, ran VACUUM FULL\n>> ANALYSE and a simple delete takes forever as you can see here:\n> The usual suspect for this is not having an index on some FK referencing\n> column, thus forcing the FK check trigger to seq-scan the entire\n> referencing table for each referenced row that is to be deleted.\n>\n> \t\t\tregards, tom lane\n\n\nThanks, indeed that was the case. I manually inspected about a dozen \ntables referencing field_values and the last one (\"references\") was \nreferenced by another table (\"highlighted_texts\") and the reference_id \ncolumn that has a foreign key on \"references\"(id) was missing an index.\n\nGood job :)\n\nBest,\n\nRodrigo.\n\n\n", "msg_date": "Tue, 5 Dec 2017 15:00:41 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Em 05-12-2017 14:43, Alvaro Herrera escreveu:\n> Rodrigo Rosenfeld Rosas wrote:\n>\n>> explain analyze delete from field_values where transaction_id=226;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>  Delete on field_values  (cost=0.43..257.93 rows=481 width=6) (actual\n>> time=367375.805..367375.805 rows=0 loops=1)\n>>    ->  Index Scan using index_field_values_on_transaction_id on\n>> field_values  (cost=0.43..257.93 rows=481 width=6) (actual time=0.223..4.216\n>> rows=651 loops=1)\n>>          Index Cond: (transaction_id = 226)\n>>  Planning time: 0.234 ms\n>>  Execution time: 367375.882 ms\n>> (5 registros)\n>>\n>> Time: 367377,085 ms (06:07,377)\n> Normally this is because you lack indexes on the referencing columns, so\n> the query that scans the table to find the referencing rows is a\n> seqscan.\n>\n\nThank you, Álvaro, that was indeed the case, just like Tom Lane \nsuggested as well. I found the missing index and fixed it. Thanks :)\n\n\n", "msg_date": "Tue, 5 Dec 2017 15:22:45 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Rodrigo Rosenfeld Rosas wrote:\n>> explain analyze delete from field_values where transaction_id=226;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>  Delete on field_values  (cost=0.43..257.93 rows=481 width=6) (actual\n>> time=367375.805..367375.805 rows=0 loops=1)\n>>    ->  Index Scan using index_field_values_on_transaction_id on\n>> field_values  (cost=0.43..257.93 rows=481 width=6) (actual time=0.223..4.216\n>> rows=651 loops=1)\n>>          Index Cond: (transaction_id = 226)\n>>  Planning time: 0.234 ms\n>>  Execution time: 367375.882 ms\n>> (5 registros)\n>> \n>> Time: 367377,085 ms (06:07,377)\n\n> Normally this is because you lack indexes on the referencing columns, so\n> the query that scans the table to find the referencing rows is a\n> seqscan.\n\nActually though ... the weird thing about this is that I'd expect to\nsee a separate line in the EXPLAIN output for time spent in the FK\ntrigger. 
Where'd that go?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 05 Dec 2017 12:25:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Em 05-12-2017 15:25, Tom Lane escreveu:\n> Alvaro Herrera <[email protected]> writes:\n>> Rodrigo Rosenfeld Rosas wrote:\n>>> explain analyze delete from field_values where transaction_id=226;\n>>> QUERY PLAN\n>>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>  Delete on field_values  (cost=0.43..257.93 rows=481 width=6) (actual\n>>> time=367375.805..367375.805 rows=0 loops=1)\n>>>    ->  Index Scan using index_field_values_on_transaction_id on\n>>> field_values  (cost=0.43..257.93 rows=481 width=6) (actual time=0.223..4.216\n>>> rows=651 loops=1)\n>>>          Index Cond: (transaction_id = 226)\n>>>  Planning time: 0.234 ms\n>>>  Execution time: 367375.882 ms\n>>> (5 registros)\n>>>\n>>> Time: 367377,085 ms (06:07,377)\n>> Normally this is because you lack indexes on the referencing columns, so\n>> the query that scans the table to find the referencing rows is a\n>> seqscan.\n> Actually though ... the weird thing about this is that I'd expect to\n> see a separate line in the EXPLAIN output for time spent in the FK\n> trigger. Where'd that go?\n>\n> \t\t\tregards, tom lane\n\n\nYes, I was also hoping to get more insights through the EXPLAIN output :)\n\n\n", "msg_date": "Tue, 5 Dec 2017 15:27:28 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Rodrigo Rosenfeld Rosas wrote:\n> Em 05-12-2017 15:25, Tom Lane escreveu:\n\n> > > Normally this is because you lack indexes on the referencing columns, so\n> > > the query that scans the table to find the referencing rows is a\n> > > seqscan.\n> > Actually though ... the weird thing about this is that I'd expect to\n> > see a separate line in the EXPLAIN output for time spent in the FK\n> > trigger. Where'd that go?\n> \n> Yes, I was also hoping to get more insights through the EXPLAIN output :)\n\nIt normally does. 
Can you show \\d of the table containing the FK?\n\nalvherre=# begin; explain analyze delete from pk where a = 505; rollback;\nBEGIN\nDuración: 0,207 ms\n QUERY PLAN \n──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Delete on pk (cost=0.00..8.27 rows=1 width=6) (actual time=0.023..0.023 rows=0 loops=1)\n -> Index Scan using pk_pkey on pk (cost=0.00..8.27 rows=1 width=6) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (a = 505)\n Trigger for constraint fk_a_fkey: time=201.580 calls=1\n Total runtime: 201.625 ms\n(5 filas)\n\nalvherre=# \\d fk\n Tabla «public.fk»\n Columna │ Tipo │ Modificadores \n─────────┼─────────┼───────────────\n a │ integer │ \nRestricciones de llave foránea:\n \"fk_a_fkey\" FOREIGN KEY (a) REFERENCES pk(a) ON DELETE CASCADE\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 5 Dec 2017 14:49:14 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Em 05-12-2017 15:49, Alvaro Herrera escreveu:\n> Rodrigo Rosenfeld Rosas wrote:\n>> Em 05-12-2017 15:25, Tom Lane escreveu:\n>>>> Normally this is because you lack indexes on the referencing columns, so\n>>>> the query that scans the table to find the referencing rows is a\n>>>> seqscan.\n>>> Actually though ... the weird thing about this is that I'd expect to\n>>> see a separate line in the EXPLAIN output for time spent in the FK\n>>> trigger. Where'd that go?\n>> Yes, I was also hoping to get more insights through the EXPLAIN output :)\n> It normally does. Can you show \\d of the table containing the FK?\n\n\\d highlighted_text\n                                          Tabela \"public.highlighted_text\"\n     Coluna    |            Tipo             | Collation | Nullable \n|                   Default\n--------------+-----------------------------+-----------+----------+----------------------------------------------\n  id           | integer                     |           | not null | \nnextval('highlighted_text_id_seq'::regclass)\n  date_created | timestamp without time zone |           | not null |\n  last_updated | timestamp without time zone |           | not null |\n  reference_id | integer                     |           | not null |\n  highlighting | text                        |           | |\nÍndices:\n     \"highlighted_text_pkey\" PRIMARY KEY, btree (id)\n     \"highlighted_text_reference_id_idx\" btree (reference_id)\nRestrições de chave estrangeira:\n     \"fk_highlighted_text_reference\" FOREIGN KEY (reference_id) \nREFERENCES \"references\"(id) ON DELETE CASCADE\n\nThe highlighted_text_reference_id_idx was previously missing.\n\nbegin; explain analyze delete from \"references\" where id=966539; rollback;\nBEGIN\nTempo: 0,466 ms\n                                                              QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n  Delete on \"references\"  (cost=0.43..8.45 rows=1 width=6) (actual \ntime=2.683..2.683 rows=0 loops=1)\n    ->  Index Scan using references_pkey on \"references\" \n(cost=0.43..8.45 rows=1 width=6) (actual time=2.609..2.612 rows=1 loops=1)\n          Index Cond: (id = 966539)\n  Planning time: 0.186 ms\n  Trigger for constraint fk_highlighted_text_reference: time=0.804 calls=1\n  Execution 
time: 3.551 ms\n(6 registros)\n\nTempo: 4,791 ms\nROLLBACK\nTempo: 0,316 ms\n\ndrop index highlighted_text_reference_id_idx;\nDROP INDEX\nTempo: 35,938 ms\n\nbegin; explain analyze delete from \"references\" where id=966539; rollback;\nBEGIN\nTempo: 0,494 ms\n                                                              QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n  Delete on \"references\"  (cost=0.43..8.45 rows=1 width=6) (actual \ntime=0.112..0.112 rows=0 loops=1)\n    ->  Index Scan using references_pkey on \"references\" \n(cost=0.43..8.45 rows=1 width=6) (actual time=0.071..0.074 rows=1 loops=1)\n          Index Cond: (id = 966539)\n  Planning time: 0.181 ms\n  Trigger for constraint fk_highlighted_text_reference: time=2513.816 \ncalls=1\n  Execution time: 2513.992 ms\n(6 registros)\n\nTime: 2514,801 ms (00:02,515)\nROLLBACK\nTempo: 0,291 ms\n\nIt displayed the spent on the trigger this time. How about deleting the \nfield values?\n\nbegin; explain analyze delete from field_values where \ntransaction_id=2479; rollback;\nBEGIN\nTempo: 0,461 ms\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Delete on field_values  (cost=0.43..364.98 rows=453 width=6) (actual \ntime=4.732..4.732 rows=0 loops=1)\n    ->  Index Scan using index_field_values_on_transaction_id on \nfield_values (cost=0.43..364.98 rows=453 width=6) (actual \ntime=0.137..0.949 rows=624 loops=1)\n          Index Cond: (transaction_id = 2479)\n  Planning time: 0.210 ms\n  Trigger for constraint field_value_booleans_field_value_id_fkey on \nfield_values: time=7.953 calls=624\n  Trigger for constraint field_value_currencies_field_value_id_fkey on \nfield_values: time=5.548 calls=624\n  Trigger for constraint field_value_jurisdictions_field_value_id_fkey \non field_values: time=6.376 calls=624\n  Trigger for constraint fk_field_value_date_range_field_value_id on \nfield_values: time=5.735 calls=624\n  Trigger for constraint fk_field_value_dates_field_value_id on \nfield_values: time=6.316 calls=624\n  Trigger for constraint fk_field_value_numerics_field_value_id on \nfield_values: time=6.368 calls=624\n  Trigger for constraint fk_field_value_options_field_value_id on \nfield_values: time=6.503 calls=624\n  Trigger for constraint fk_field_value_strings_field_value_id on \nfield_values: time=6.794 calls=624\n  Trigger for constraint fk_field_value_time_spans_field_value_id on \nfield_values: time=6.332 calls=624\n  Trigger for constraint fk_references_field_value_id on field_values: \ntime=7.382 calls=624\n  Trigger for constraint fk_highlighted_text_reference on references: \ntime=644994.047 calls=390\n  Execution time: 645065.326 ms\n(16 registros)\n\nTime: 645066,726 ms (10:45,067)\nROLLBACK\nTempo: 0,300 ms\n\nYeah, for some reason, now I got the relevant trigger hints :) Go figure \nout why it didn't work the last time I tried before subscribing to this \nlist :)\n\nGlad it's working now anyway :)\n\nThanks,\nRodrigo.\n\n>\n> alvherre=# begin; explain analyze delete from pk where a = 505; rollback;\n> BEGIN\n> Duración: 0,207 ms\n> QUERY PLAN\n> ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n> Delete on pk (cost=0.00..8.27 rows=1 width=6) (actual time=0.023..0.023 rows=0 loops=1)\n> -> Index Scan using pk_pkey on pk (cost=0.00..8.27 
rows=1 width=6) (actual time=0.012..0.013 rows=1 loops=1)\n> Index Cond: (a = 505)\n> Trigger for constraint fk_a_fkey: time=201.580 calls=1\n> Total runtime: 201.625 ms\n> (5 filas)\n>\n> alvherre=# \\d fk\n> Tabla «public.fk»\n> Columna │ Tipo │ Modificadores\n> ─────────┼─────────┼───────────────\n> a │ integer │\n> Restricciones de llave foránea:\n> \"fk_a_fkey\" FOREIGN KEY (a) REFERENCES pk(a) ON DELETE CASCADE\n>\n>\n\n\n", "msg_date": "Tue, 5 Dec 2017 16:15:14 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" }, { "msg_contents": "Em 05-12-2017 16:15, Rodrigo Rosenfeld Rosas escreveu:\n> Em 05-12-2017 15:49, Alvaro Herrera escreveu:\n>> Rodrigo Rosenfeld Rosas wrote:\n>>> Em 05-12-2017 15:25, Tom Lane escreveu:\n>>>>> Normally this is because you lack indexes on the referencing \n>>>>> columns, so\n>>>>> the query that scans the table to find the referencing rows is a\n>>>>> seqscan.\n>>>> Actually though ... the weird thing about this is that I'd expect to\n>>>> see a separate line in the EXPLAIN output for time spent in the FK\n>>>> trigger.  Where'd that go?\n>>> Yes, I was also hoping to get more insights through the EXPLAIN \n>>> output :)\n>> It normally does.  Can you show \\d of the table containing the FK?\n>\n> \\d highlighted_text\n>                                          Tabela \"public.highlighted_text\"\n>     Coluna    |            Tipo             | Collation | Nullable \n> |                   Default\n> --------------+-----------------------------+-----------+----------+---------------------------------------------- \n>\n>  id           | integer                     |           | not null | \n> nextval('highlighted_text_id_seq'::regclass)\n>  date_created | timestamp without time zone |           | not null |\n>  last_updated | timestamp without time zone |           | not null |\n>  reference_id | integer                     |           | not null |\n>  highlighting | text                        |           | |\n> Índices:\n>     \"highlighted_text_pkey\" PRIMARY KEY, btree (id)\n>     \"highlighted_text_reference_id_idx\" btree (reference_id)\n> Restrições de chave estrangeira:\n>     \"fk_highlighted_text_reference\" FOREIGN KEY (reference_id) \n> REFERENCES \"references\"(id) ON DELETE CASCADE\n>\n> The highlighted_text_reference_id_idx was previously missing.\n>\n> begin; explain analyze delete from \"references\" where id=966539; \n> rollback;\n> BEGIN\n> Tempo: 0,466 ms\n>                                                              QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------ \n>\n>  Delete on \"references\"  (cost=0.43..8.45 rows=1 width=6) (actual \n> time=2.683..2.683 rows=0 loops=1)\n>    ->  Index Scan using references_pkey on \"references\" \n> (cost=0.43..8.45 rows=1 width=6) (actual time=2.609..2.612 rows=1 \n> loops=1)\n>          Index Cond: (id = 966539)\n>  Planning time: 0.186 ms\n>  Trigger for constraint fk_highlighted_text_reference: time=0.804 calls=1\n>  Execution time: 3.551 ms\n> (6 registros)\n>\n> Tempo: 4,791 ms\n> ROLLBACK\n> Tempo: 0,316 ms\n>\n> drop index highlighted_text_reference_id_idx;\n> DROP INDEX\n> Tempo: 35,938 ms\n>\n> begin; explain analyze delete from \"references\" where id=966539; \n> rollback;\n> BEGIN\n> Tempo: 0,494 ms\n>                                                              QUERY PLAN\n> 
------------------------------------------------------------------------------------------------------------------------------------ \n>\n>  Delete on \"references\"  (cost=0.43..8.45 rows=1 width=6) (actual \n> time=0.112..0.112 rows=0 loops=1)\n>    ->  Index Scan using references_pkey on \"references\" \n> (cost=0.43..8.45 rows=1 width=6) (actual time=0.071..0.074 rows=1 \n> loops=1)\n>          Index Cond: (id = 966539)\n>  Planning time: 0.181 ms\n>  Trigger for constraint fk_highlighted_text_reference: time=2513.816 \n> calls=1\n>  Execution time: 2513.992 ms\n> (6 registros)\n>\n> Time: 2514,801 ms (00:02,515)\n> ROLLBACK\n> Tempo: 0,291 ms\n>\n> It displayed the spent on the trigger this time. How about deleting \n> the field values?\n>\n> begin; explain analyze delete from field_values where \n> transaction_id=2479; rollback;\n> BEGIN\n> Tempo: 0,461 ms\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------- \n>\n>  Delete on field_values  (cost=0.43..364.98 rows=453 width=6) (actual \n> time=4.732..4.732 rows=0 loops=1)\n>    ->  Index Scan using index_field_values_on_transaction_id on \n> field_values (cost=0.43..364.98 rows=453 width=6) (actual \n> time=0.137..0.949 rows=624 loops=1)\n>          Index Cond: (transaction_id = 2479)\n>  Planning time: 0.210 ms\n>  Trigger for constraint field_value_booleans_field_value_id_fkey on \n> field_values: time=7.953 calls=624\n>  Trigger for constraint field_value_currencies_field_value_id_fkey on \n> field_values: time=5.548 calls=624\n>  Trigger for constraint field_value_jurisdictions_field_value_id_fkey \n> on field_values: time=6.376 calls=624\n>  Trigger for constraint fk_field_value_date_range_field_value_id on \n> field_values: time=5.735 calls=624\n>  Trigger for constraint fk_field_value_dates_field_value_id on \n> field_values: time=6.316 calls=624\n>  Trigger for constraint fk_field_value_numerics_field_value_id on \n> field_values: time=6.368 calls=624\n>  Trigger for constraint fk_field_value_options_field_value_id on \n> field_values: time=6.503 calls=624\n>  Trigger for constraint fk_field_value_strings_field_value_id on \n> field_values: time=6.794 calls=624\n>  Trigger for constraint fk_field_value_time_spans_field_value_id on \n> field_values: time=6.332 calls=624\n>  Trigger for constraint fk_references_field_value_id on field_values: \n> time=7.382 calls=624\n>  Trigger for constraint fk_highlighted_text_reference on references: \n> time=644994.047 calls=390\n>  Execution time: 645065.326 ms\n> (16 registros)\n>\n> Time: 645066,726 ms (10:45,067)\n> ROLLBACK\n> Tempo: 0,300 ms\n>\n> Yeah, for some reason, now I got the relevant trigger hints :) Go \n> figure out why it didn't work the last time I tried before subscribing \n> to this list :)\n>\n> Glad it's working now anyway :) \n\nJust in case you're curious, creating the index took 6s and running the \nsame delete this time only took 75ms with the index in place :)\n\n\n", "msg_date": "Tue, 5 Dec 2017 16:17:35 -0200", "msg_from": "Rodrigo Rosenfeld Rosas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow DELETE with cascade foreign keys" } ]
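The resolution in this thread was an index on the referencing column of the cascading foreign key. As a sketch for spotting similar cases ahead of time, the catalog query below is a simplified heuristic: it only considers single-column foreign keys and only checks the leading column of existing indexes, so multi-column constraints and partial/expression indexes need a closer look by hand:

-- Single-column FKs whose referencing column is not the leading column of any index:
SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS referencing_column,
       c.conname            AS fk_name
FROM   pg_constraint c
JOIN   pg_attribute a ON a.attrelid = c.conrelid AND a.attnum = c.conkey[1]
WHERE  c.contype = 'f'
  AND  array_length(c.conkey, 1) = 1
  AND  NOT EXISTS (SELECT 1 FROM pg_index i
                   WHERE i.indrelid = c.conrelid
                     AND i.indkey[0] = c.conkey[1]);

-- The index that fixed this particular case (name as shown in the \d output above):
CREATE INDEX highlighted_text_reference_id_idx ON highlighted_text (reference_id);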
[ { "msg_contents": "I get very different plan chosen when my query is in a lateral subquery vs\nstandalone -- it doesn't use a key when joining on a table, instead opting\nto do a hash join. Here is the query:\n\nselect distinct on (sub.entity_id, sub.note_id, sub.series_id)\n entity_id, note_id, series_id\nfrom\n(\nselect alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount,\ninv.name\nfrom public.portfolio_allocations alloc\nJOIN contributions contrib on contrib.id = alloc.note_id\nJOIN investments inv on inv.id = contrib.investment_id\nwhere entity_id = '\\x5787f132f50f7b03002cf835' and\nalloc.allocated_on <= dates.date\n) sub\n\nAnd wrapped inside the lateral:\n\n explain analyze\n select *\n from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ,\n current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,\n LATERAL (\n ... <SUB QUERY HERE> ...\n ) lat\n\nRun by itself injecting a hard coded value for dates.date, I get the\nexpected plan which uses a key index on contributions:\n\n Unique (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.053\nrows=2 loops=1)\n -> Sort (cost=14.54..14.54 rows=2 width=39) (actual\ntime=0.052..0.052 rows=2 loops=1)\n Sort Key: alloc.note_id, alloc.series_id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.25..14.53 rows=2 width=39) (actual\ntime=0.030..0.042 rows=2 loops=1)\n -> Nested Loop (cost=0.17..14.23 rows=2 width=52)\n(actual time=0.022..0.028 rows=2 loops=1)\n -> Index Scan using\nportfolio_allocations_entity_id_allocated_on_idx on\nportfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\n time=0.012..0.014\n Index Cond: ((entity_id =\n'\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n20:59:59.999+00'::timestamp with time zone))\n -> Index Scan using\ncontributions_id_accrue_from_idx on contributions contrib\n(cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005 rows=1\nloops=2)\n Index Cond: (id = alloc.note_id)\n -> Index Only Scan using investments_pkey on\ninvestments inv ( cost=0.08..0.15 rows=1 width=13) (actual\ntime=0.005..0.006 rows=1 loops=2)\n Index Cond: (id = contrib.investment_id)\n Heap Fetches: 2\n Planning time: 0.617 ms\n Execution time: 0.100 ms\n (15 rows)\n\nBut run in the lateral, it doesn't use the index:\n\n Nested Loop (cost=14.54..24.55 rows=2000 width=47) (actual\ntime=0.085..0.219 rows=534 loops=1)\n -> Function Scan on generate_series dates (cost=0.00..3.00\nrows=1000 width=8) (actual time=0.031..0.043 rows=267 loops=1)\n -> Materialize (cost=14.54..14.55 rows=2 width=39) (actual\ntime=0.000..0.000 rows=2 loops=267)\n -> Unique (cost=14.54..14.54 rows=2 width=39) (actual\ntime=0.052..0.053 rows=2 loops=1)\n -> Sort (cost=14.54..14.54 rows=2 width=39) (actual\ntime=0.051..0.052 rows=2 loops=1)\n Sort Key: alloc.note_id, alloc.series_id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.25..14.53 rows=2\nwidth=39) (actual time=0.029..0.041 rows=2 loops=1)\n -> Nested Loop (cost=0.17..14.23 rows=2\nwidth=52) (actual time=0.021..0.027 rows=2 loops=1)\n -> Index Scan using\n portfolio_allocations_entity_id_allocated_on_idx on\nportfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\ntime=0\n Index Cond: ((entity_id =\n '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n20:59:59.999+00'::timestamp with time zone))\n -> Index Scan using\ncontributions_id_accrue_from_idx on contributions contrib\n(cost=0.08..4.09 rows=1 width=26) ( actual time=0.005..0.005 rows=1 loo\n Index Cond: (id 
=\nalloc.note_id)\n                                 ->  Index Only Scan using\ninvestments_pkey on investments inv  ( cost=0.08..0.15 rows=1 width=13) (actual\ntime=0.005..0.006 rows=1 loops=2)\n                                       Index Cond: (id =\ncontrib.investment_id)\n                                       Heap Fetches: 2\n       Planning time: 0.718 ms\n       Execution time: 0.296 ms\n      (18 rows)\n\nFor reference, here are the indexes on the relevant tables:\n\nIndexes:\n    \"portfolio_allocations_entity_id_allocated_on_idx\" btree (entity_id,\nallocated_on DESC)\n    \"portfolio_allocations_note_id_allocated_on_idx\" btree (note_id,\nallocated_on DESC)\n    \"portfolio_allocations_pnsa\" btree (entity_id, note_id, series_id,\nallocated_on DESC)\n\nIndexes:\n    \"contributions_pkey\" PRIMARY KEY, btree (id)\n    \"contributions_id_accrue_from_idx\" btree (id,\nevents_earnings_accrue_from)\n\nI have a few questions here:\n - Why doesn't it use the primary key index in either case?\n - Why isn't it choosing portfolio_allocations_pnsa, which seems like it\nwould prevent it from having to sort?\n\nBest,\n~Alex\n", "msg_date": "Tue, 05 Dec 2017 18:04:27 +0000", "msg_from": "Alex Reece <[email protected]>", "msg_from_op": true, "msg_subject": "Different plan chosen when in lateral subquery" }, { "msg_contents": "Weird, when I deleted an erroneous index it started picking a reasonable\nplan.  This now works as expected, for posterity here is the bad plan:\n\n Nested Loop  (cost=21281.50..21323812.82 rows=5621000 width=47) (actual\ntime=171.648..7233.298 rows=85615 loops=1)\n\n   ->  Function Scan on generate_series dates  (cost=0.00..3.00 rows=1000\nwidth=8) (actual time=0.031..0.252 rows=267 loops=1)\n\n   ->  Unique  (cost=21281.50..21290.08 rows=5621 width=39) (actual\ntime=25.730..27.050 rows=321 loops=267)\n\n         ->  Sort  (cost=21281.50..21284.36 rows=5724 width=39) (actual\ntime=25.728..26.242 rows=6713 loops=267)\n\n               Sort Key: alloc.note_id, alloc.series_id\n\n               Sort Method: quicksort  Memory: 2220kB\n\n               ->  Nested Loop  (cost=10775.92..21210.05 rows=5724\nwidth=39) (actual time=1.663..21.938 rows=6713 loops=267)\n\n                     ->  Hash Join  (cost=10775.83..20355.61 rows=5724\nwidth=52) (actual time=1.657..5.980 rows=6713 loops=267)\n\n                           Hash Cond: (alloc.note_id = contrib.id)\n\n                           ->  Bitmap Heap Scan on portfolio_allocations\nalloc  (cost=69.82..9628.13 rows=5724 width=39) (actual time=1.010..2.278\nrows=6713 loops=267)\n\n                                 Recheck Cond: ((entity_id =\n'\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <=\ndate(dates.dates)))\n\n                                 Heap Blocks: exact=118074\n\n                                 ->  Bitmap Index Scan on\nportfolio_allocations_entity_id_allocated_on_idx  (cost=0.00..69.53\nrows=5724 width=0) (actual time=0.956..0.956 rows=6713 lo\n\n                                       Index Cond: ((entity_id =\n'\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <=\ndate(dates.dates)))\n\n                           ->  Hash  (cost=9464.85..9464.85 rows=354617\nwidth=26) (actual time=169.792..169.792 rows=354617 loops=1)\n\n                                 Buckets: 524288  Batches: 1  Memory Usage:\n24296kB\n\n                                 ->  Seq Scan on contributions contrib\n (cost=0.00..9464.85\nrows=354617 width=26) (actual time=0.007..83.246 rows=354617 loops=1)\n\n                     ->  Index Only Scan using investments_pkey on\ninvestments inv  (cost=0.08..0.15 rows=1 width=13) (actual\ntime=0.002..0.002 rows=1 loops=1792457)\n\n                           Index Cond: (id = contrib.investment_id)\n\n                           Heap Fetches: 1792457\n\n Planning time: 0.721 ms\n\n Execution time: 7236.507 ms\n\n\nOn Tue, Dec 5, 2017 at 10:04 AM Alex Reece <[email protected]> wrote:\n\n> I get very different plan chosen when my query is in a lateral subquery vs\n> standalone -- it doesn't use a key when joining on a table, instead opting\n> to do a hash join. 
Here is the query:\n>\n> select distinct on (sub.entity_id, sub.note_id, sub.series_id)\n> entity_id, note_id, series_id\n> from\n> (\n> select alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount,\n> inv.name\n> from public.portfolio_allocations alloc\n> JOIN contributions contrib on contrib.id = alloc.note_id\n> JOIN investments inv on inv.id = contrib.investment_id\n> where entity_id = '\\x5787f132f50f7b03002cf835' and\n> alloc.allocated_on <= dates.date\n> ) sub\n>\n> And wrapped inside the lateral:\n>\n> explain analyze\n> select *\n> from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ,\n> current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,\n> LATERAL (\n> ... <SUB QUERY HERE> ...\n> ) lat\n>\n> Run by itself injecting a hard coded value for dates.date, I get the\n> expected plan which uses a key index on contributions:\n>\n> Unique (cost=14.54..14.54 rows=2 width=39) (actual\n> time=0.052..0.053 rows=2 loops=1)\n> -> Sort (cost=14.54..14.54 rows=2 width=39) (actual\n> time=0.052..0.052 rows=2 loops=1)\n> Sort Key: alloc.note_id, alloc.series_id\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=0.25..14.53 rows=2 width=39) (actual\n> time=0.030..0.042 rows=2 loops=1)\n> -> Nested Loop (cost=0.17..14.23 rows=2 width=52)\n> (actual time=0.022..0.028 rows=2 loops=1)\n> -> Index Scan using\n> portfolio_allocations_entity_id_allocated_on_idx on\n> portfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\n> time=0.012..0.014\n> Index Cond: ((entity_id =\n> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n> 20:59:59.999+00'::timestamp with time zone))\n> -> Index Scan using\n> contributions_id_accrue_from_idx on contributions contrib\n> (cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005 rows=1\n> loops=2)\n> Index Cond: (id = alloc.note_id)\n> -> Index Only Scan using investments_pkey on\n> investments inv ( cost=0.08..0.15 rows=1 width=13) (actual\n> time=0.005..0.006 rows=1 loops=2)\n> Index Cond: (id = contrib.investment_id)\n> Heap Fetches: 2\n> Planning time: 0.617 ms\n> Execution time: 0.100 ms\n> (15 rows)\n>\n> But run in the lateral, it doesn't use the index:\n>\n> Nested Loop (cost=14.54..24.55 rows=2000 width=47) (actual\n> time=0.085..0.219 rows=534 loops=1)\n> -> Function Scan on generate_series dates (cost=0.00..3.00\n> rows=1000 width=8) (actual time=0.031..0.043 rows=267 loops=1)\n> -> Materialize (cost=14.54..14.55 rows=2 width=39) (actual\n> time=0.000..0.000 rows=2 loops=267)\n> -> Unique (cost=14.54..14.54 rows=2 width=39) (actual\n> time=0.052..0.053 rows=2 loops=1)\n> -> Sort (cost=14.54..14.54 rows=2 width=39) (actual\n> time=0.051..0.052 rows=2 loops=1)\n> Sort Key: alloc.note_id, alloc.series_id\n> Sort Method: quicksort Memory: 25kB\n> -> Nested Loop (cost=0.25..14.53 rows=2\n> width=39) (actual time=0.029..0.041 rows=2 loops=1)\n> -> Nested Loop (cost=0.17..14.23 rows=2\n> width=52) (actual time=0.021..0.027 rows=2 loops=1)\n> -> Index Scan using\n> portfolio_allocations_entity_id_allocated_on_idx on\n> portfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\n> time=0\n> Index Cond: ((entity_id =\n> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n> 20:59:59.999+00'::timestamp with time zone))\n> -> Index Scan using\n> contributions_id_accrue_from_idx on contributions contrib\n> (cost=0.08..4.09 rows=1 width=26) ( actual time=0.005..0.005 rows=1 loo\n> Index Cond: (id =\n> alloc.note_id)\n> -> Index Only Scan using\n> investments_pkey on 
investments inv ( cost=0.08..0.15 rows=1 width=13)\n> (actual time=0.005..0.006 rows=1 loops=2)\n> Index Cond: (id =\n> contrib.investment_id)\n> Heap Fetches: 2\n> Planning time: 0.718 ms\n> Execution time: 0.296 ms\n> (18 rows)\n>\n> For reference, here are the indexes on the relevant tables:\n>\n> Indexes:\n> \"portfolio_allocations_entity_id_allocated_on_idx\" btree (entity_id,\n> allocated_on DESC)\n> \"portfolio_allocations_note_id_allocated_on_idx\" btree (note_id,\n> allocated_on DESC)\n> \"portfolio_allocations_pnsa\" btree (entity_id, note_id, series_id,\n> allocated_on DESC)\n>\n> Indexes:\n> \"contributions_pkey\" PRIMARY KEY, btree (id)\n> \"contributions_id_accrue_from_idx\" btree (id,\n> events_earnings_accrue_from)\n>\n> I have a few questions here:\n> - Why doesn't it use the primary key index in either case?\n> - Why isn't it choosing portfolio_allocations_pnsa, which seems like it\n> would prevent it from having to sort?\n>\n> Best,\n> ~Alex\n>\n\nWeird, when I deleted an erroneous index it started picking a reasonable plan. This now works as expected, for posterity here is the bad plan:\n Nested Loop  (cost=21281.50..21323812.82 rows=5621000 width=47) (actual time=171.648..7233.298 rows=85615 loops=1)\n   ->  Function Scan on generate_series dates  (cost=0.00..3.00 rows=1000 width=8) (actual time=0.031..0.252 rows=267 loops=1)\n   ->  Unique  (cost=21281.50..21290.08 rows=5621 width=39) (actual time=25.730..27.050 rows=321 loops=267)\n         ->  Sort  (cost=21281.50..21284.36 rows=5724 width=39) (actual time=25.728..26.242 rows=6713 loops=267)\n               Sort Key: alloc.note_id, alloc.series_id\n               Sort Method: quicksort  Memory: 2220kB\n               ->  Nested Loop  (cost=10775.92..21210.05 rows=5724 width=39) (actual time=1.663..21.938 rows=6713 loops=267)\n                     ->  Hash Join  (cost=10775.83..20355.61 rows=5724 width=52) (actual time=1.657..5.980 rows=6713 loops=267)\n                           Hash Cond: (alloc.note_id = contrib.id)\n                           ->  Bitmap Heap Scan on portfolio_allocations alloc  (cost=69.82..9628.13 rows=5724 width=39) (actual time=1.010..2.278 rows=6713 loops=267)\n                                 Recheck Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n                                 Heap Blocks: exact=118074\n                                 ->  Bitmap Index Scan on portfolio_allocations_entity_id_allocated_on_idx  (cost=0.00..69.53 rows=5724 width=0) (actual time=0.956..0.956 rows=6713 lo\n                                       Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n                           ->  Hash  (cost=9464.85..9464.85 rows=354617 width=26) (actual time=169.792..169.792 rows=354617 loops=1)\n                                 Buckets: 524288  Batches: 1  Memory Usage: 24296kB\n                                 ->  Seq Scan on contributions contrib  (cost=0.00..9464.85 rows=354617 width=26) (actual time=0.007..83.246 rows=354617 loops=1)\n                     ->  Index Only Scan using investments_pkey on investments inv  (cost=0.08..0.15 rows=1 width=13) (actual time=0.002..0.002 rows=1 loops=1792457)\n                           Index Cond: (id = contrib.investment_id)\n                           Heap Fetches: 1792457\n Planning time: 0.721 ms\n Execution time: 7236.507 msOn Tue, Dec 5, 2017 at 10:04 AM Alex Reece <[email protected]> wrote:I get very different plan chosen 
when my query is in a lateral subquery vs standalone -- it doesn't use a key when joining on a table, instead opting to do a hash join. Here is the query: select distinct on (sub.entity_id, sub.note_id, sub.series_id)        entity_id, note_id, series_id from ( select alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount, inv.name from public.portfolio_allocations alloc JOIN contributions contrib on contrib.id = alloc.note_id JOIN investments inv on inv.id = contrib.investment_id where entity_id = '\\x5787f132f50f7b03002cf835' and  alloc.allocated_on <= dates.date ) subAnd wrapped inside the lateral:        explain analyze        select *        from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ,            current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,        LATERAL (         ... <SUB QUERY HERE> ...        ) latRun by itself injecting a hard coded value for dates.date, I get the expected plan which uses a key index on contributions:      Unique  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.053 rows=2 loops=1)         ->  Sort  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.052 rows=2 loops=1)               Sort Key: alloc.note_id, alloc.series_id               Sort Method: quicksort  Memory: 25kB               ->  Nested Loop  (cost=0.25..14.53 rows=2 width=39) (actual time=0.030..0.042 rows=2      loops=1)                     ->  Nested Loop  (cost=0.17..14.23 rows=2 width=52) (actual time=0.022..0.028       rows=2 loops=1)                           ->  Index Scan using portfolio_allocations_entity_id_allocated_on_idx on      portfolio_allocations alloc  (cost=0.09..6.05 rows=2 width=39) (actual     time=0.012..0.014                                  Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND      (allocated_on <= '2017-03-14 20:59:59.999+00'::timestamp with time   zone))                           ->  Index Scan using contributions_id_accrue_from_idx on contributions     contrib  (cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005    rows=1 loops=2)                                 Index Cond: (id = alloc.note_id)                     ->  Index Only Scan using investments_pkey on investments inv  (     cost=0.08..0.15 rows=1 width=13) (actual time=0.005..0.006 rows=1 loops=2)                           Index Cond: (id = contrib.investment_id)                           Heap Fetches: 2       Planning time: 0.617 ms       Execution time: 0.100 ms      (15 rows)But run in the lateral, it doesn't use the index:       Nested Loop  (cost=14.54..24.55 rows=2000 width=47) (actual time=0.085..0.219 rows=534      loops=1)         ->  Function Scan on generate_series dates  (cost=0.00..3.00 rows=1000 width=8) (actual      time=0.031..0.043 rows=267 loops=1)         ->  Materialize  (cost=14.54..14.55 rows=2 width=39) (actual time=0.000..0.000 rows=2     loops=267)               ->  Unique  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.053 rows=2       loops=1)                     ->  Sort  (cost=14.54..14.54 rows=2 width=39) (actual time=0.051..0.052 rows=2      loops=1)                           Sort Key: alloc.note_id, alloc.series_id                           Sort Method: quicksort  Memory: 25kB                           ->  Nested Loop  (cost=0.25..14.53 rows=2 width=39) (actual       time=0.029..0.041 rows=2 loops=1)                                 ->  Nested Loop  (cost=0.17..14.23 rows=2 width=52) (actual       time=0.021..0.027 rows=2 loops=1)                                       ->  
Index Scan using       portfolio_allocations_entity_id_allocated_on_idx on      portfolio_allocations alloc  (cost=0.09..6.05 rows=2     width=39) (actual time=0                                             Index Cond: ((entity_id =     '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on      <= '2017-03-14 20:59:59.999+00'::timestamp with time    zone))                                       ->  Index Scan using contributions_id_accrue_from_idx on       contributions contrib  (cost=0.08..4.09 rows=1 width=26) (     actual time=0.005..0.005 rows=1 loo                                             Index Cond: (id = alloc.note_id)                                 ->  Index Only Scan using investments_pkey on investments inv  (     cost=0.08..0.15 rows=1 width=13) (actual time=0.005..0.006 rows=1    loops=2)                                       Index Cond: (id = contrib.investment_id)                                       Heap Fetches: 2       Planning time: 0.718 ms       Execution time: 0.296 ms      (18 rows)For reference, here are the indexes on the relevant tables:Indexes:    \"portfolio_allocations_entity_id_allocated_on_idx\" btree (entity_id, allocated_on DESC)    \"portfolio_allocations_note_id_allocated_on_idx\" btree (note_id, allocated_on DESC)    \"portfolio_allocations_pnsa\" btree (entity_id, note_id, series_id, allocated_on DESC)Indexes:    \"contributions_pkey\" PRIMARY KEY, btree (id)    \"contributions_id_accrue_from_idx\" btree (id, events_earnings_accrue_from)I have a few questions here:  - Why doesn't it use the primary key index in either case?  - Why isn't it choosing portfolio_allocations_pnsa, which seems like it would prevent it from having to sort?Best,~Alex", "msg_date": "Tue, 05 Dec 2017 18:08:35 +0000", "msg_from": "Alex Reece <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different plan chosen when in lateral subquery" }, { "msg_contents": "Argh, so sorry for repeated posts; I'll be very careful to review them\nbefore posting. The \"good plan\" was the result of me hard coding '2017-03-14\n20:59:59.999+00'::timestamp of using dates.date inside the lateral\nsubquery. When I correctly use dates.date, it takes 7000ms instead of\n0.3ms. My questions still remain:\n\nI have a few questions here:\n - Why doesn't it use the primary key on contributions in either case,\npreferring contributions_id_accrue_from_idx or none at all?\n - Why isn't it choosing portfolio_allocations_pnsa, which seems like it\nwould prevent it from having to sort?\n - What information can I gather to answer these questions on my own?\n\n~Alex\n\nOn Tue, Dec 5, 2017 at 10:08 AM Alex Reece <[email protected]> wrote:\n\n> Weird, when I deleted an erroneous index it started picking a reasonable\n> plan. 
This now works as expected, for posterity here is the bad plan:\n>\n> Nested Loop (cost=21281.50..21323812.82 rows=5621000 width=47) (actual\n> time=171.648..7233.298 rows=85615 loops=1)\n>\n> -> Function Scan on generate_series dates (cost=0.00..3.00 rows=1000\n> width=8) (actual time=0.031..0.252 rows=267 loops=1)\n>\n> -> Unique (cost=21281.50..21290.08 rows=5621 width=39) (actual\n> time=25.730..27.050 rows=321 loops=267)\n>\n> -> Sort (cost=21281.50..21284.36 rows=5724 width=39) (actual\n> time=25.728..26.242 rows=6713 loops=267)\n>\n> Sort Key: alloc.note_id, alloc.series_id\n>\n> Sort Method: quicksort Memory: 2220kB\n>\n> -> Nested Loop (cost=10775.92..21210.05 rows=5724\n> width=39) (actual time=1.663..21.938 rows=6713 loops=267)\n>\n> -> Hash Join (cost=10775.83..20355.61 rows=5724\n> width=52) (actual time=1.657..5.980 rows=6713 loops=267)\n>\n> Hash Cond: (alloc.note_id = contrib.id)\n>\n> -> Bitmap Heap Scan on portfolio_allocations\n> alloc (cost=69.82..9628.13 rows=5724 width=39) (actual time=1.010..2.278\n> rows=6713 loops=267)\n>\n> Recheck Cond: ((entity_id =\n> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <=\n> date(dates.dates)))\n>\n> Heap Blocks: exact=118074\n>\n> -> Bitmap Index Scan on\n> portfolio_allocations_entity_id_allocated_on_idx (cost=0.00..69.53\n> rows=5724 width=0) (actual time=0.956..0.956 rows=6713 lo\n>\n> Index Cond: ((entity_id =\n> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <=\n> date(dates.dates)))\n>\n> -> Hash (cost=9464.85..9464.85 rows=354617\n> width=26) (actual time=169.792..169.792 rows=354617 loops=1)\n>\n> Buckets: 524288 Batches: 1 Memory\n> Usage: 24296kB\n>\n> -> Seq Scan on contributions contrib (cost=0.00..9464.85\n> rows=354617 width=26) (actual time=0.007..83.246 rows=354617 loops=1)\n>\n> -> Index Only Scan using investments_pkey on\n> investments inv (cost=0.08..0.15 rows=1 width=13) (actual\n> time=0.002..0.002 rows=1 loops=1792457)\n>\n> Index Cond: (id = contrib.investment_id)\n>\n> Heap Fetches: 1792457\n>\n> Planning time: 0.721 ms\n>\n> Execution time: 7236.507 ms\n>\n>\n> On Tue, Dec 5, 2017 at 10:04 AM Alex Reece <[email protected]> wrote:\n>\n>> I get very different plan chosen when my query is in a lateral subquery\n>> vs standalone -- it doesn't use a key when joining on a table, instead\n>> opting to do a hash join. Here is the query:\n>>\n>> select distinct on (sub.entity_id, sub.note_id, sub.series_id)\n>> entity_id, note_id, series_id\n>> from\n>> (\n>> select alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount,\n>> inv.name\n>> from public.portfolio_allocations alloc\n>> JOIN contributions contrib on contrib.id = alloc.note_id\n>> JOIN investments inv on inv.id = contrib.investment_id\n>> where entity_id = '\\x5787f132f50f7b03002cf835' and\n>> alloc.allocated_on <= dates.date\n>> ) sub\n>>\n>> And wrapped inside the lateral:\n>>\n>> explain analyze\n>> select *\n>> from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ,\n>> current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,\n>> LATERAL (\n>> ... 
<SUB QUERY HERE> ...\n>> ) lat\n>>\n>> Run by itself injecting a hard coded value for dates.date, I get the\n>> expected plan which uses a key index on contributions:\n>>\n>> Unique (cost=14.54..14.54 rows=2 width=39) (actual\n>> time=0.052..0.053 rows=2 loops=1)\n>> -> Sort (cost=14.54..14.54 rows=2 width=39) (actual\n>> time=0.052..0.052 rows=2 loops=1)\n>> Sort Key: alloc.note_id, alloc.series_id\n>> Sort Method: quicksort Memory: 25kB\n>> -> Nested Loop (cost=0.25..14.53 rows=2 width=39)\n>> (actual time=0.030..0.042 rows=2 loops=1)\n>> -> Nested Loop (cost=0.17..14.23 rows=2 width=52)\n>> (actual time=0.022..0.028 rows=2 loops=1)\n>> -> Index Scan using\n>> portfolio_allocations_entity_id_allocated_on_idx on\n>> portfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\n>> time=0.012..0.014\n>> Index Cond: ((entity_id =\n>> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n>> 20:59:59.999+00'::timestamp with time zone))\n>> -> Index Scan using\n>> contributions_id_accrue_from_idx on contributions contrib\n>> (cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005 rows=1\n>> loops=2)\n>> Index Cond: (id = alloc.note_id)\n>> -> Index Only Scan using investments_pkey on\n>> investments inv ( cost=0.08..0.15 rows=1 width=13) (actual\n>> time=0.005..0.006 rows=1 loops=2)\n>> Index Cond: (id = contrib.investment_id)\n>> Heap Fetches: 2\n>> Planning time: 0.617 ms\n>> Execution time: 0.100 ms\n>> (15 rows)\n>>\n>> But run in the lateral, it doesn't use the index:\n>>\n>> Nested Loop (cost=14.54..24.55 rows=2000 width=47) (actual\n>> time=0.085..0.219 rows=534 loops=1)\n>> -> Function Scan on generate_series dates (cost=0.00..3.00\n>> rows=1000 width=8) (actual time=0.031..0.043 rows=267 loops=1)\n>> -> Materialize (cost=14.54..14.55 rows=2 width=39) (actual\n>> time=0.000..0.000 rows=2 loops=267)\n>> -> Unique (cost=14.54..14.54 rows=2 width=39) (actual\n>> time=0.052..0.053 rows=2 loops=1)\n>> -> Sort (cost=14.54..14.54 rows=2 width=39)\n>> (actual time=0.051..0.052 rows=2 loops=1)\n>> Sort Key: alloc.note_id, alloc.series_id\n>> Sort Method: quicksort Memory: 25kB\n>> -> Nested Loop (cost=0.25..14.53 rows=2\n>> width=39) (actual time=0.029..0.041 rows=2 loops=1)\n>> -> Nested Loop (cost=0.17..14.23\n>> rows=2 width=52) (actual time=0.021..0.027 rows=2 loops=1)\n>> -> Index Scan using\n>> portfolio_allocations_entity_id_allocated_on_idx on\n>> portfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual\n>> time=0\n>> Index Cond: ((entity_id =\n>> '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14\n>> 20:59:59.999+00'::timestamp with time zone))\n>> -> Index Scan using\n>> contributions_id_accrue_from_idx on contributions contrib\n>> (cost=0.08..4.09 rows=1 width=26) ( actual time=0.005..0.005 rows=1 loo\n>> Index Cond: (id =\n>> alloc.note_id)\n>> -> Index Only Scan using\n>> investments_pkey on investments inv ( cost=0.08..0.15 rows=1 width=13)\n>> (actual time=0.005..0.006 rows=1 loops=2)\n>> Index Cond: (id =\n>> contrib.investment_id)\n>> Heap Fetches: 2\n>> Planning time: 0.718 ms\n>> Execution time: 0.296 ms\n>> (18 rows)\n>>\n>> For reference, here are the indexes on the relevant tables:\n>>\n>> Indexes:\n>> \"portfolio_allocations_entity_id_allocated_on_idx\" btree (entity_id,\n>> allocated_on DESC)\n>> \"portfolio_allocations_note_id_allocated_on_idx\" btree (note_id,\n>> allocated_on DESC)\n>> \"portfolio_allocations_pnsa\" btree (entity_id, note_id, series_id,\n>> allocated_on DESC)\n>>\n>> Indexes:\n>> 
\"contributions_pkey\" PRIMARY KEY, btree (id)\n>> \"contributions_id_accrue_from_idx\" btree (id,\n>> events_earnings_accrue_from)\n>>\n>> I have a few questions here:\n>> - Why doesn't it use the primary key index in either case?\n>> - Why isn't it choosing portfolio_allocations_pnsa, which seems like it\n>> would prevent it from having to sort?\n>>\n>> Best,\n>> ~Alex\n>>\n>\n\nArgh, so sorry for repeated posts; I'll be very careful to review them before posting. The \"good plan\" was the result of me hard coding '2017-03-14 20:59:59.999+00'::timestamp of using dates.date inside the lateral subquery. When I correctly use dates.date, it takes 7000ms instead of 0.3ms. My questions still remain:I have a few questions here:  - Why doesn't it use the primary key on contributions in either case, preferring contributions_id_accrue_from_idx or none at all?  - Why isn't it choosing portfolio_allocations_pnsa, which seems like it would prevent it from having to sort?  - What information can I gather to answer these questions on my own?~AlexOn Tue, Dec 5, 2017 at 10:08 AM Alex Reece <[email protected]> wrote:Weird, when I deleted an erroneous index it started picking a reasonable plan. This now works as expected, for posterity here is the bad plan:\n Nested Loop  (cost=21281.50..21323812.82 rows=5621000 width=47) (actual time=171.648..7233.298 rows=85615 loops=1)\n   ->  Function Scan on generate_series dates  (cost=0.00..3.00 rows=1000 width=8) (actual time=0.031..0.252 rows=267 loops=1)\n   ->  Unique  (cost=21281.50..21290.08 rows=5621 width=39) (actual time=25.730..27.050 rows=321 loops=267)\n         ->  Sort  (cost=21281.50..21284.36 rows=5724 width=39) (actual time=25.728..26.242 rows=6713 loops=267)\n               Sort Key: alloc.note_id, alloc.series_id\n               Sort Method: quicksort  Memory: 2220kB\n               ->  Nested Loop  (cost=10775.92..21210.05 rows=5724 width=39) (actual time=1.663..21.938 rows=6713 loops=267)\n                     ->  Hash Join  (cost=10775.83..20355.61 rows=5724 width=52) (actual time=1.657..5.980 rows=6713 loops=267)\n                           Hash Cond: (alloc.note_id = contrib.id)\n                           ->  Bitmap Heap Scan on portfolio_allocations alloc  (cost=69.82..9628.13 rows=5724 width=39) (actual time=1.010..2.278 rows=6713 loops=267)\n                                 Recheck Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n                                 Heap Blocks: exact=118074\n                                 ->  Bitmap Index Scan on portfolio_allocations_entity_id_allocated_on_idx  (cost=0.00..69.53 rows=5724 width=0) (actual time=0.956..0.956 rows=6713 lo\n                                       Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n                           ->  Hash  (cost=9464.85..9464.85 rows=354617 width=26) (actual time=169.792..169.792 rows=354617 loops=1)\n                                 Buckets: 524288  Batches: 1  Memory Usage: 24296kB\n                                 ->  Seq Scan on contributions contrib  (cost=0.00..9464.85 rows=354617 width=26) (actual time=0.007..83.246 rows=354617 loops=1)\n                     ->  Index Only Scan using investments_pkey on investments inv  (cost=0.08..0.15 rows=1 width=13) (actual time=0.002..0.002 rows=1 loops=1792457)\n                           Index Cond: (id = contrib.investment_id)\n                           Heap Fetches: 1792457\n Planning time: 0.721 
ms\n Execution time: 7236.507 msOn Tue, Dec 5, 2017 at 10:04 AM Alex Reece <[email protected]> wrote:I get very different plan chosen when my query is in a lateral subquery vs standalone -- it doesn't use a key when joining on a table, instead opting to do a hash join. Here is the query: select distinct on (sub.entity_id, sub.note_id, sub.series_id)        entity_id, note_id, series_id from ( select alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount, inv.name from public.portfolio_allocations alloc JOIN contributions contrib on contrib.id = alloc.note_id JOIN investments inv on inv.id = contrib.investment_id where entity_id = '\\x5787f132f50f7b03002cf835' and  alloc.allocated_on <= dates.date ) subAnd wrapped inside the lateral:        explain analyze        select *        from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ,            current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,        LATERAL (         ... <SUB QUERY HERE> ...        ) latRun by itself injecting a hard coded value for dates.date, I get the expected plan which uses a key index on contributions:      Unique  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.053 rows=2 loops=1)         ->  Sort  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.052 rows=2 loops=1)               Sort Key: alloc.note_id, alloc.series_id               Sort Method: quicksort  Memory: 25kB               ->  Nested Loop  (cost=0.25..14.53 rows=2 width=39) (actual time=0.030..0.042 rows=2      loops=1)                     ->  Nested Loop  (cost=0.17..14.23 rows=2 width=52) (actual time=0.022..0.028       rows=2 loops=1)                           ->  Index Scan using portfolio_allocations_entity_id_allocated_on_idx on      portfolio_allocations alloc  (cost=0.09..6.05 rows=2 width=39) (actual     time=0.012..0.014                                  Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND      (allocated_on <= '2017-03-14 20:59:59.999+00'::timestamp with time   zone))                           ->  Index Scan using contributions_id_accrue_from_idx on contributions     contrib  (cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005    rows=1 loops=2)                                 Index Cond: (id = alloc.note_id)                     ->  Index Only Scan using investments_pkey on investments inv  (     cost=0.08..0.15 rows=1 width=13) (actual time=0.005..0.006 rows=1 loops=2)                           Index Cond: (id = contrib.investment_id)                           Heap Fetches: 2       Planning time: 0.617 ms       Execution time: 0.100 ms      (15 rows)But run in the lateral, it doesn't use the index:       Nested Loop  (cost=14.54..24.55 rows=2000 width=47) (actual time=0.085..0.219 rows=534      loops=1)         ->  Function Scan on generate_series dates  (cost=0.00..3.00 rows=1000 width=8) (actual      time=0.031..0.043 rows=267 loops=1)         ->  Materialize  (cost=14.54..14.55 rows=2 width=39) (actual time=0.000..0.000 rows=2     loops=267)               ->  Unique  (cost=14.54..14.54 rows=2 width=39) (actual time=0.052..0.053 rows=2       loops=1)                     ->  Sort  (cost=14.54..14.54 rows=2 width=39) (actual time=0.051..0.052 rows=2      loops=1)                           Sort Key: alloc.note_id, alloc.series_id                           Sort Method: quicksort  Memory: 25kB                           ->  Nested Loop  (cost=0.25..14.53 rows=2 width=39) (actual       time=0.029..0.041 rows=2 loops=1)                                 ->  Nested 
Loop  (cost=0.17..14.23 rows=2 width=52) (actual       time=0.021..0.027 rows=2 loops=1)                                       ->  Index Scan using       portfolio_allocations_entity_id_allocated_on_idx on      portfolio_allocations alloc  (cost=0.09..6.05 rows=2     width=39) (actual time=0                                             Index Cond: ((entity_id =     '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on      <= '2017-03-14 20:59:59.999+00'::timestamp with time    zone))                                       ->  Index Scan using contributions_id_accrue_from_idx on       contributions contrib  (cost=0.08..4.09 rows=1 width=26) (     actual time=0.005..0.005 rows=1 loo                                             Index Cond: (id = alloc.note_id)                                 ->  Index Only Scan using investments_pkey on investments inv  (     cost=0.08..0.15 rows=1 width=13) (actual time=0.005..0.006 rows=1    loops=2)                                       Index Cond: (id = contrib.investment_id)                                       Heap Fetches: 2       Planning time: 0.718 ms       Execution time: 0.296 ms      (18 rows)For reference, here are the indexes on the relevant tables:Indexes:    \"portfolio_allocations_entity_id_allocated_on_idx\" btree (entity_id, allocated_on DESC)    \"portfolio_allocations_note_id_allocated_on_idx\" btree (note_id, allocated_on DESC)    \"portfolio_allocations_pnsa\" btree (entity_id, note_id, series_id, allocated_on DESC)Indexes:    \"contributions_pkey\" PRIMARY KEY, btree (id)    \"contributions_id_accrue_from_idx\" btree (id, events_earnings_accrue_from)I have a few questions here:  - Why doesn't it use the primary key index in either case?  - Why isn't it choosing portfolio_allocations_pnsa, which seems like it would prevent it from having to sort?Best,~Alex", "msg_date": "Tue, 05 Dec 2017 18:16:15 +0000", "msg_from": "Alex Reece <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different plan chosen when in lateral subquery" }, { "msg_contents": "Alex Reece wrote:\n> I get very different plan chosen when my query is in a lateral subquery vs standalone --\n> it doesn't use a key when joining on a table, instead opting to do a hash join. Here is the query:\n> \n> \tselect distinct on (sub.entity_id, sub.note_id, sub.series_id)\n> \t entity_id, note_id, series_id\n> \tfrom\n> \t(\n> \t\tselect alloc.entity_id, alloc.note_id, alloc.series_id, alloc.amount, inv.name\n> \t\tfrom public.portfolio_allocations alloc\n> \t\tJOIN contributions contrib on contrib.id = alloc.note_id\n> \t\tJOIN investments inv on inv.id = contrib.investment_id\n> \t\twhere entity_id = '\\x5787f132f50f7b03002cf835' and \n> \t\talloc.allocated_on <= dates.date\n> \t) sub\n> \n> And wrapped inside the lateral:\n> \n> explain analyze\n> select *\n> from generate_series('2017-03-14 20:59:59.999'::TIMESTAMPTZ, current_timestamp::TIMESTAMP + INTERVAL '1 day', '24 hours') dates,\n> LATERAL (\n> \t... 
<SUB QUERY HERE> ...\n> ) lat\n> \n> Run by itself injecting a hard coded value for dates.date, I get the expected plan which uses a key index on contributions:\n\n[...]\n\n> -> Nested Loop (cost=0.17..14.23 rows=2 width=52) (actual time=0.022..0.028 rows=2 loops=1)\n> -> Index Scan using portfolio_allocations_entity_id_allocated_on_idx on portfolio_allocations alloc (cost=0.09..6.05 rows=2 width=39) (actual time=0.012..0.014 \n> Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= '2017-03-14 20:59:59.999+00'::timestamp with time zone))\n> -> Index Scan using contributions_id_accrue_from_idx on contributions contrib (cost=0.08..4.09 rows=1 width=26) (actual time=0.005..0.005 rows=1 loops=2)\n> Index Cond: (id = alloc.note_id)\n\n[...]\n\n> But run in the lateral, it doesn't use the index:\n\n[...]\n\n> -> Hash Join (cost=10775.83..20355.61 rows=5724 width=52) (actual time=1.657..5.980 rows=6713 loops=267)\n> Hash Cond: (alloc.note_id = contrib.id)\n> -> Bitmap Heap Scan on portfolio_allocations alloc (cost=69.82..9628.13 rows=5724 width=39) (actual time=1.010..2.278 rows=6713 loops=267)\n> Recheck Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n> Heap Blocks: exact=118074\n> -> Bitmap Index Scan on portfolio_allocations_entity_id_allocated_on_idx (cost=0.00..69.53 rows=5724 width=0) (actual time=0.956..0.956 rows=6713 lo\n> Index Cond: ((entity_id = '\\x5787f132f50f7b03002cf835'::bytea) AND (allocated_on <= date(dates.dates)))\n> -> Hash (cost=9464.85..9464.85 rows=354617 width=26) (actual time=169.792..169.792 rows=354617 loops=1)\n> Buckets: 524288 Batches: 1 Memory Usage: 24296kB\n> -> Seq Scan on contributions contrib (cost=0.00..9464.85 rows=354617 width=26) (actual time=0.007..83.246 rows=354617 loops=1)\n\n[...]\n\n> I have a few questions here:\n> - Why doesn't it use the primary key index in either case?\n\nI don't know about the first query; perhaps the primary key index is fragmented.\nCompare the size of the indexes on disk.\nIn the second query a sequential scan is used because PostgreSQL chooses a hash join.\nThat choice is made because the index scans returns 6713 rows rather than the 2\nfrom the first query, probably because the date is different.\n\n> - Why isn't it choosing portfolio_allocations_pnsa, which seems like it would prevent it from having to sort?\n\nIn a bitmap index scan, the table is scanned in physical order, so the result\nis not sorted in index order.\nI don't know if PostgreSQL is smart enough to figure out that it could use an index\nscan and preserve the order through the joins to obviate the sort.\nYou could try to set enable_bitmapscan=off and see if things are different then.\nPerhaps the slower index scan would outweigh the advantage of avoiding the sort.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 06 Dec 2017 09:20:38 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different plan chosen when in lateral subquery" } ]
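
A concrete way to try the enable_bitmapscan suggestion above, and to compare the
candidate indexes on disk, is sketched below. This is an illustration added alongside
the thread, not a query any participant posted: it reuses the table, column and index
names quoted above, adds a column alias dates(date) so that dates.date resolves, and
uses SET LOCAL so the planner override stays scoped to a single transaction.

    BEGIN;
    SET LOCAL enable_bitmapscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM generate_series('2017-03-14 20:59:59.999'::timestamptz,
                         current_timestamp::timestamp + INTERVAL '1 day',
                         '24 hours') AS dates(date),
    LATERAL (
        SELECT DISTINCT ON (sub.entity_id, sub.note_id, sub.series_id)
               entity_id, note_id, series_id
        FROM (
            SELECT alloc.entity_id, alloc.note_id, alloc.series_id,
                   alloc.amount, inv.name
            FROM public.portfolio_allocations alloc
            JOIN contributions contrib ON contrib.id = alloc.note_id
            JOIN investments inv ON inv.id = contrib.investment_id
            WHERE alloc.entity_id = '\x5787f132f50f7b03002cf835'
              AND alloc.allocated_on <= dates.date
        ) sub
    ) lat;
    ROLLBACK;

    -- index sizes on disk, per the advice to compare the indexes
    SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS on_disk
    FROM pg_class
    WHERE relname IN ('contributions_pkey',
                      'contributions_id_accrue_from_idx',
                      'portfolio_allocations_pnsa');

If the sort disappears and the runtime improves with bitmap scans disabled, that points
at the bitmap heap scan (which returns rows in physical, not index, order) as the reason
the pre-sorted portfolio_allocations_pnsa index is not being exploited.
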
[ { "msg_contents": "I could reproduce part of the things I described earlier in this thread. A\nguy named Andriy Senyshyn mailed me after reading this thread here (he\ncould somehow not join the mailing list) and observed a difference when\nissuing \"SET ROLE\" as user postgres and as a non-superuser.\n\nWhen I connect as superuser postgres to mydb and execute a \"SET ROLE\"\nthings are pretty fast:\n\n\n$ PGOPTIONS='-c client-min-messages=DEBUG5' psql -U postgres mydb\nDEBUG: CommitTransaction\nDEBUG: name: unnamed; blockState: STARTED; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\npsql (9.6.6)\nType \"help\" for help.\n\nmagicline=# \\timing\nTiming is on.\nmagicline=# SET ROLE tenant1337;\nDEBUG: StartTransactionCommand\nDEBUG: StartTransaction\nDEBUG: name: unnamed; blockState: DEFAULT; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\nDEBUG: ProcessUtility\nDEBUG: CommitTransactionCommand\nDEBUG: CommitTransaction\nDEBUG: name: unnamed; blockState: STARTED; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\nSET\nTime: 0.968 ms\nmagicline=>\n\n\nWhen I connect as user admin (non-superuser with NOINHERIT attribute) to\nmydb, the first \"SET ROLE\" statement is always quite slow in comparison to\nthe former \"SET ROLE\" statement executed by superuser postgres:\n\n\n$ PGOPTIONS='-c client-min-messages=DEBUG5' psql -U admin mydb\nDEBUG: CommitTransaction\nDEBUG: name: unnamed; blockState: STARTED; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\npsql (9.6.6)\nType \"help\" for help.\n\nmagicline=> \\timing\nTiming is on.\nmagicline=> SET ROLE tenant1337;\nDEBUG: StartTransactionCommand\nDEBUG: StartTransaction\nDEBUG: name: unnamed; blockState: DEFAULT; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\nDEBUG: ProcessUtility\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 17 tups, 8 buckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 33 tups, 16\nbuckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 65 tups, 32\nbuckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 129 tups, 64\nbuckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 257 tups, 128\nbuckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 513 tups, 256\nbuckets\nDEBUG: rehashing catalog cache id 8 for pg_auth_members; 1025 tups, 512\nbuckets\nDEBUG: CommitTransactionCommand\nDEBUG: CommitTransaction\nDEBUG: name: unnamed; blockState: STARTED; state: INPROGR,\nxid/subid/cid: 0/1/0, nestlvl: 1, children:\nSET\nTime: 31.858 ms\nmagicline=>\n\n\nSubsequent \"SET ROLE\" calls in the above session of user admin are pretty\nfast (below 1 ms).\n\nI further wonder what those log statements \"rehashing catalog cache...\" do\nand if they are the cause of the slow execution.\n\nSo this does not reproduce my observed query times >2000ms but is maybe a\nhint for other things that might be worth looking into.\n\nRegards,\nUlf\n\n2017-11-08 10:31 GMT+01:00 Ulf Lohbrügge <[email protected]>:\n\n> 2017-11-08 0:45 GMT+01:00 Tom Lane <[email protected]>:\n>\n>> =?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n>> > I just ran \"check_postgres.pl --action=bloat\" and got the following\n>> output:\n>> > ...\n>> > Looks fine, doesn't it?\n>>\n>> A possible explanation is that something is taking an exclusive lock\n>> on some system catalog and holding it for a second or two. 
If so,\n>> turning on log_lock_waits might provide some useful info.\n>>\n>> regards, tom lane\n>>\n>\n> I just checked my configuration and found out that \"log_lock_waits\" was\n> already enabled.\n>\n> Unfortunately there is no log output of locks when those long running \"SET\n> ROLE\" statements occur.\n>\n> Regards,\n> Ulf\n>\n>\n\nI could reproduce part of the things I described earlier in this thread. A guy named Andriy Senyshyn mailed me after reading this thread here (he could somehow not join the mailing list) and observed a difference when issuing \"SET ROLE\" as user postgres and as a non-superuser.When I connect as superuser postgres to mydb and execute a \"SET ROLE\" things are pretty fast:$ PGOPTIONS='-c client-min-messages=DEBUG5' psql -U postgres mydbDEBUG:  CommitTransactionDEBUG:  name: unnamed; blockState:       STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:psql (9.6.6)Type \"help\" for help.magicline=# \\timingTiming is on.magicline=# SET ROLE tenant1337;DEBUG:  StartTransactionCommandDEBUG:  StartTransactionDEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:DEBUG:  ProcessUtilityDEBUG:  CommitTransactionCommandDEBUG:  CommitTransactionDEBUG:  name: unnamed; blockState:       STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:SETTime: 0.968 msmagicline=>When I connect as user admin (non-superuser with NOINHERIT attribute) to mydb, the first \"SET ROLE\" statement is always quite slow in comparison to the former \"SET ROLE\" statement executed by superuser postgres:$ PGOPTIONS='-c client-min-messages=DEBUG5' psql -U admin mydbDEBUG:  CommitTransactionDEBUG:  name: unnamed; blockState:       STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:psql (9.6.6)Type \"help\" for help.magicline=> \\timingTiming is on.magicline=> SET ROLE tenant1337;DEBUG:  StartTransactionCommandDEBUG:  StartTransactionDEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:DEBUG:  ProcessUtilityDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 17 tups, 8 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 33 tups, 16 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 65 tups, 32 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 129 tups, 64 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 257 tups, 128 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 513 tups, 256 bucketsDEBUG:  rehashing catalog cache id 8 for pg_auth_members; 1025 tups, 512 bucketsDEBUG:  CommitTransactionCommandDEBUG:  CommitTransactionDEBUG:  name: unnamed; blockState:       STARTED; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:SETTime: 31.858 msmagicline=>Subsequent \"SET ROLE\" calls in the above session of user admin are pretty fast (below 1 ms).I further wonder what those log statements \"rehashing catalog cache...\" do and if they are the cause of the slow execution.So this does not reproduce my observed query times >2000ms but is maybe a hint for other things that might be worth looking into.Regards,Ulf2017-11-08 10:31 GMT+01:00 Ulf Lohbrügge <[email protected]>:2017-11-08 0:45 GMT+01:00 Tom Lane <[email protected]>:=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I just ran \"check_postgres.pl --action=bloat\" and got the following output:\n> ...\n> Looks fine, doesn't it?\n\nA possible explanation is that something is taking an exclusive 
lock\non some system catalog and holding it for a second or two.  If so,\nturning on log_lock_waits might provide some useful info.\n\n                        regards, tom laneI just checked my configuration and found out that \"log_lock_waits\" was already enabled.Unfortunately there is no log output of locks when those long running \"SET ROLE\" statements occur.Regards,Ulf", "msg_date": "Thu, 7 Dec 2017 13:54:15 +0100", "msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Slow execution of SET ROLE,\n SET search_path and RESET ROLE" }, { "msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I could reproduce part of the things I described earlier in this thread. A\n> guy named Andriy Senyshyn mailed me after reading this thread here (he\n> could somehow not join the mailing list) and observed a difference when\n> issuing \"SET ROLE\" as user postgres and as a non-superuser.\n\nThis isn't particularly surprising in itself. When we know that the\nsession user is a superuser, SET ROLE just succeeds immediately.\nOtherwise we have to determine whether the SET is allowed, ie, is\nthe session user a member of the specified role.\n\nIt looks like the first time such a question is asked within a session,\nwe build and cache a list of all the roles the session user is a member\nof (directly or indirectly). That's what's taking the time here ---\napparently in your test case, the \"admin\" role is a member of a whole lot\nof roles?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 07 Dec 2017 11:01:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow execution of SET ROLE,\n SET search_path and RESET ROLE" }, { "msg_contents": "2017-12-07 17:01 GMT+01:00 Tom Lane <[email protected]>:\n\n> =?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> > I could reproduce part of the things I described earlier in this thread.\n> A\n> > guy named Andriy Senyshyn mailed me after reading this thread here (he\n> > could somehow not join the mailing list) and observed a difference when\n> > issuing \"SET ROLE\" as user postgres and as a non-superuser.\n>\n> This isn't particularly surprising in itself. When we know that the\n> session user is a superuser, SET ROLE just succeeds immediately.\n> Otherwise we have to determine whether the SET is allowed, ie, is\n> the session user a member of the specified role.\n>\n> It looks like the first time such a question is asked within a session,\n> we build and cache a list of all the roles the session user is a member\n> of (directly or indirectly). That's what's taking the time here ---\n> apparently in your test case, the \"admin\" role is a member of a whole lot\n> of roles?\n>\n\nYes, the user \"admin\" is member of more than 1k roles.\n\nSo this cache will not invalidate during the lifetime of the session unless\na new role is added, I guess?\n\nIs there any locking involved when this cache gets invalidated? Could this\nbe a source for my earlier observed slow executions?\n\nRegards,\nUlf\n\n2017-12-07 17:01 GMT+01:00 Tom Lane <[email protected]>:=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> I could reproduce part of the things I described earlier in this thread. 
A\n> guy named Andriy Senyshyn mailed me after reading this thread here (he\n> could somehow not join the mailing list) and observed a difference when\n> issuing \"SET ROLE\" as user postgres and as a non-superuser.\n\nThis isn't particularly surprising in itself.  When we know that the\nsession user is a superuser, SET ROLE just succeeds immediately.\nOtherwise we have to determine whether the SET is allowed, ie, is\nthe session user a member of the specified role.\n\nIt looks like the first time such a question is asked within a session,\nwe build and cache a list of all the roles the session user is a member\nof (directly or indirectly).  That's what's taking the time here ---\napparently in your test case, the \"admin\" role is a member of a whole lot\nof roles?Yes, the user \"admin\" is member of more than 1k roles.So this cache will not invalidate during the lifetime of the session unless a new role is added, I guess?Is there any locking involved when this cache gets invalidated? Could this be a source for my earlier observed slow executions?Regards,Ulf", "msg_date": "Thu, 7 Dec 2017 17:15:34 +0100", "msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Slow execution of SET ROLE,\n SET search_path and RESET ROLE" }, { "msg_contents": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]> writes:\n> 2017-12-07 17:01 GMT+01:00 Tom Lane <[email protected]>:\n>> It looks like the first time such a question is asked within a session,\n>> we build and cache a list of all the roles the session user is a member\n>> of (directly or indirectly). That's what's taking the time here ---\n>> apparently in your test case, the \"admin\" role is a member of a whole lot\n>> of roles?\n\n> Yes, the user \"admin\" is member of more than 1k roles.\n\n> So this cache will not invalidate during the lifetime of the session unless\n> a new role is added, I guess?\n\nIt looks like any update to the role membership catalog (pg_auth_members)\ninvalidates that cache. So basically a \"GRANT role\" or \"REVOKE role\"\nwould do it.\n\n> Is there any locking involved when this cache gets invalidated? Could this\n> be a source for my earlier observed slow executions?\n\nThis particular aspect of things doesn't seem like such a problem to me,\nbut it's certainly possible that there are other aspects that get\nunreasonably slow when there are that many role memberships involved.\nDon't see what it'd have to do with SET SEARCH_PATH, though. Or RESET\nROLE; that doesn't require any permission checks, either.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 07 Dec 2017 11:38:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow execution of SET ROLE,\n SET search_path and RESET ROLE" } ]
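
To see how large the membership list is that gets built and cached on that first
SET ROLE, the sketch below walks pg_auth_members recursively; it is an illustration
added next to the thread rather than something posted in it, and 'admin' is simply
the login role discussed above (substitute any role name). It computes roughly the
set Tom describes: every role the session user is a member of, directly or indirectly.

    WITH RECURSIVE memberships AS (
        -- roles granted directly to the login role
        SELECT roleid
        FROM pg_auth_members
        WHERE member = 'admin'::regrole
      UNION
        -- roles reached indirectly through further grants
        SELECT am.roleid
        FROM pg_auth_members am
        JOIN memberships m ON am.member = m.roleid
    )
    SELECT count(*) AS roles_member_of
    FROM memberships;

On the installation described above this should report a count well over a thousand
for admin. Since any GRANT or REVOKE of role membership invalidates the cached list,
the next non-superuser SET ROLE in each open session has to rebuild it, which is the
cost seen on the slow first call.
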
[ { "msg_contents": "Hi experts!\n\nI read this nice article about Understanding EXPLAIN [1] weeks ago that\nopened my mind about the tool, but it seems no enough to explain a lot of\nplans that I see in this list.\n\nI often read responses to a plan that are not covered by the article.\n\nI need/want to know EXPLAIN better.\n\nCan you kindly advise me a good reading about advanced EXPLAIN?\n\nThank you!\n\n\n[1] http://www.dalibo.org/_media/understanding_explain.pdf\n\nFlávio Henrique\n\nHi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?Thank you![1] http://www.dalibo.org/_media/understanding_explain.pdfFlávio Henrique", "msg_date": "Thu, 7 Dec 2017 23:12:16 -0200", "msg_from": "=?UTF-8?Q?Fl=C3=A1vio_Henrique?= <[email protected]>", "msg_from_op": true, "msg_subject": "Learning EXPLAIN" }, { "msg_contents": "Hi,\n\n2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:\n\n> Hi experts!\n>\n> I read this nice article about Understanding EXPLAIN [1] weeks ago that\n> opened my mind about the tool, but it seems no enough to explain a lot of\n> plans that I see in this list.\n>\n\nThanks.\n\nI often read responses to a plan that are not covered by the article.\n>\n> I need/want to know EXPLAIN better.\n>\n> Can you kindly advise me a good reading about advanced EXPLAIN?\n>\n>\nThere's not much out there. This document was written after reading this\nlist, viewing some talks (you may find a lot of them on youtube), and\nreading the code.\n\nI intend to update this document, since I learned quite more since 2012.\nThough I didn't find the time yet :-/\n\nAnyway, thanks.\n\n\n-- \nGuillaume.\n\nHi,2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:Hi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. Thanks. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?There's not much out there. This document was written after reading this list, viewing some talks (you may find a lot of them on youtube), and reading the code.I intend to update this document, since I learned quite more since 2012. Though I didn't find the time yet :-/Anyway, thanks.-- Guillaume.", "msg_date": "Fri, 8 Dec 2017 14:20:16 +0100", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Learning EXPLAIN" }, { "msg_contents": "Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <[email protected]>\nescreveu:\n\n> Hi,\n>\n> 2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:\n>\n>> Hi experts!\n>>\n>> I read this nice article about Understanding EXPLAIN [1] weeks ago that\n>> opened my mind about the tool, but it seems no enough to explain a lot of\n>> plans that I see in this list.\n>>\n>\n> Thanks.\n>\n> I often read responses to a plan that are not covered by the article.\n>>\n>> I need/want to know EXPLAIN better.\n>>\n>> Can you kindly advise me a good reading about advanced EXPLAIN?\n>>\n>>\n> There's not much out there. 
This document was written after reading this\n> list, viewing some talks (you may find a lot of them on youtube), and\n> reading the code.\n>\n> I intend to update this document, since I learned quite more since 2012.\n> Though I didn't find the time yet :-/\n>\n> Anyway, thanks.\n>\n>\nHello all\n\nI would like to make clear that there are two \"Flavio Henrique\" on the\nlists, me beeing one of them, I'd like to say that I'm not the OP.\nA bit off-topic anyway, thanks for understanding.\n\nFlavio Gurgel\n\nEm sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <[email protected]> escreveu:Hi,2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:Hi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. Thanks. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?There's not much out there. This document was written after reading this list, viewing some talks (you may find a lot of them on youtube), and reading the code.I intend to update this document, since I learned quite more since 2012. Though I didn't find the time yet :-/Anyway, thanks.Hello all I would like to make clear that there are two \"Flavio Henrique\" on the lists, me beeing one of them, I'd like to say that I'm not the OP.A bit off-topic anyway, thanks for understanding.Flavio Gurgel", "msg_date": "Fri, 08 Dec 2017 13:32:06 +0000", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Learning EXPLAIN" }, { "msg_contents": "Dude,\n\nYou can rest assured that at least the Brazilians members will always know\nbased on your last name you are not the same :-).\n\nWhat's the point of explaining that anyways? Got curious.\n\nAs to what pertains to the topic:\n\nThis is another simple yet effective doc:\n\nhttps://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdf\n\nExecution plans are tricky and reading them accurately to make good\ndecisions require a lot of experience and awareness of the situation. It\ndoes not only require that you know how to read the tool itself but also\nknow how the DB and schemas have been designed, if stats are up to date,\nhow tables are populated, frequency and type of queries, adequate indexing\nin place, the hardware it sits on, etc.\n\nIt's a mix of science, broaden knowledge, perspicacity, and why not\nsay, it's an art.\n\nHave a great weekend.\n\n___________________________________________________________________________\nGustavo Velasquez\n+1 (256) 653-9725\n\n\nOn Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <\[email protected]> wrote:\n\n>\n> Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <\n> [email protected]> escreveu:\n>\n>> Hi,\n>>\n>> 2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:\n>>\n>>> Hi experts!\n>>>\n>>> I read this nice article about Understanding EXPLAIN [1] weeks ago that\n>>> opened my mind about the tool, but it seems no enough to explain a lot of\n>>> plans that I see in this list.\n>>>\n>>\n>> Thanks.\n>>\n>> I often read responses to a plan that are not covered by the article.\n>>>\n>>> I need/want to know EXPLAIN better.\n>>>\n>>> Can you kindly advise me a good reading about advanced EXPLAIN?\n>>>\n>>>\n>> There's not much out there. 
This document was written after reading this\n>> list, viewing some talks (you may find a lot of them on youtube), and\n>> reading the code.\n>>\n>> I intend to update this document, since I learned quite more since 2012.\n>> Though I didn't find the time yet :-/\n>>\n>> Anyway, thanks.\n>>\n>>\n> Hello all\n>\n> I would like to make clear that there are two \"Flavio Henrique\" on the\n> lists, me beeing one of them, I'd like to say that I'm not the OP.\n> A bit off-topic anyway, thanks for understanding.\n>\n> Flavio Gurgel\n>\n>\n\nDude, You can rest assured that at least the Brazilians members will always know based on your last name you are not the same :-).What's the point of explaining that anyways? Got curious.As to what pertains to the topic:This is another simple yet effective doc:https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdfExecution plans are tricky and reading them accurately to make good decisions require a lot of experience and awareness of the situation. It does not only require that you know how to read the tool itself but also know how the DB and schemas have been designed, if stats are up to date, how tables are populated, frequency and type of queries, adequate indexing in place, the hardware it sits on, etc.It's a mix of science, broaden knowledge, perspicacity, and why not say, it's an art.Have a great weekend.___________________________________________________________________________Gustavo Velasquez+1 (256) 653-9725\nOn Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <[email protected]> wrote:Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <[email protected]> escreveu:Hi,2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:Hi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. Thanks. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?There's not much out there. This document was written after reading this list, viewing some talks (you may find a lot of them on youtube), and reading the code.I intend to update this document, since I learned quite more since 2012. Though I didn't find the time yet :-/Anyway, thanks.Hello all I would like to make clear that there are two \"Flavio Henrique\" on the lists, me beeing one of them, I'd like to say that I'm not the OP.A bit off-topic anyway, thanks for understanding.Flavio Gurgel", "msg_date": "Fri, 8 Dec 2017 10:44:50 -0600", "msg_from": "Gustavo Velasquez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Learning EXPLAIN" }, { "msg_contents": "What about the many-part explanation posted on the blog that accompanies\nexplain.depesz.com. Here is the first installment. I seem to remember that\nthere are 5 or 6 installments.\n\nhttps://www.depesz.com/2013/04/16/explaining-the-unexplainable/\n\nOn Fri, Dec 8, 2017 at 8:44 AM, Gustavo Velasquez <[email protected]>\nwrote:\n\n> Dude,\n>\n> You can rest assured that at least the Brazilians members will always know\n> based on your last name you are not the same :-).\n>\n> What's the point of explaining that anyways? 
Got curious.\n>\n> As to what pertains to the topic:\n>\n> This is another simple yet effective doc:\n>\n> https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdf\n>\n> Execution plans are tricky and reading them accurately to make good\n> decisions require a lot of experience and awareness of the situation. It\n> does not only require that you know how to read the tool itself but also\n> know how the DB and schemas have been designed, if stats are up to date,\n> how tables are populated, frequency and type of queries, adequate indexing\n> in place, the hardware it sits on, etc.\n>\n> It's a mix of science, broaden knowledge, perspicacity, and why not\n> say, it's an art.\n>\n> Have a great weekend.\n>\n> ____________________________________________________________\n> _______________\n> Gustavo Velasquez\n> +1 (256) 653-9725 <(256)%20653-9725>\n>\n>\n> On Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <\n> [email protected]> wrote:\n>\n>>\n>> Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <\n>> [email protected]> escreveu:\n>>\n>>> Hi,\n>>>\n>>> 2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:\n>>>\n>>>> Hi experts!\n>>>>\n>>>> I read this nice article about Understanding EXPLAIN [1] weeks ago that\n>>>> opened my mind about the tool, but it seems no enough to explain a lot of\n>>>> plans that I see in this list.\n>>>>\n>>>\n>>> Thanks.\n>>>\n>>> I often read responses to a plan that are not covered by the article.\n>>>>\n>>>> I need/want to know EXPLAIN better.\n>>>>\n>>>> Can you kindly advise me a good reading about advanced EXPLAIN?\n>>>>\n>>>>\n>>> There's not much out there. This document was written after reading this\n>>> list, viewing some talks (you may find a lot of them on youtube), and\n>>> reading the code.\n>>>\n>>> I intend to update this document, since I learned quite more since 2012.\n>>> Though I didn't find the time yet :-/\n>>>\n>>> Anyway, thanks.\n>>>\n>>>\n>> Hello all\n>>\n>> I would like to make clear that there are two \"Flavio Henrique\" on the\n>> lists, me beeing one of them, I'd like to say that I'm not the OP.\n>> A bit off-topic anyway, thanks for understanding.\n>>\n>> Flavio Gurgel\n>>\n>>\n>\n\nWhat about the many-part explanation posted on the blog that accompanies explain.depesz.com.  Here is the first installment. I seem to remember that there are 5 or 6 installments.https://www.depesz.com/2013/04/16/explaining-the-unexplainable/On Fri, Dec 8, 2017 at 8:44 AM, Gustavo Velasquez <[email protected]> wrote:Dude, You can rest assured that at least the Brazilians members will always know based on your last name you are not the same :-).What's the point of explaining that anyways? Got curious.As to what pertains to the topic:This is another simple yet effective doc:https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdfExecution plans are tricky and reading them accurately to make good decisions require a lot of experience and awareness of the situation. 
It does not only require that you know how to read the tool itself but also know how the DB and schemas have been designed, if stats are up to date, how tables are populated, frequency and type of queries, adequate indexing in place, the hardware it sits on, etc.It's a mix of science, broaden knowledge, perspicacity, and why not say, it's an art.Have a great weekend.___________________________________________________________________________Gustavo Velasquez+1 (256) 653-9725\nOn Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <[email protected]> wrote:Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <[email protected]> escreveu:Hi,2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:Hi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. Thanks. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?There's not much out there. This document was written after reading this list, viewing some talks (you may find a lot of them on youtube), and reading the code.I intend to update this document, since I learned quite more since 2012. Though I didn't find the time yet :-/Anyway, thanks.Hello all I would like to make clear that there are two \"Flavio Henrique\" on the lists, me beeing one of them, I'd like to say that I'm not the OP.A bit off-topic anyway, thanks for understanding.Flavio Gurgel", "msg_date": "Fri, 8 Dec 2017 20:47:13 -0800", "msg_from": "Sam Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Learning EXPLAIN" }, { "msg_contents": "Ah, I see now that the article you linked to in the OP is written by the\nsame author and is maybe the very same content. If so, that sure seems\npretty comprehensive to me, though I've also been reading this list, off\nand on, for many years, which has probably filled a lot of the gaps left by\nthe original blog posts.\n\nOn Fri, Dec 8, 2017 at 8:47 PM, Sam Gendler <[email protected]>\nwrote:\n\n> What about the many-part explanation posted on the blog that accompanies\n> explain.depesz.com. Here is the first installment. I seem to remember\n> that there are 5 or 6 installments.\n>\n> https://www.depesz.com/2013/04/16/explaining-the-unexplainable/\n>\n> On Fri, Dec 8, 2017 at 8:44 AM, Gustavo Velasquez <[email protected]\n> > wrote:\n>\n>> Dude,\n>>\n>> You can rest assured that at least the Brazilians members will always\n>> know based on your last name you are not the same :-).\n>>\n>> What's the point of explaining that anyways? Got curious.\n>>\n>> As to what pertains to the topic:\n>>\n>> This is another simple yet effective doc:\n>>\n>> https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdf\n>>\n>> Execution plans are tricky and reading them accurately to make good\n>> decisions require a lot of experience and awareness of the situation. 
It\n>> does not only require that you know how to read the tool itself but also\n>> know how the DB and schemas have been designed, if stats are up to date,\n>> how tables are populated, frequency and type of queries, adequate indexing\n>> in place, the hardware it sits on, etc.\n>>\n>> It's a mix of science, broaden knowledge, perspicacity, and why not\n>> say, it's an art.\n>>\n>> Have a great weekend.\n>>\n>> ____________________________________________________________\n>> _______________\n>> Gustavo Velasquez\n>> +1 (256) 653-9725 <(256)%20653-9725>\n>>\n>>\n>> On Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <\n>> [email protected]> wrote:\n>>\n>>>\n>>> Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <\n>>> [email protected]> escreveu:\n>>>\n>>>> Hi,\n>>>>\n>>>> 2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:\n>>>>\n>>>>> Hi experts!\n>>>>>\n>>>>> I read this nice article about Understanding EXPLAIN [1] weeks ago\n>>>>> that opened my mind about the tool, but it seems no enough to explain a lot\n>>>>> of plans that I see in this list.\n>>>>>\n>>>>\n>>>> Thanks.\n>>>>\n>>>> I often read responses to a plan that are not covered by the article.\n>>>>>\n>>>>> I need/want to know EXPLAIN better.\n>>>>>\n>>>>> Can you kindly advise me a good reading about advanced EXPLAIN?\n>>>>>\n>>>>>\n>>>> There's not much out there. This document was written after reading\n>>>> this list, viewing some talks (you may find a lot of them on youtube), and\n>>>> reading the code.\n>>>>\n>>>> I intend to update this document, since I learned quite more since\n>>>> 2012. Though I didn't find the time yet :-/\n>>>>\n>>>> Anyway, thanks.\n>>>>\n>>>>\n>>> Hello all\n>>>\n>>> I would like to make clear that there are two \"Flavio Henrique\" on the\n>>> lists, me beeing one of them, I'd like to say that I'm not the OP.\n>>> A bit off-topic anyway, thanks for understanding.\n>>>\n>>> Flavio Gurgel\n>>>\n>>>\n>>\n>\n\nAh, I see now that the article you linked to in the OP is written by the same author and is maybe the very same content. If so, that sure seems pretty comprehensive to me, though I've also been reading this list, off and on, for many years, which has probably filled a lot of the gaps left by the original blog posts.On Fri, Dec 8, 2017 at 8:47 PM, Sam Gendler <[email protected]> wrote:What about the many-part explanation posted on the blog that accompanies explain.depesz.com.  Here is the first installment. I seem to remember that there are 5 or 6 installments.https://www.depesz.com/2013/04/16/explaining-the-unexplainable/On Fri, Dec 8, 2017 at 8:44 AM, Gustavo Velasquez <[email protected]> wrote:Dude, You can rest assured that at least the Brazilians members will always know based on your last name you are not the same :-).What's the point of explaining that anyways? Got curious.As to what pertains to the topic:This is another simple yet effective doc:https://wiki.postgresql.org/images/4/45/Explaining_EXPLAIN.pdfExecution plans are tricky and reading them accurately to make good decisions require a lot of experience and awareness of the situation. 
It does not only require that you know how to read the tool itself but also know how the DB and schemas have been designed, if stats are up to date, how tables are populated, frequency and type of queries, adequate indexing in place, the hardware it sits on, etc.It's a mix of science, broaden knowledge, perspicacity, and why not say, it's an art.Have a great weekend.___________________________________________________________________________Gustavo Velasquez+1 (256) 653-9725\nOn Fri, Dec 8, 2017 at 7:32 AM, Flavio Henrique Araque Gurgel <[email protected]> wrote:Em sex, 8 de dez de 2017 às 14:20, Guillaume Lelarge <[email protected]> escreveu:Hi,2017-12-08 2:12 GMT+01:00 Flávio Henrique <[email protected]>:Hi experts!I read this nice article about Understanding EXPLAIN [1] weeks ago that opened my mind about the tool, but it seems no enough to explain a lot of plans that I see in this list. Thanks. I often read responses to a plan that are not covered by the article. I need/want to know EXPLAIN better.Can you kindly advise me a good reading about advanced EXPLAIN?There's not much out there. This document was written after reading this list, viewing some talks (you may find a lot of them on youtube), and reading the code.I intend to update this document, since I learned quite more since 2012. Though I didn't find the time yet :-/Anyway, thanks.Hello all I would like to make clear that there are two \"Flavio Henrique\" on the lists, me beeing one of them, I'd like to say that I'm not the OP.A bit off-topic anyway, thanks for understanding.Flavio Gurgel", "msg_date": "Fri, 8 Dec 2017 20:52:36 -0800", "msg_from": "Sam Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Learning EXPLAIN" } ]
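As a small aside to the reading recommendations above: the more detailed form of the command that those resources spend most of their time explaining is EXPLAIN with the ANALYZE and BUFFERS options, which actually executes the query and reports real row counts, timings, and buffer usage. A minimal illustration follows; the table and column names are hypothetical and only there to show the syntax.

-- ANALYZE runs the query and adds actual rows/timing to each plan node;
-- BUFFERS adds shared-buffer hit/read counters. Table and columns are
-- hypothetical, for illustration only.
EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, count(*)
FROM orders
WHERE created_at >= now() - interval '7 days'
GROUP BY customer_id;

Comparing the estimated row counts against the actual ones in that output is usually the first step the guides above recommend when a plan looks wrong.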
[ { "msg_contents": "Hi,\n\nI have a set of tables with fairly large number of columns, mostly int with\na few bigints and short char/varchar columns. I¹ve noticed that Postgres is\npretty slow at inserting data in such a table. I tried to tune every\npossible setting: using unlogged tables, increased shared_buffers, etc; even\nplaced the db cluster on ramfs and turned fsync off. The results are pretty\nmuch the same with the exception of using unlogged tables that improves\nperformance just a little bit.\n\nI have made a minimally reproducible test case consisting of a table with\n848 columns, inserting partial dataset of 100,000 rows with 240 columns. On\nmy dev VM the COPY FROM operation takes just shy of 3 seconds to complete,\nwhich is entirely unexpected for such a small dataset.\n\nHere¹s a tarball with test schema and data:\nhttp://nohuhu.org/copy_perf.tar.bz2; it¹s 338k compressed but expands to\n~50mb. Here¹s the result of profiling session with perf:\nhttps://pastebin.com/pjv7JqxD\n\n\n-- \nRegards,\nAlex.\n\n\n\n\nHi,I have a set of tables with fairly large number of columns, mostly int with a few bigints and short char/varchar columns. I’ve noticed that Postgres is pretty slow at inserting data in such a table. I tried to tune every possible setting: using unlogged tables, increased shared_buffers, etc; even placed the db cluster on ramfs and turned fsync off. The results are pretty much the same with the exception of using unlogged tables that improves performance just a little bit.I have made a minimally reproducible test case consisting of a table with 848 columns, inserting partial dataset of 100,000 rows with 240 columns. On my dev VM the COPY FROM operation takes just shy of 3 seconds to complete, which is entirely unexpected for such a small dataset. Here’s a tarball with test schema and data: http://nohuhu.org/copy_perf.tar.bz2; it’s 338k compressed but expands to ~50mb. Here’s the result of profiling session with perf: https://pastebin.com/pjv7JqxD-- Regards,Alex.", "msg_date": "Thu, 07 Dec 2017 20:21:45 -0800", "msg_from": "Alex Tokarev <[email protected]>", "msg_from_op": true, "msg_subject": "Table with large number of int columns, very slow COPY FROM" }, { "msg_contents": "\n\nOn 08.12.2017 05:21, Alex Tokarev wrote:\n> I have made a minimally reproducible test case consisting of a table \n> with 848 columns\n\nSuch a high number of columns is maybe a sign of a wrong table / \ndatabase design, why do you have such a lot of columns? How many indexes \ndo you have?\n\nRegards, Andreas\n\n", "msg_date": "Fri, 8 Dec 2017 08:20:40 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table with large number of int columns, very slow COPY FROM" }, { "msg_contents": "Hi,\n\nOn 2017-12-07 20:21:45 -0800, Alex Tokarev wrote:\n> I have a set of tables with fairly large number of columns, mostly int with\n> a few bigints and short char/varchar columns. I�ve noticed that Postgres is\n> pretty slow at inserting data in such a table. I tried to tune every\n> possible setting: using unlogged tables, increased shared_buffers, etc; even\n> placed the db cluster on ramfs and turned fsync off. The results are pretty\n> much the same with the exception of using unlogged tables that improves\n> performance just a little bit.\n\n> I have made a minimally reproducible test case consisting of a table with\n> 848 columns, inserting partial dataset of 100,000 rows with 240 columns. 
On\n> my dev VM the COPY FROM operation takes just shy of 3 seconds to complete,\n> which is entirely unexpected for such a small dataset.\n\nI don't find this to be this absurdly slow. On my laptop loading with a\ndevelopment checkout this takes 1223.950 ms. This is 20mio fields\nparsed/sec, rows with 69mio fields/sec inserted. Removing the TRUNCATE\nand running the COPYs concurrently scales well to a few clients, and\nonly stops because my laptop's SSD stops being able to keep up.\n\n\nThat said, I do think there's a few places that could stand some\nimprovement. Locally the profile shows up as:\n+ 15.38% postgres libc-2.25.so [.] __GI_____strtoll_l_internal\n+ 11.79% postgres postgres [.] heap_fill_tuple\n+ 8.00% postgres postgres [.] CopyFrom\n+ 7.40% postgres postgres [.] CopyReadLine\n+ 6.79% postgres postgres [.] ExecConstraints\n+ 6.68% postgres postgres [.] NextCopyFromRawFields\n+ 6.36% postgres postgres [.] heap_compute_data_size\n+ 6.02% postgres postgres [.] pg_atoi\n\nthe strtoll is libc functionality triggered by pg_atoi(), something I've\nseen show up in numerous profiles. I think it's probably time to have\nour own optimized version of it rather than relying on libcs.\n\nThat heap_fill_tuple(), which basically builds a tuple from the parsed\ndatums, takes time somewhat proportional to the number of columns in the\ntable seems hard to avoid, especially because this isn't something we\nwant to optimize for with the price of making more common workloads with\nfewer columns slower. But there seems quite some micro-optimization\npotential.\n\nThat ExecConstraints() shows up seems unsurprising, it has to walk\nthrough all the table's columns checking for constraints. We could\neasily optimize this so we have a separate datastructure listing\nconstraints, but that'd be slower in the very common case of more\nreasonable numbers of columns.\n\nThe copy implementation deserves some optimization too...\n\n> Here�s a tarball with test schema and data:\n> http://nohuhu.org/copy_perf.tar.bz2; it�s 338k compressed but expands to\n> ~50mb. Here�s the result of profiling session with perf:\n> https://pastebin.com/pjv7JqxD\n\nThanks!\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 8 Dec 2017 10:17:34 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table with large number of int columns, very slow COPY FROM" }, { "msg_contents": "Hi,\n\n\nOn 2017-12-08 10:17:34 -0800, Andres Freund wrote:\n> the strtoll is libc functionality triggered by pg_atoi(), something I've\n> seen show up in numerous profiles. I think it's probably time to have\n> our own optimized version of it rather than relying on libcs.\n\nAttached is a hand-rolled version. After quickly hacking up one from\nscratch, I noticed we already kind of have one for int64 (scanint8), so\nI changed the structure of this one to be relatively similar.\n\nIt's currently using the overflow logic from [1], but that's not\nfundamentally required, we could rely on fwrapv for this one too.\n\nThis one improves performance of the submitted workload from 1223.950ms\nto 1020.640ms (best of three). The profile's shape changes quite\nnoticeably:\n\nmaster:\n+ 15.38% postgres libc-2.25.so [.] __GI_____strtoll_l_internal\n+ 11.79% postgres postgres [.] heap_fill_tuple\n+ 8.00% postgres postgres [.] CopyFrom\n+ 7.40% postgres postgres [.] CopyReadLine\n+ 6.79% postgres postgres [.] ExecConstraints\n+ 6.68% postgres postgres [.] NextCopyFromRawFields\n+ 6.36% postgres postgres [.] 
heap_compute_data_size\n+ 6.02% postgres postgres [.] pg_atoi\npatch:\n+ 13.70% postgres postgres [.] heap_fill_tuple\n+ 10.46% postgres postgres [.] CopyFrom\n+ 9.31% postgres postgres [.] pg_strto32\n+ 8.39% postgres postgres [.] CopyReadLine\n+ 7.88% postgres postgres [.] ExecConstraints\n+ 7.63% postgres postgres [.] InputFunctionCall\n+ 7.41% postgres postgres [.] heap_compute_data_size\n+ 7.21% postgres postgres [.] pg_verify_mbstr\n+ 5.49% postgres postgres [.] NextCopyFromRawFields\n\n\nThis probably isn't going to resolve Alex's performance concerns\nmeaningfully, but seems quite worthwhile to do anyway.\n\nWe probably should have int8/16/64 version coded just as use the 32bit\nversion, but I decided to leave that out for now. Primarily interested\nin comments. Wonder a bit whether it's worth providing an 'errorOk'\nmode like scanint8 does, but surveying its callers suggests we should\nrather change them to not need it...\n\nGreetings,\n\nAndres Freund\n\n[1] http://archives.postgresql.org/message-id/20171030112751.mukkriz2rur2qkxc%40alap3.anarazel.de", "msg_date": "Fri, 8 Dec 2017 13:44:37 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" }, { "msg_contents": "Hi,\n\nOn 2017-12-08 13:44:37 -0800, Andres Freund wrote:\n> On 2017-12-08 10:17:34 -0800, Andres Freund wrote:\n> > the strtoll is libc functionality triggered by pg_atoi(), something I've\n> > seen show up in numerous profiles. I think it's probably time to have\n> > our own optimized version of it rather than relying on libcs.\n> \n> Attached is a hand-rolled version. After quickly hacking up one from\n> scratch, I noticed we already kind of have one for int64 (scanint8), so\n> I changed the structure of this one to be relatively similar.\n> \n> It's currently using the overflow logic from [1], but that's not\n> fundamentally required, we could rely on fwrapv for this one too.\n> \n> This one improves performance of the submitted workload from 1223.950ms\n> to 1020.640ms (best of three). The profile's shape changes quite\n> noticeably:\n\nFWIW, here's a rebased version of this patch. Could probably be polished\nfurther. One might argue that we should do a bit more wide ranging\nchanges, to convert scanint8 and pg_atoi to be also unified. But it\nmight also just be worthwhile to apply without those, given the\nperformance benefit.\n\nAnybody have an opinion on that?\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 7 Jul 2018 13:01:58 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" }, { "msg_contents": "On Sat, Jul 7, 2018 at 4:01 PM, Andres Freund <[email protected]> wrote:\n> FWIW, here's a rebased version of this patch. Could probably be polished\n> further. One might argue that we should do a bit more wide ranging\n> changes, to convert scanint8 and pg_atoi to be also unified. But it\n> might also just be worthwhile to apply without those, given the\n> performance benefit.\n\nWouldn't hurt to do that one too, but might be OK to just do this\nmuch. Questions:\n\n1. Why the error message changes? If there's a good reason, it should\nbe done as a separate commit, or at least well-documented in the\ncommit message.\n\n2. Does the likely/unlikely stuff make a noticeable difference?\n\n3. 
If this is a drop-in replacement for pg_atoi, why not just recode\npg_atoi this way -- or have it call this -- and leave the callers\nunchanged?\n\n4. Are we sure this is faster on all platforms, or could it work out\nthe other way on, say, BSD?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Wed, 18 Jul 2018 14:34:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" }, { "msg_contents": "Hi,\n\nOn 2018-07-18 14:34:34 -0400, Robert Haas wrote:\n> On Sat, Jul 7, 2018 at 4:01 PM, Andres Freund <[email protected]> wrote:\n> > FWIW, here's a rebased version of this patch. Could probably be polished\n> > further. One might argue that we should do a bit more wide ranging\n> > changes, to convert scanint8 and pg_atoi to be also unified. But it\n> > might also just be worthwhile to apply without those, given the\n> > performance benefit.\n> \n> Wouldn't hurt to do that one too, but might be OK to just do this\n> much. Questions:\n> \n> 1. Why the error message changes? If there's a good reason, it should\n> be done as a separate commit, or at least well-documented in the\n> commit message.\n\nBecause there's a lot of \"invalid input syntax for type %s: \\\"%s\\\"\",\nerror messages, and we shouldn't force translators to have separate\nversion that inlines the first %s. But you're right, it'd be worthwhile\nto point that out in the commit message.\n\n\n> 2. Does the likely/unlikely stuff make a noticeable difference?\n\nYes. It's also largely a copy from existing code (scanint8), so I don't\nreally want to differ here.\n\n\n> 3. If this is a drop-in replacement for pg_atoi, why not just recode\n> pg_atoi this way -- or have it call this -- and leave the callers\n> unchanged?\n\nBecause pg_atoi supports a variable 'terminator'. Supporting that would\ncreate a bit slower code, without being particularly useful. I think\nthere's only a single in-core caller left after the patch\n(int2vectorin). There's a fair argument that that should just be\nopen-coded to handle the weird space parsing, but given there's probably\nexternal pg_atoi() callers, I'm not sure it's worth doing so?\n\nI don't think it's a good idea to continue to have pg_atoi as a wrapper\n- it takes a size argument, which makes efficient code hard.\n\n\n> 4. Are we sure this is faster on all platforms, or could it work out\n> the other way on, say, BSD?\n\nI'd be *VERY* surprised if any would be faster. It's not easy to write a\nfaster implmentation, than what I've proposed, and especially not so if\nyou use strtol() as the API (variable bases, a bit of locale support).\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Thu, 19 Jul 2018 13:32:12 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" }, { "msg_contents": "On Thu, Jul 19, 2018 at 4:32 PM, Andres Freund <[email protected]> wrote:\n>> 1. Why the error message changes? If there's a good reason, it should\n>> be done as a separate commit, or at least well-documented in the\n>> commit message.\n>\n> Because there's a lot of \"invalid input syntax for type %s: \\\"%s\\\"\",\n> error messages, and we shouldn't force translators to have separate\n> version that inlines the first %s. 
But you're right, it'd be worthwhile\n> to point that out in the commit message.\n\nIt just seems weird that they're bundled together in one commit like this.\n\n>> 2. Does the likely/unlikely stuff make a noticeable difference?\n>\n> Yes. It's also largely a copy from existing code (scanint8), so I don't\n> really want to differ here.\n\nOK.\n\n>> 3. If this is a drop-in replacement for pg_atoi, why not just recode\n>> pg_atoi this way -- or have it call this -- and leave the callers\n>> unchanged?\n>\n> Because pg_atoi supports a variable 'terminator'.\n\nOK.\n\n>> 4. Are we sure this is faster on all platforms, or could it work out\n>> the other way on, say, BSD?\n>\n> I'd be *VERY* surprised if any would be faster. It's not easy to write a\n> faster implmentation, than what I've proposed, and especially not so if\n> you use strtol() as the API (variable bases, a bit of locale support).\n\nOK.\n\nNothing else from me...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Fri, 20 Jul 2018 08:27:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" }, { "msg_contents": "Hi,\n\nOn 2018-07-20 08:27:34 -0400, Robert Haas wrote:\n> On Thu, Jul 19, 2018 at 4:32 PM, Andres Freund <[email protected]> wrote:\n> >> 1. Why the error message changes? If there's a good reason, it should\n> >> be done as a separate commit, or at least well-documented in the\n> >> commit message.\n> >\n> > Because there's a lot of \"invalid input syntax for type %s: \\\"%s\\\"\",\n> > error messages, and we shouldn't force translators to have separate\n> > version that inlines the first %s. But you're right, it'd be worthwhile\n> > to point that out in the commit message.\n> \n> It just seems weird that they're bundled together in one commit like this.\n\nI'll push it separately.\n\n> Nothing else from me...\n\nThanks for looking!\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 20 Jul 2018 09:45:10 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster str to int conversion (was Table with large number of int\n columns, very slow COPY FROM)" } ]
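For readers who want to see the general shape of the hand-rolled base-10 parser discussed in this thread, here is a rough, self-contained sketch. It is not the actual PostgreSQL function: the name parse_int32, the bool return convention, and the use of GCC/Clang overflow builtins are assumptions made here for illustration; the real patch reports errors through PostgreSQL's error machinery and uses its own overflow helpers.

#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of a hand-rolled base-10 int32 parser (illustrative only).
 * Accumulates in the negative range so that "-2147483648" (INT32_MIN)
 * parses without overflowing, and uses GCC/Clang overflow builtins for
 * the per-digit range checks.
 */
static bool
parse_int32(const char *s, int32_t *result)
{
    const char *p = s;
    int32_t     acc = 0;
    bool        neg = false;

    while (isspace((unsigned char) *p))
        p++;
    if (*p == '-')
    {
        neg = true;
        p++;
    }
    else if (*p == '+')
        p++;

    if (!isdigit((unsigned char) *p))
        return false;                   /* no digits at all: syntax error */

    while (isdigit((unsigned char) *p))
    {
        int32_t digit = *p++ - '0';

        /* acc = acc * 10 - digit, bailing out on int32 overflow */
        if (__builtin_mul_overflow(acc, 10, &acc) ||
            __builtin_sub_overflow(acc, digit, &acc))
            return false;               /* out of range for int32 */
    }

    while (isspace((unsigned char) *p))
        p++;
    if (*p != '\0')
        return false;                   /* trailing garbage */

    if (neg)
        *result = acc;
    else
    {
        if (acc == INT32_MIN)
            return false;               /* "2147483648" does not fit */
        *result = -acc;
    }
    return true;
}

Usage example: parse_int32("  42", &v) returns true and sets v to 42; parse_int32("9999999999", &v) and parse_int32("12x", &v) both return false. Avoiding strtol's locale handling, variable base support, and errno protocol is where most of the speedup in the quoted profiles comes from.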
[ { "msg_contents": "Hi All\n\nCan anybody tell me if there is any performance concern around the use of \nPrepared Transactions in Postgres. I need to decide whether to enable an \nexternal transaction manager in our application, but I'm concerned about \nthe performance impact this could have.\n\nRegards\nRiaan Stander\n\n\n\n\n\n\n\nHi\nAll\nCan\nanybody tell me if there is any performance concern around the use of\nPrepared Transactions in Postgres. I need to decide whether to enable an\nexternal transaction manager in our application, but I'm concerned about\nthe performance impact this could have.\nRegards\nRiaan Stander", "msg_date": "Mon, 11 Dec 2017 02:39:40 +0200", "msg_from": "Riaan Stander <[email protected]>", "msg_from_op": true, "msg_subject": "Prepared Transactions" }, { "msg_contents": "Hello!\n\nYou need prepared transactions only if you need two-phase commit to provide distributed atomic transaction on multiple different databases.\nIf you not need distributed transactions - you not needed prepared transactions at all.\nBut if you need distributed transactions - here is no more choice regardless performance questions.\n\nAs say in documentation https://www.postgresql.org/docs/current/static/sql-prepare-transaction.html\n> Unless you're writing a transaction manager, you probably shouldn't be using PREPARE TRANSACTION.\n\nRegards, Sergei\n\n", "msg_date": "Mon, 11 Dec 2017 11:14:59 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared Transactions" }, { "msg_contents": "Hi Riaan,\n You benefit from greater fault tolerance performance. Recovering from\na crash/network outage is quicker/easier.\n On the downside you might see a reduction in transactions per second.\n\n It's worth benchmarking. To see if the impact to tps is acceptable to\nlive with.\n\nJeremy\n\nOn Mon, 2017-12-11 at 11:14 +0300, Sergei Kornilov wrote:\n> Hello!\n> \n> You need prepared transactions only if you need two-phase commit to\n> provide distributed atomic transaction on multiple different\n> databases.\n> If you not need distributed transactions - you not needed prepared\n> transactions at all.\n> But if you need distributed transactions - here is no more choice\n> regardless performance questions.\n> \n> As say in documentation https://www.postgresql.org/docs/current/stati\n> c/sql-prepare-transaction.html\n> > Unless you're writing a transaction manager, you probably shouldn't\n> > be using PREPARE TRANSACTION.\n> \n> Regards, Sergei\n> \n\n", "msg_date": "Mon, 11 Dec 2017 12:13:23 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Prepared Transactions" } ]
[ { "msg_contents": "In my postgresql 9.6 instance I have 1 production database. When I query\nthe size of all databases :\n\ncombit=> Select\npg_database.datname,pg_size_pretty(pg_database_size(pg_database.datname))\nas size from pg_database;\n datname | size -----------+---------\n template0 | 7265 kB\n combit | 285 GB\n postgres | 7959 kB\n template1 | 7983 kB\n repmgr | 8135 kB(5 rows)\n\nWhen I check what are the big tables in my database (includes indexes) :\n\ncombit=> SELECT nspname || '.' || relname AS \"relation\",\ncombit-> pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\"\ncombit-> FROM pg_class C\ncombit-> LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\ncombit-> WHERE nspname NOT IN ('pg_catalog', 'information_schema')\ncombit-> AND C.relkind <> 'i'\ncombit-> AND nspname !~ '^pg_toast'\ncombit-> ORDER BY pg_total_relation_size(C.oid) DESC\ncombit-> LIMIT 20;\n relation | total_size -----------------------------+------------\n rep.ps_rf_inst_prod | 48 GB\n rep.nap_inter_x5 | 46 GB\n rep.man_x5 | 16 GB\n rep.tc_fint_x5 | 9695 MB\n rep.nap_ip_debit_x5 | 7645 MB\n rep.ip__billing | 5458 MB\n rep.ps_rd | 3417 MB\n rep.nap_ip_discount | 3147 MB\n rep.custo_x5 | 2154 MB\n rep.ip_service_discou_x5 | 1836 MB\n rep.tc_sub_rate__x5 | 294 MB\n\nThe total sum is not more than 120G.\n\nWhen I check the fs directly :\n\n[/data/base] : du -sk * | sort -n7284 133227868 133237892\n18156 166694298713364 16400\n\n[/data/base] :\n\n16400 is the oid of the combit database. As you can see the size of combit\non the fs is about 298G.\n\nI checked for dead tuples in the biggest tables :\n\ncombit=>select relname,n_dead_tup,last_autoanalyze,last_analyze,last_autovacuum,last_vacuum\nfrom pg_stat_user_tables order by n_live_tup desc limit4;\n\n -[ RECORD 1 ]----+------------------------------\n relname | ps_rf_inst_prod\n n_dead_tup | 0\n last_autoanalyze | 2017-12-04 09:00:16.585295+02\n last_analyze | 2017-12-05 16:08:31.218621+02\n last_autovacuum |\n last_vacuum |\n -[ RECORD 2 ]----+------------------------------\n relname | man_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-05 06:02:07.189184+02\n last_analyze | 2017-12-05 16:12:58.130519+02\n last_autovacuum |\n last_vacuum |\n -[ RECORD 3 ]----+------------------------------\n relname | tc_fint_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-05 06:04:06.698422+02\n last_analyze |\n last_autovacuum |\n last_vacuum |\n -[ RECORD 4 ]----+------------------------------\n relname | nap_inter_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-04 08:54:16.764392+02\n last_analyze | 2017-12-05 16:10:23.411266+02\n last_autovacuum |\n last_vacuum |\n\nI run vacuum full on all 5 top tables 2 hours ago and it didnt free alot of\nspace...\n\nOn this database the only operations that happen are truncate , insert and\nselect. So how can it be that I had dead tuples on some of my tables ? If I\nonly run truncate,select,insert query tuples shouldnt be created..\n\nAnd the bigger question, Where are the missing 180G ?\n\nIn my postgresql 9.6 instance I have 1 production database. When I query the size of all databases :combit=> Select pg_database.datname,pg_size_pretty(pg_database_size(pg_database.datname)) as size from pg_database;\n datname | size \n-----------+---------\n template0 | 7265 kB\n combit | 285 GB\n postgres | 7959 kB\n template1 | 7983 kB\n repmgr | 8135 kB\n(5 rows)When I check what are the big tables in my database (includes indexes) :combit=> SELECT nspname || '.' 
|| relname AS \"relation\",\ncombit-> pg_size_pretty(pg_total_relation_size(C.oid)) AS \"total_size\"\ncombit-> FROM pg_class C\ncombit-> LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)\ncombit-> WHERE nspname NOT IN ('pg_catalog', 'information_schema')\ncombit-> AND C.relkind <> 'i'\ncombit-> AND nspname !~ '^pg_toast'\ncombit-> ORDER BY pg_total_relation_size(C.oid) DESC\ncombit-> LIMIT 20;\n relation | total_size \n-----------------------------+------------\n rep.ps_rf_inst_prod | 48 GB\n rep.nap_inter_x5 | 46 GB\n rep.man_x5 | 16 GB\n rep.tc_fint_x5 | 9695 MB\n rep.nap_ip_debit_x5 | 7645 MB\n rep.ip__billing | 5458 MB\n rep.ps_rd | 3417 MB\n rep.nap_ip_discount | 3147 MB\n rep.custo_x5 | 2154 MB\n rep.ip_service_discou_x5 | 1836 MB\n rep.tc_sub_rate__x5 | 294 MBThe total sum is not more than 120G.When I check the fs directly :[/data/base] : du -sk * | sort -n\n7284 13322\n7868 13323\n7892 1\n8156 166694\n298713364 16400[/data/base] :16400 is the oid of the combit database. As you can see the size of combit on the fs is about 298G.I checked for dead tuples in the biggest tables :combit=>select relname,n_dead_tup,last_autoanalyze,last_analyze,last_autovacuum,last_vacuum from pg_stat_user_tables order by n_live_tup desc limit4;\n\n -[ RECORD 1 ]----+------------------------------\n relname | ps_rf_inst_prod\n n_dead_tup | 0\n last_autoanalyze | 2017-12-04 09:00:16.585295+02\n last_analyze | 2017-12-05 16:08:31.218621+02\n last_autovacuum | \n last_vacuum | \n -[ RECORD 2 ]----+------------------------------\n relname | man_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-05 06:02:07.189184+02\n last_analyze | 2017-12-05 16:12:58.130519+02\n last_autovacuum | \n last_vacuum | \n -[ RECORD 3 ]----+------------------------------\n relname | tc_fint_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-05 06:04:06.698422+02\n last_analyze | \n last_autovacuum | \n last_vacuum | \n -[ RECORD 4 ]----+------------------------------\n relname | nap_inter_x5\n n_dead_tup | 0\n last_autoanalyze | 2017-12-04 08:54:16.764392+02\n last_analyze | 2017-12-05 16:10:23.411266+02\n last_autovacuum | \n last_vacuum | I run vacuum full on all 5 top tables 2 hours ago and it didnt free alot of space...On this database the only operations that happen are truncate , insert and select. So how can it be that I had dead tuples on some of my tables ? If I only run truncate,select,insert query tuples shouldnt be created..And the bigger question, Where are the missing 180G ?", "msg_date": "Tue, 12 Dec 2017 17:15:06 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL database size is not reasonable" }, { "msg_contents": "On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <\[email protected]> wrote:\n\n> ​A​\n> nd the bigger question, Where are the missing 180G ?\n>\n> ​In the toaster probably...\n\nhttps://www.postgresql.org/docs/current/static/storage-toast.html\n\nBasically large data values are store in another table different than both\nthe main table and indexes.\n\nDavid J.\n\nOn Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <[email protected]> wrote:​A​nd the bigger question, Where are the missing 180G ?\n​In the toaster probably...https://www.postgresql.org/docs/current/static/storage-toast.htmlBasically large data values are store in another table different than both the main table and indexes.David J.", "msg_date": "Tue, 12 Dec 2017 08:21:14 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL database size is not reasonable" }, { "msg_contents": "On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n​A​nd the bigger question, Where are the missing 180G ?\r\n\r\n​In the toaster probably...\r\n\r\n\r\n\r\nhttps://www.postgresql.org/docs/current/static/storage-toast.html\r\n\r\n\r\n\r\nBasically large data values are store in another table different than both the main table and indexes.\r\n\r\n\r\n\r\nDavid J.\r\n\r\n\r\nThe query also says C.relkind <> 'i' which means it’s excluding indexes. Also note that pg_catalog is excluded but LOB data would be stored in pg_catalog.pg_largeobject. That could account for some overlooked space as well.\r\n\r\nCraig\r\n\n\n\n\n\n\n\n\n\n \n\n\nOn Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <[email protected]> wrote:\n\n\n\n\n\n\n\n\r\n​A​nd the bigger question, Where are the missing 180G ?\n\n\n\n\n\n\n\n​In the toaster probably...\n\n\n\n\n \n\n\n\n\nhttps://www.postgresql.org/docs/current/static/storage-toast.html\n\n\n\n\n \n\n\n\n\nBasically large data values are store in another table different than both the main table and indexes.\n\n\n\n\n \n\n\n\n\nDavid J.\n\n \nThe query also says\r\nC.relkind <> 'i' which means it’s excluding indexes.  Also note that\r\npg_catalog is excluded but LOB data would be stored in\r\npg_catalog.pg_largeobject.  That could account for some overlooked space as well.\n \nCraig", "msg_date": "Tue, 12 Dec 2017 15:44:23 +0000", "msg_from": "Craig McIlwee <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PostgreSQL database size is not reasonable" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <\n> [email protected]> wrote:\n>> And the bigger question, Where are the missing 180G ?\n\n> ​In the toaster probably...\n\npg_total_relation_size should have counted the toast tables,\nas well as the indexes, if memory serves.\n\nWhat I'm wondering about is the system catalogs, which Mariel's\nquery explicitly excluded. 180G would be awful darn large for\nthose, but maybe there's a bloat problem somewhere.\n\nOtherwise, try to identify the largest individual files in the\ndatabase directory ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Dec 2017 10:49:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL database size is not reasonable" }, { "msg_contents": "The system catalogs located in the global directory but the global\ndirectory isnt so big(500K). As I mentioned, the base directory is huge and\nthe directory 16400 is the biggest inside. I checked some big files inside\nthe directory 16400 (which represents the commbit database) and for some\nthere *isnt an object that match* and for some there are. So, How can I\ncontinue ?\n\n\n2017-12-12 17:49 GMT+02:00 Tom Lane <[email protected]>:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <\n> > [email protected]> wrote:\n> >> And the bigger question, Where are the missing 180G ?\n>\n> > ​In the toaster probably...\n>\n> pg_total_relation_size should have counted the toast tables,\n> as well as the indexes, if memory serves.\n>\n> What I'm wondering about is the system catalogs, which Mariel's\n> query explicitly excluded. 
180G would be awful darn large for\n> those, but maybe there's a bloat problem somewhere.\n>\n> Otherwise, try to identify the largest individual files in the\n> database directory ...\n>\n> regards, tom lane\n>\n\nThe system catalogs located in the global directory but the global directory isnt so big(500K). As I mentioned, the base directory is huge and the directory 16400 is the biggest inside. I checked some big files inside the directory 16400 (which represents the commbit database) and for some there isnt an object that match and for some there are. So, How can I continue ?2017-12-12 17:49 GMT+02:00 Tom Lane <[email protected]>:\"David G. Johnston\" <[email protected]> writes:\n> On Tue, Dec 12, 2017 at 8:15 AM, Mariel Cherkassky <\n> [email protected]> wrote:\n>> And the bigger question, Where are the missing 180G ?\n\n> ​In the toaster probably...\n\npg_total_relation_size should have counted the toast tables,\nas well as the indexes, if memory serves.\n\nWhat I'm wondering about is the system catalogs, which Mariel's\nquery explicitly excluded.  180G would be awful darn large for\nthose, but maybe there's a bloat problem somewhere.\n\nOtherwise, try to identify the largest individual files in the\ndatabase directory ...\n\n                        regards, tom lane", "msg_date": "Tue, 12 Dec 2017 18:22:14 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL database size is not reasonable" }, { "msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> The system catalogs located in the global directory but the global\n> directory isnt so big(500K).\n\nYou're operating under false assumptions. Only catalogs marked\nrelisshared are in that directory, other ones are in the per-database\ndirectories.\n\nSomebody mentioned pg_largeobject upthread --- that would definitely\nbe a candidate to be big, if you're using large objects at all.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 12 Dec 2017 12:59:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL database size is not reasonable" } ]
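Following the suggestions above, a sizing query without the relkind and schema exclusions of the original one — so that indexes, TOAST tables, and system catalogs such as pg_largeobject are all counted — gives a better picture of where the space actually is. The column aliases here are arbitrary.

-- Largest relations in the current database, with nothing filtered out.
SELECT c.oid::regclass AS relation,
       c.relkind,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind IN ('r', 'i', 't', 'm')   -- tables, indexes, TOAST, matviews
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;

-- Large objects live in pg_catalog.pg_largeobject and are easy to overlook.
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

-- A big file in base/<dboid>/ can be mapped back to a relation by its name
-- (12345 is an example file name; mapped catalogs have relfilenode = 0 and
-- will not show up this way).
SELECT oid::regclass FROM pg_class WHERE relfilenode = 12345;

Files in the database directory that match no relation at all can also be leftovers from a crash during a bulk operation, which is worth keeping in mind before assuming bloat.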
[ { "msg_contents": "Hi,\n\nMy CPU utilization is going to 100% in PostgreSQL because of one unknown process /x3303400001 is running from postgres user.\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n19885 postgres 20 0 192684 3916 1420 S 99.3 0.1 5689:04 x3303400001\n\nThe same file is automatically created in Postgres Cluster also. I am using Postgresql-9.3.\n\nKindly suggest how can I resolve this issue.\n\nRegar\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi,\n \nMy CPU utilization is going to 100% in PostgreSQL because of one unknown process\n/x3303400001 is running from postgres user.\n \nPID  \nUSER     \n PR  NI    VIRT    RES    SHR S %CPU %MEM   TIME+   COMMAND\n19885\n postgres 20   0 \n192684  \n3916  \n1420\n S 99.3 \n0.1  \n5689:04\n  x3303400001 \n\n \nThe same file is automatically created in Postgres Cluster also. I am using Postgresql-9.3.\n \nKindly suggest how can I resolve this issue.\n \nRegar\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Wed, 13 Dec 2017 10:12:20 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "CPU 100% usage caused by unknown postgres process.." }, { "msg_contents": "Dinesh Chandra 12108 wrote:\n> My CPU utilization is going to 100% in PostgreSQL because of one unknown process /x3303400001 is running from postgres user.\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 19885 postgres 20 0 192684 3916 1420 S 99.3 0.1 5689:04 x3303400001 \n> \n> The same file is automatically created in Postgres Cluster also. 
I am using Postgresql-9.3.\n> \n> Kindly suggest how can I resolve this issue.\n\nI don't know, but the same problem has been reported on Stackoverflow:\nhttps://stackoverflow.com/q/46617329/6464308\n\nIf your queries look similar, then you might indeed be the victim of an attack.\n\nFigure out where the function and the executable come from.\n\nIn case of doubt, disconnect the server from the network.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 13 Dec 2017 11:36:11 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 100% usage caused by unknown postgres process.." }, { "msg_contents": "On Wed, Dec 13, 2017 at 11:36:11AM +0100, Laurenz Albe wrote:\n> Dinesh Chandra 12108 wrote:\n> > My CPU utilization is going to 100% in PostgreSQL because of one unknown process /x3303400001 is running from postgres user.\n> > \n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> > 19885 postgres 20 0 192684 3916 1420 S 99.3 0.1 5689:04 x3303400001 \n> > \n> > The same file is automatically created in Postgres Cluster also. I am using Postgresql-9.3.\n> > \n> > Kindly suggest how can I resolve this issue.\n> \n> I don't know, but the same problem has been reported on Stackoverflow:\n> https://stackoverflow.com/q/46617329/6464308\n> \n> If your queries look similar, then you might indeed be the victim of an attack.\n> \n> Figure out where the function and the executable come from.\n> \n> In case of doubt, disconnect the server from the network.\n\nLooks suspicious; I would look at (and save) things like these:\n\nls -l /proc/19885/exe\nls -l /proc/19885/fd\nls -l /proc/19885/cwd\n\nsudo lsof -n -p 19885\nsudo netstat -anpe |grep 19885\n\nStacktrace with gcore/gdb is a good idea.\nSave a copy of your log/postgres logfiles and try to figure out where it came\nfrom. Since an attacker seems to control the postgres process, your data may\nhave been compromized (leaked or tampered with).\n\nJustin\n\n", "msg_date": "Wed, 13 Dec 2017 06:19:52 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 100% usage caused by unknown postgres process.." }, { "msg_contents": "\n\nOn 12/13/2017 01:19 PM, Justin Pryzby wrote:\n> On Wed, Dec 13, 2017 at 11:36:11AM +0100, Laurenz Albe wrote:\n>> Dinesh Chandra 12108 wrote:\n>>> My CPU utilization is going to 100% in PostgreSQL because of one unknown process /x3303400001 is running from postgres user.\n>>> \n>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>>> 19885 postgres 20 0 192684 3916 1420 S 99.3 0.1 5689:04 x3303400001 \n>>> \n>>> The same file is automatically created in Postgres Cluster also. I am using Postgresql-9.3.\n>>> \n>>> Kindly suggest how can I resolve this issue.\n>>\n>> I don't know, but the same problem has been reported on Stackoverflow:\n>> https://stackoverflow.com/q/46617329/6464308\n>>\n>> If your queries look similar, then you might indeed be the victim of an attack.\n>>\n>> Figure out where the function and the executable come from.\n>>\n>> In case of doubt, disconnect the server from the network.\n> \n> Looks suspicious; I would look at (and save) things like these:\n> \n> ls -l /proc/19885/exe\n> ls -l /proc/19885/fd\n> ls -l /proc/19885/cwd\n> \n> sudo lsof -n -p 19885\n> sudo netstat -anpe |grep 19885\n> \n> Stacktrace with gcore/gdb is a good idea.\n> Save a copy of your log/postgres logfiles and try to figure out where it came\n> from. 
Since an attacker seems to control the postgres process, your data may\n> have been compromized (leaked or tampered with).\n> \n\nAny details about the x3303400001 file (is it a shell script or some\nkind of binary)?\n\nFWIW the queries (listed in the stackoverflow post) are running under\npostgres, which I assume is superuser. The backend has full access to\nthe data directory, of course, so it may create extra files (using\nadminpack extension, for example).\n\nIf that's the case (and if it's indeed an attack), it either means the\nattacker likely already has access to all the data. So presumably\nx3303400001 is doing something else at the OS level.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 13 Dec 2017 16:22:47 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU 100% usage caused by unknown postgres process.." } ]
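Alongside the OS-level checks listed above, a couple of catalog queries can help establish from inside the database what the suspicious backend is doing and whether anything was left behind. These are illustrative only and no substitute for treating a host running unknown binaries as compromised.

-- What is each backend running, and where is it connected from?
SELECT pid, usename, client_addr, backend_start, state, query
FROM pg_stat_activity
ORDER BY backend_start;

-- Functions outside the system schemas; attacks that abuse a superuser
-- connection sometimes leave helper functions behind.
SELECT n.nspname, p.proname, pg_get_userbyid(p.proowner) AS owner, l.lanname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
JOIN pg_language l ON l.oid = p.prolang
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');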
[ { "msg_contents": "I want to do a JOIN against a subquery that is doing an aggregation. The\nquery itself is relatively straightforward, but has poor performance.\n\nHere it is:\nSELECT a.*, b.*\n FROM base AS a\n LEFT OUTER JOIN\n (SELECT other, COUNT(value), COUNT(DISTINCT value) FROM other GROUP\nBY other) AS b\n USING (other)\n WHERE id IN (4, 56, 102);\n\nIt's significantly faster, but more complicated (and repetitive), if I add\nthe following:\nWHERE other = ANY(ARRAY(SELECT DISTINCT other FROM base WHERE id IN (4, 56,\n102)))\n\nI tried adding the following:\nother IN (a.other)\nOr:\nother = a.other\nBut I get this error:\nERROR: invalid reference to FROM-clause entry for table \"a\"\n\nLINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...\n\n ^\n\nHINT: There is an entry for table \"a\", but it cannot be referenced from\nthis part of the query.\n\nIs there a way to do something like that simpler query so the subquery can\nget better performance by filtering only to what it needs instead of doing\nthe GROUP BY on the whole table?\n\nThanks,\nDave\n\nIn case it's helpful, here's the table definitions:\nCREATE TABLE base (id INTEGER PRIMARY KEY, value TEXT, other INTEGER);\nCREATE TABLE other (other INTEGER, value INTEGER);\n\nAnd the explain results:\nEXPLAIN ANALYZE SELECT a.*, b.* FROM base AS a LEFT OUTER JOIN (SELECT\nother, COUNT(value), COUNT(DISTINCT value) FROM other WHERE other =\nANY(ARRAY(SELECT DISTINCT other FROM base WHERE id IN (4, 56, 102))) GROUP\nBY other) AS b USING (other) WHERE id IN (4, 56, 102);\n\n QUERY PLAN\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Right Join (cost=27619.21..27741.23 rows=3 width=33) (actual\ntime=105.045..115.539 rows=3 loops=1)\n\n Merge Cond: (other.other = a.other)\n\n -> GroupAggregate (cost=27602.28..27711.74 rows=1001 width=20) (actual\ntime=104.989..115.452 rows=3 loops=1)\n\n Group Key: other.other\n\n InitPlan 1 (returns $0)\n\n -> Unique (cost=16.93..16.95 rows=3 width=4) (actual\ntime=0.083..0.127 rows=3 loops=1)\n\n -> Sort (cost=16.93..16.94 rows=3 width=4) (actual\ntime=0.073..0.085 rows=3 loops=1)\n\n Sort Key: base.other\n\n Sort Method: quicksort Memory: 25kB\n\n -> Index Scan using base_pkey on base\n (cost=0.29..16.91 rows=3 width=4) (actual time=0.019..0.042 rows=3 loops=1)\n\n Index Cond: (id = ANY\n('{4,56,102}'::integer[]))\n\n -> Sort (cost=27585.34..27610.20 rows=9945 width=8) (actual\ntime=99.401..107.199 rows=3035 loops=1)\n\n Sort Key: other.other\n\n Sort Method: quicksort Memory: 239kB\n\n -> Seq Scan on other (cost=0.00..26925.00 rows=9945\nwidth=8) (actual time=0.708..90.738 rows=3035 loops=1)\n\n Filter: (other = ANY ($0))\n\n Rows Removed by Filter: 996965\n\n -> Sort (cost=16.93..16.94 rows=3 width=13) (actual time=0.044..0.051\nrows=3 loops=1)\n\n Sort Key: a.other\n\n Sort Method: quicksort Memory: 25kB\n\n -> Index Scan using base_pkey on base a (cost=0.29..16.91 rows=3\nwidth=13) (actual time=0.016..0.027 rows=3 loops=1)\n\n Index Cond: (id = ANY ('{4,56,102}'::integer[]))\n\n Planning time: 4.163 ms\n\n Execution time: 115.665 ms\n\n\nEXPLAIN ANALYZE SELECT a.*, b.* FROM base AS a LEFT OUTER JOIN (SELECT\nother, COUNT(value), COUNT(DISTINCT value) FROM other GROUP BY other) AS b\nUSING (other) WHERE id IN (4, 56, 102);\n\n QUERY 
PLAN\n\n\n------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Right Join (cost=127786.02..137791.07 rows=3 width=60) (actual\ntime=7459.042..12060.805 rows=3 loops=1)\n\n Merge Cond: (other.other = a.other)\n\n -> GroupAggregate (cost=127763.19..137765.69 rows=200 width=20)\n(actual time=7143.486..12057.835 rows=830 loops=1)\n\n Group Key: other.other\n\n -> Sort (cost=127763.19..130263.31 rows=1000050 width=8) (actual\ntime=7137.594..9624.119 rows=829088 loops=1)\n\n Sort Key: other.other\n\n Sort Method: external merge Disk: 17576kB\n\n -> Seq Scan on other (cost=0.00..14425.50 rows=1000050\nwidth=8) (actual time=0.555..2727.461 rows=1000000 loops=1)\n\n -> Sort (cost=22.83..22.84 rows=3 width=40) (actual time=0.103..0.112\nrows=3 loops=1)\n\n Sort Key: a.other\n\n Sort Method: quicksort Memory: 25kB\n\n -> Bitmap Heap Scan on base a (cost=12.87..22.81 rows=3\nwidth=40) (actual time=0.048..0.064 rows=3 loops=1)\n\n Recheck Cond: (id = ANY ('{4,56,102}'::integer[]))\n\n Heap Blocks: exact=1\n\n -> Bitmap Index Scan on base_pkey (cost=0.00..12.87 rows=3\nwidth=0) (actual time=0.029..0.029 rows=3 loops=1)\n\n Index Cond: (id = ANY ('{4,56,102}'::integer[]))\n\n Planning time: 2.179 ms\n\n Execution time: 12080.172 ms\n\nI want to do a JOIN against a subquery that is doing an aggregation. The query itself is relatively straightforward, but has poor performance.Here it is:SELECT a.*, b.*    FROM base AS a    LEFT OUTER JOIN        (SELECT other, COUNT(value), COUNT(DISTINCT value) FROM other GROUP BY other) AS b    USING (other)    WHERE id IN (4, 56, 102);It's significantly faster, but more complicated (and repetitive), if I add the following:WHERE other = ANY(ARRAY(SELECT DISTINCT other FROM base WHERE id IN (4, 56, 102)))I tried adding the following:other IN (a.other)Or:other = a.otherBut I get this error:ERROR:  invalid reference to FROM-clause entry for table \"a\"LINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...                                                             
^HINT:  There is an entry for table \"a\", but it cannot be referenced from this part of the query.Is there a way to do something like that simpler query so the subquery can get better performance by filtering only to what it needs instead of doing the GROUP BY on the whole table?Thanks,DaveIn case it's helpful, here's the table definitions:CREATE TABLE base (id INTEGER PRIMARY KEY, value TEXT, other INTEGER);CREATE TABLE other (other INTEGER, value INTEGER);And the explain results:EXPLAIN ANALYZE SELECT a.*, b.* FROM base AS a LEFT OUTER JOIN (SELECT other, COUNT(value), COUNT(DISTINCT value) FROM other WHERE other = ANY(ARRAY(SELECT DISTINCT other FROM base WHERE id IN (4, 56, 102))) GROUP BY other) AS b USING (other) WHERE id IN (4, 56, 102);                                                                QUERY PLAN                                                                 ------------------------------------------------------------------------------------------------------------------------------------------- Merge Right Join  (cost=27619.21..27741.23 rows=3 width=33) (actual time=105.045..115.539 rows=3 loops=1)   Merge Cond: (other.other = a.other)   ->  GroupAggregate  (cost=27602.28..27711.74 rows=1001 width=20) (actual time=104.989..115.452 rows=3 loops=1)         Group Key: other.other         InitPlan 1 (returns $0)           ->  Unique  (cost=16.93..16.95 rows=3 width=4) (actual time=0.083..0.127 rows=3 loops=1)                 ->  Sort  (cost=16.93..16.94 rows=3 width=4) (actual time=0.073..0.085 rows=3 loops=1)                       Sort Key: base.other                       Sort Method: quicksort  Memory: 25kB                       ->  Index Scan using base_pkey on base  (cost=0.29..16.91 rows=3 width=4) (actual time=0.019..0.042 rows=3 loops=1)                             Index Cond: (id = ANY ('{4,56,102}'::integer[]))         ->  Sort  (cost=27585.34..27610.20 rows=9945 width=8) (actual time=99.401..107.199 rows=3035 loops=1)               Sort Key: other.other               Sort Method: quicksort  Memory: 239kB               ->  Seq Scan on other  (cost=0.00..26925.00 rows=9945 width=8) (actual time=0.708..90.738 rows=3035 loops=1)                     Filter: (other = ANY ($0))                     Rows Removed by Filter: 996965   ->  Sort  (cost=16.93..16.94 rows=3 width=13) (actual time=0.044..0.051 rows=3 loops=1)         Sort Key: a.other         Sort Method: quicksort  Memory: 25kB         ->  Index Scan using base_pkey on base a  (cost=0.29..16.91 rows=3 width=13) (actual time=0.016..0.027 rows=3 loops=1)               Index Cond: (id = ANY ('{4,56,102}'::integer[])) Planning time: 4.163 ms Execution time: 115.665 msEXPLAIN ANALYZE SELECT a.*, b.* FROM base AS a LEFT OUTER JOIN (SELECT other, COUNT(value), COUNT(DISTINCT value) FROM other GROUP BY other) AS b USING (other) WHERE id IN (4, 56, 102);                                                             QUERY PLAN                                                             ------------------------------------------------------------------------------------------------------------------------------------ Merge Right Join  (cost=127786.02..137791.07 rows=3 width=60) (actual time=7459.042..12060.805 rows=3 loops=1)   Merge Cond: (other.other = a.other)   ->  GroupAggregate  (cost=127763.19..137765.69 rows=200 width=20) (actual time=7143.486..12057.835 rows=830 loops=1)         Group Key: other.other         ->  Sort  (cost=127763.19..130263.31 rows=1000050 width=8) (actual time=7137.594..9624.119 rows=829088 
loops=1)               Sort Key: other.other               Sort Method: external merge  Disk: 17576kB               ->  Seq Scan on other  (cost=0.00..14425.50 rows=1000050 width=8) (actual time=0.555..2727.461 rows=1000000 loops=1)   ->  Sort  (cost=22.83..22.84 rows=3 width=40) (actual time=0.103..0.112 rows=3 loops=1)         Sort Key: a.other         Sort Method: quicksort  Memory: 25kB         ->  Bitmap Heap Scan on base a  (cost=12.87..22.81 rows=3 width=40) (actual time=0.048..0.064 rows=3 loops=1)               Recheck Cond: (id = ANY ('{4,56,102}'::integer[]))               Heap Blocks: exact=1               ->  Bitmap Index Scan on base_pkey  (cost=0.00..12.87 rows=3 width=0) (actual time=0.029..0.029 rows=3 loops=1)                     Index Cond: (id = ANY ('{4,56,102}'::integer[])) Planning time: 2.179 ms Execution time: 12080.172 ms", "msg_date": "Mon, 18 Dec 2017 17:00:05 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "WHERE IN for JOIN subquery?" }, { "msg_contents": "On Mon, Dec 18, 2017 at 5:00 PM, Dave Johansen <[email protected]>\nwrote:\n\n>\n> other = a.other\n> But I get this error:\n> ERROR: invalid reference to FROM-clause entry for table \"a\"\n>\n> LINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...\n>\n> HINT: There is an entry for table \"a\", but it cannot be referenced from\n> this part of the query.\n>\n\nOne possible solution to this error is to add the word \"LATERAL\" before\nLEFT JOIN so that the right side of the join can reference variables from\nthe left side.\n\nDavid J.\n​\n\nOn Mon, Dec 18, 2017 at 5:00 PM, Dave Johansen <[email protected]> wrote:other = a.otherBut I get this error:ERROR:  invalid reference to FROM-clause entry for table \"a\"LINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...HINT:  There is an entry for table \"a\", but it cannot be referenced from this part of the query.One possible solution to this error is to add the word \"LATERAL\" before LEFT JOIN so that the right side of the join can reference variables from the left side.David J.​", "msg_date": "Mon, 18 Dec 2017 17:10:34 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE IN for JOIN subquery?" }, { "msg_contents": "On Mon, Dec 18, 2017 at 5:10 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Mon, Dec 18, 2017 at 5:00 PM, Dave Johansen <[email protected]>\n> wrote:\n>\n>>\n>> other = a.other\n>> But I get this error:\n>> ERROR: invalid reference to FROM-clause entry for table \"a\"\n>>\n>> LINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...\n>>\n>> HINT: There is an entry for table \"a\", but it cannot be referenced from\n>> this part of the query.\n>>\n>\n> One possible solution to this error is to add the word \"LATERAL\" before\n> LEFT JOIN so that the right side of the join can reference variables from\n> the left side.\n>\n\nThat appears to be what I was looking for.\nThanks,\nDave\n\nOn Mon, Dec 18, 2017 at 5:10 PM, David G. 
Johnston <[email protected]> wrote:On Mon, Dec 18, 2017 at 5:00 PM, Dave Johansen <[email protected]> wrote:other = a.otherBut I get this error:ERROR:  invalid reference to FROM-clause entry for table \"a\"LINE 1: ...ue), COUNT(DISTINCT value) FROM other WHERE other=a.other GR...HINT:  There is an entry for table \"a\", but it cannot be referenced from this part of the query.One possible solution to this error is to add the word \"LATERAL\" before LEFT JOIN so that the right side of the join can reference variables from the left side.That appears to be what I was looking for.Thanks,Dave", "msg_date": "Mon, 18 Dec 2017 17:29:15 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE IN for JOIN subquery?" } ]
[ { "msg_contents": "Hi,\n\nWe operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n100%. These spikes appear to be due to autoanalyze kicking on our larger\ntables.\n\nOur largest table has 75 million rows and the autoanalyze scale factor is\nset to 0.05.\n\nThe documentation I've read suggests that the analyze always operates on\nthe entire table and is not incremental. Given that supposition are there\nways to control cost(especially CPU) of the autoanalyze operation? Would a\nmore aggressive autoanalyze scale factor (0.01) help. With the current\nscale factor we see an autoanalyze once a week, query performance has been\nacceptable so far, which could imply that scale factor could be increased\nif necessary.\n\nThanks,\nHabib Nahas\n\nHi,We operate an RDS postgres 9.5 instance and have periodic CPU spikes to 100%. These spikes appear to be due to autoanalyze kicking on our larger tables.Our largest table has 75 million rows and the autoanalyze scale factor is set to 0.05. The documentation I've read suggests that the analyze always operates on the entire table and is not incremental. Given that supposition are there ways to control cost(especially CPU) of the autoanalyze operation? Would a more aggressive autoanalyze scale factor (0.01) help. With the current scale factor we see an autoanalyze once a week, query performance has been acceptable so far, which could imply that scale factor could be increased if necessary. Thanks,Habib Nahas", "msg_date": "Tue, 19 Dec 2017 08:47:52 -0800", "msg_from": "Habib Nahas <[email protected]>", "msg_from_op": true, "msg_subject": "Autoanalyze CPU usage" }, { "msg_contents": "On Tue, Dec 19, 2017 at 08:47:52AM -0800, Habib Nahas wrote:\n> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> tables.\n\nNot sure if it'll help you, but for our large, insert-only tables partitioned\nby time, I made several changes from default:\n \n - near the end of month, report queries for previous day's data had poor\n statistics, because autoanalyze scale factor defaults to 0.1, so a table\n analyzed on the 24th of the month won't be analyzed again until the 26th, so\n the histogram shows that there's zero rows for previous day, causing nested\n loop over thousands of rows.\n - for separate reasons, I increased statistics target on our key columns (up\n to 3000 for one column).\n - large stats target on large tables caused (auto)analyze to use large amount\n of RAM. Therefor I changed our largest tables from monthly partition\n granuliarity (YYYYMM) to daily (YYYYMMDD). That creates what's\n traditionally considered to be an excessive number of partitions (and very\n large pg_attribute/attrdef and pg_statistic tables), but avoids the huge RAM\n issue, and works for our purposes (and I hope the traditional advice for\n number of child tables is relaxed in upcoming versions, too).\n\nOne possibility is a cronjob to set deafult \"scale factor\" to a modest/default\nvalues (0.1) during business hours and an aggressive value (0.005) off-hours.\nYou could do similar with autovacuum_max_workers ... but beware if they're\ncausing high RAM use. I believe autovacuum workers try to \"play nice\" and the\ncost are shared between all workers. 
But I suspect that's not true for CPU\ncost or RAM use, so there's nothing stopping you from having 9 workers each\nlooping around 2+GB RAM and 100% CPU doing MCV/histogram computation.\n\nMaybe that's of some use.\n\nJustin\n\n", "msg_date": "Tue, 19 Dec 2017 11:09:58 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "\n\nOn 12/19/2017 05:47 PM, Habib Nahas wrote:\n> Hi,\n> \n> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> tables.\n> \n> Our largest table has 75 million rows and the autoanalyze scale factor\n> is set to 0.05. \n> \n> The documentation I've read suggests that the analyze always operates on\n> the entire table and is not incremental. Given that supposition are\n> there ways to control cost(especially CPU) of the autoanalyze operation?\n> Would a more aggressive autoanalyze scale factor (0.01) help. With the\n> current scale factor we see an autoanalyze once a week, query\n> performance has been acceptable so far, which could imply that scale\n> factor could be increased if necessary. \n> \n\nNo, reducing the scale factor to 0.01 will not help at all, it will\nactually make the issue worse. The only thing autoanalyze does is\nrunning ANALYZE, which *always* collects a fixed-size sample. Making it\nmore frequent will not reduce the amount of work done on each run.\n\nSo the first question is if you are not using the default (0.1), i.e.\nhave you reduced it to 0.05.\n\nThe other question is why it's so CPU-intensive. Are you using the\ndefault statistics_target value (100), or have you increased that too?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 19 Dec 2017 23:03:18 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "The autoanalyze factor is set to 0.05 for the db, and we have not changed\nthe default statistics target.\n\nThe CPU spike occurred between 13:05 - 13:15. last_autoanalyze for the\ntable shows a time of 12:49; last_autovacuum does not show any activity\naround this time for any table. Checkpoint logs are also normal around this\ntime. I'd like to understand if there are any other sources of activity I\nshould be checking for that would account for the spike.\n\nUser workload is throttled to avoid excess load on the db, so a query is\nunlikely to have caused the spike. But we can dig deeper if other causes\nare ruled out.\n\nThanks\n\nOn Tue, Dec 19, 2017 at 2:03 PM, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 12/19/2017 05:47 PM, Habib Nahas wrote:\n> > Hi,\n> >\n> > We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> > 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> > tables.\n> >\n> > Our largest table has 75 million rows and the autoanalyze scale factor\n> > is set to 0.05.\n> >\n> > The documentation I've read suggests that the analyze always operates on\n> > the entire table and is not incremental. Given that supposition are\n> > there ways to control cost(especially CPU) of the autoanalyze operation?\n> > Would a more aggressive autoanalyze scale factor (0.01) help. 
With the\n> > current scale factor we see an autoanalyze once a week, query\n> > performance has been acceptable so far, which could imply that scale\n> > factor could be increased if necessary.\n> >\n>\n> No, reducing the scale factor to 0.01 will not help at all, it will\n> actually make the issue worse. The only thing autoanalyze does is\n> running ANALYZE, which *always* collects a fixed-size sample. Making it\n> more frequent will not reduce the amount of work done on each run.\n>\n> So the first question is if you are not using the default (0.1), i.e.\n> have you reduced it to 0.05.\n>\n> The other question is why it's so CPU-intensive. Are you using the\n> default statistics_target value (100), or have you increased that too?\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nThe autoanalyze factor is set to 0.05 for the db, and we have not changed the default statistics target. The CPU spike occurred between 13:05 - 13:15. last_autoanalyze for the table shows a time of 12:49; last_autovacuum does not show any activity around this time for any table. Checkpoint logs are also normal around this time. I'd like to understand if there are any other sources of activity I should be checking for that would account for the spike. User workload is throttled to avoid excess load on the db, so a query is unlikely to have caused the spike. But we can dig deeper if other causes are ruled out. ThanksOn Tue, Dec 19, 2017 at 2:03 PM, Tomas Vondra <[email protected]> wrote:\n\nOn 12/19/2017 05:47 PM, Habib Nahas wrote:\n> Hi,\n>\n> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> tables.\n>\n> Our largest table has 75 million rows and the autoanalyze scale factor\n> is set to 0.05. \n>\n> The documentation I've read suggests that the analyze always operates on\n> the entire table and is not incremental. Given that supposition are\n> there ways to control cost(especially CPU) of the autoanalyze operation?\n> Would a more aggressive autoanalyze scale factor (0.01) help. With the\n> current scale factor we see an autoanalyze once a week, query\n> performance has been acceptable so far, which could imply that scale\n> factor could be increased if necessary. \n>\n\nNo, reducing the scale factor to 0.01 will not help at all, it will\nactually make the issue worse. The only thing autoanalyze does is\nrunning ANALYZE, which *always* collects a fixed-size sample. Making it\nmore frequent will not reduce the amount of work done on each run.\n\nSo the first question is if you are not using the default (0.1), i.e.\nhave you reduced it to 0.05.\n\nThe other question is why it's so CPU-intensive. 
Are you using the\ndefault statistics_target value (100), or have you increased that too?\n\nregards\n\n--\nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 19 Dec 2017 14:53:25 -0800", "msg_from": "Habib Nahas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "As it happens our larger tables operate as a business log and are also\ninsert only.\n\n- There is no partitioning at this time since we expect to have an\nautomated process to delete rows older than a certain date.\n- Analyzing doing off-hours sounds like a good idea; if there is no other\nway to determine effect on db we may end up doing that.\n- We have an open schema and heavily depend on jsonb, so I'm not sure if\nincreasing the statistics target will be helpful.\n\nThanks\n\nOn Tue, Dec 19, 2017 at 2:03 PM, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 12/19/2017 05:47 PM, Habib Nahas wrote:\n> > Hi,\n> >\n> > We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> > 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> > tables.\n> >\n> > Our largest table has 75 million rows and the autoanalyze scale factor\n> > is set to 0.05.\n> >\n> > The documentation I've read suggests that the analyze always operates on\n> > the entire table and is not incremental. Given that supposition are\n> > there ways to control cost(especially CPU) of the autoanalyze operation?\n> > Would a more aggressive autoanalyze scale factor (0.01) help. With the\n> > current scale factor we see an autoanalyze once a week, query\n> > performance has been acceptable so far, which could imply that scale\n> > factor could be increased if necessary.\n> >\n>\n> No, reducing the scale factor to 0.01 will not help at all, it will\n> actually make the issue worse. The only thing autoanalyze does is\n> running ANALYZE, which *always* collects a fixed-size sample. Making it\n> more frequent will not reduce the amount of work done on each run.\n>\n> So the first question is if you are not using the default (0.1), i.e.\n> have you reduced it to 0.05.\n>\n> The other question is why it's so CPU-intensive. Are you using the\n> default statistics_target value (100), or have you increased that too?\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nAs it happens our larger tables operate as a business log and are also insert only. - There is no partitioning at this time since we expect to have an automated process to delete rows older than a certain date. - Analyzing doing off-hours sounds like a good idea; if there is no other way to determine effect on db we may end up doing that.- We have an open schema and heavily depend on jsonb, so I'm not sure if increasing the statistics target will be helpful.ThanksOn Tue, Dec 19, 2017 at 2:03 PM, Tomas Vondra <[email protected]> wrote:\n\nOn 12/19/2017 05:47 PM, Habib Nahas wrote:\n> Hi,\n>\n> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> tables.\n>\n> Our largest table has 75 million rows and the autoanalyze scale factor\n> is set to 0.05. \n>\n> The documentation I've read suggests that the analyze always operates on\n> the entire table and is not incremental. 
Given that supposition are\n> there ways to control cost(especially CPU) of the autoanalyze operation?\n> Would a more aggressive autoanalyze scale factor (0.01) help. With the\n> current scale factor we see an autoanalyze once a week, query\n> performance has been acceptable so far, which could imply that scale\n> factor could be increased if necessary. \n>\n\nNo, reducing the scale factor to 0.01 will not help at all, it will\nactually make the issue worse. The only thing autoanalyze does is\nrunning ANALYZE, which *always* collects a fixed-size sample. Making it\nmore frequent will not reduce the amount of work done on each run.\n\nSo the first question is if you are not using the default (0.1), i.e.\nhave you reduced it to 0.05.\n\nThe other question is why it's so CPU-intensive. Are you using the\ndefault statistics_target value (100), or have you increased that too?\n\nregards\n\n--\nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 19 Dec 2017 14:53:59 -0800", "msg_from": "Habib Nahas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "On Tue, Dec 19, 2017 at 02:37:18PM -0800, Habib Nahas wrote:\n> As it happens our larger tables operate as a business log and are also\n> insert only.\n> \n> - There is no partitioning at this time since we expect to have an\n> automated process to delete rows older than a certain date.\n\nThis is a primary use case for partitioning ; bulk DROP rather than DELETE.\n\n> - Analyzing doing off-hours sounds like a good idea; if there is no other\n> way to determine effect on db we may end up doing that.\n\nYou can also implement a manual analyze job and hope to avoid autoanalyze.\n\n> - We have an open schema and heavily depend on jsonb, so I'm not sure if\n> increasing the statistics target will be helpful.\n\nIf the increased stats target isn't useful for that, I would recommend to\ndecrease it.\n\n-- \nJustin Pryzby\nSystem Administrator\nTelsasoft\n+1-952-707-8581\n\n", "msg_date": "Tue, 19 Dec 2017 16:55:57 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "Perhaps consider running manual vacuum analyze at low load times daily if you have that opportunity. This may stop autovacuums from hitting thresholds during high load times or do the normal/aggressive autovacuum tuning to make it more aggressive during low load times and less aggressive during high load times.\n\nSent from my iPad\n\n> On Dec 19, 2017, at 5:03 PM, Tomas Vondra <[email protected]> wrote:\n> \n> \n> \n>> On 12/19/2017 05:47 PM, Habib Nahas wrote:\n>> Hi,\n>> \n>> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n>> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n>> tables.\n>> \n>> Our largest table has 75 million rows and the autoanalyze scale factor\n>> is set to 0.05. \n>> \n>> The documentation I've read suggests that the analyze always operates on\n>> the entire table and is not incremental. Given that supposition are\n>> there ways to control cost(especially CPU) of the autoanalyze operation?\n>> Would a more aggressive autoanalyze scale factor (0.01) help. With the\n>> current scale factor we see an autoanalyze once a week, query\n>> performance has been acceptable so far, which could imply that scale\n>> factor could be increased if necessary. 
\n>> \n> \n> No, reducing the scale factor to 0.01 will not help at all, it will\n> actually make the issue worse. The only thing autoanalyze does is\n> running ANALYZE, which *always* collects a fixed-size sample. Making it\n> more frequent will not reduce the amount of work done on each run.\n> \n> So the first question is if you are not using the default (0.1), i.e.\n> have you reduced it to 0.05.\n> \n> The other question is why it's so CPU-intensive. Are you using the\n> default statistics_target value (100), or have you increased that too?\n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n\n\n", "msg_date": "Tue, 19 Dec 2017 20:10:51 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "Habib Nahas wrote:\n> The CPU spike occurred between 13:05 - 13:15. last_autoanalyze for the table\n> shows a time of 12:49; last_autovacuum does not show any activity around\n> this time for any table. Checkpoint logs are also normal around this time.\n> I'd like to understand if there are any other sources of activity I\n> should be checking for that would account for the spike.\n\nlast_autoanalyze is set after autoanalyze is done, so that would suggest\nthat autoanalyze is not the problem.\n\nIt can be tough to figure out where the activity is coming from unless\ncou can catch it in the act. You could log all statements (though the amount\nof log may be prohibitive and can cripple performance), you could log\njust long running statements in the hope that these are at fault, you\ncould log connections and disconnections and hope to find the problem\nthat way. Maybe logging your applications can help too.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 20 Dec 2017 08:15:39 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "On Tue, Dec 19, 2017 at 7:47 PM, Habib Nahas <[email protected]> wrote:\n\n> Hi,\n>\n> We operate an RDS postgres 9.5 instance and have periodic CPU spikes to\n> 100%. These spikes appear to be due to autoanalyze kicking on our larger\n> tables.\n>\n\nHow did you draw such conclusion? How did you find that autoanalyze is the\nreason of CPU spikes?\n\nOn Tue, Dec 19, 2017 at 7:47 PM, Habib Nahas <[email protected]> wrote:Hi,We operate an RDS postgres 9.5 instance and have periodic CPU spikes to 100%. These spikes appear to be due to autoanalyze kicking on our larger tables.How did you draw such conclusion? How did you find that autoanalyze is the reason of CPU spikes?", "msg_date": "Wed, 20 Dec 2017 11:24:09 +0300", "msg_from": "Nikolay Samokhvalov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autoanalyze CPU usage" }, { "msg_contents": "Thanks for confirming that it is the end timestamp, the doc wasn't quite\nclear if it was the start or end.\n\nThere is a gap in our monitoring that makes diagnosis of such events very\ndifficult after the fact. Something like a 10-sec periodic dump of\npg_stat_activity along with a similar dump of pg_top would have been very\nhelpful here.\n\n-Habib\n\n\n\nOn Tue, Dec 19, 2017 at 11:15 PM, Laurenz Albe <[email protected]>\nwrote:\n\n> Habib Nahas wrote:\n> > The CPU spike occurred between 13:05 - 13:15. last_autoanalyze for the\n> table\n> > shows a time of 12:49; last_autovacuum does not show any activity around\n> > this time for any table. 
Checkpoint logs are also normal around this\n> time.\n> > I'd like to understand if there are any other sources of activity I\n> > should be checking for that would account for the spike.\n>\n> last_autoanalyze is set after autoanalyze is done, so that would suggest\n> that autoanalyze is not the problem.\n>\n> It can be tough to figure out where the activity is coming from unless\n> cou can catch it in the act. You could log all statements (though the\n> amount\n> of log may be prohibitive and can cripple performance), you could log\n> just long running statements in the hope that these are at fault, you\n> could log connections and disconnections and hope to find the problem\n> that way. Maybe logging your applications can help too.\n>\n> Yours,\n> Laurenz Albe\n>\n\nThanks for confirming that it is the end timestamp, the doc wasn't quite clear if it was the start or end.  There is a gap in our monitoring that makes diagnosis of such events very difficult after the fact. Something like a 10-sec periodic dump of pg_stat_activity along with a similar dump of pg_top would have been very helpful here. -HabibOn Tue, Dec 19, 2017 at 11:15 PM, Laurenz Albe <[email protected]> wrote:Habib Nahas wrote:\n> The CPU spike occurred between 13:05 - 13:15. last_autoanalyze for the table\n> shows a time of 12:49; last_autovacuum does not show any activity around\n> this time for any table. Checkpoint logs are also normal around this time.\n> I'd like to understand if there are any other sources of activity I\n> should be checking for that would account for the spike.\n\nlast_autoanalyze is set after autoanalyze is done, so that would suggest\nthat autoanalyze is not the problem.\n\nIt can be tough to figure out where the activity is coming from unless\ncou can catch it in the act.  You could log all statements (though the amount\nof log may be prohibitive and can cripple performance), you could log\njust long running statements in the hope that these are at fault, you\ncould log connections and disconnections and hope to find the problem\nthat way.  Maybe logging your applications can help too.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 20 Dec 2017 06:38:59 -0800", "msg_from": "Habib Nahas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autoanalyze CPU usage" } ]
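A hedged sketch of the per-table tuning options discussed in this thread (scale factor, statistics target, scheduled manual ANALYZE). The table and column names below are placeholders and the numbers are examples rather than recommendations; on RDS the instance-wide equivalents are set through the parameter group, but these per-table settings can be applied with plain SQL:

-- Placeholder table/column names; values are illustrative only.
ALTER TABLE big_log_table
    SET (autovacuum_analyze_scale_factor = 0.02,  -- analyze after ~2% new rows
         autovacuum_vacuum_cost_delay    = 20);   -- throttle the autovacuum worker's I/O batches

-- If the planner never needs detailed statistics on the wide jsonb column,
-- a lower per-column target reduces the CPU spent computing its stats:
ALTER TABLE big_log_table ALTER COLUMN payload_jsonb SET STATISTICS 10;

-- Alternatively, run ANALYZE from a scheduled job in a quiet window so that
-- autoanalyze rarely has a reason to fire during business hours:
ANALYZE big_log_table;

-- To check when a table was last (auto)analyzed:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'big_log_table';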
[ { "msg_contents": "\nHello! We have a large table 11GB ( about 37 million records ) and we need to alter a table - add a new column with default values is false. Also 'NOT NULL' is required.\nSo, first I've done:\n\nALTER TABLE clusters ALTER COLUMN \"is_paid\";\n\nafter that:\n\nUPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;\n\nEverything went ok. Then I tried to run it again for an interval of 1 years. And I got that no one can't see - the was no available space on a disk. The reason was WAL-files ate everything.\nMaster-server couldn't send some WAL-file to their replicas. Bandwidth wasn't enough.\n\nWell, I'm searching for a better idea to update the table.\nSolutions I found.\n1. Separate my UPDATE by chunks.\n2. Alter a table using a new temporary table, but it's not convenient for me because there is a lot of foreign keys and indexes.\n3. Hot-update. This is the most interesting case for me.\nSpeaking of HOT-update https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/\nThe article says: it might be useful for tables that change often and moreover It would be the best way to increase the speed of UPDATE.\nSo, my questions are will it work for all tuples? It says that - no https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-fillfactor-value/, but I could not find a confirmation in official postresql's documentation.\nWhy do I need to launch vacuum after updating?\nHow should I reduce the better fillfactor?\nWhat will be with WAL-files it this case?\nThank you!\n\n\n\nPostgreSQL 9.6\n\n-- \nTimokhin 'maf' Maxim\n\n", "msg_date": "Fri, 22 Dec 2017 19:46:03 +0300", "msg_from": "Timokhin Maxim <[email protected]>", "msg_from_op": true, "msg_subject": "Updating a large table" }, { "msg_contents": "Hello,\nDoes the tale have foreign keys, if not you could create another table may be unlogged and then do the changes you want via INSERT ... SELECT; and finally convert the unlogged table to logged table. \nIn addition to that there are several ways to increase data writing performance for example the following configuration settings have a impact on write performance: synchronous_commit, commit_delay, max_wal_size, wal_buffers and maintenance_work_mem. \nRegards \n On Friday, December 22, 2017, 6:59:43 PM GMT+1, Timokhin Maxim <[email protected]> wrote: \n \n \nHello! We have a large table 11GB ( about 37 million records ) and we need to alter a table - add a new column with default values is false. Also 'NOT NULL' is required.\nSo, first I've done:\n\nALTER TABLE clusters ALTER COLUMN \"is_paid\";\n\nafter that:\n\nUPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;\n\nEverything went ok. Then I tried to run it again for an interval of 1 years. And I got that no one can't see - the was no available space on a disk. The reason was WAL-files ate everything.\nMaster-server couldn't send some WAL-file to their replicas. Bandwidth wasn't enough.\n\nWell, I'm searching for a better idea to update the table.\nSolutions I found.\n1. Separate my UPDATE by chunks.\n2. Alter a table using a new temporary table, but it's not convenient for me because there is a lot of foreign keys and indexes.\n3. Hot-update. 
This is the most interesting case for me.\nSpeaking of HOT-update https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/\nThe article says: it might be useful for tables that change often and moreover It would be the best way to increase the speed of UPDATE.\nSo, my questions are will it work for all tuples? It says that - no https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-fillfactor-value/, but I could not find a confirmation in official postresql's documentation.\nWhy do I need to launch vacuum after updating?\nHow should I reduce the better fillfactor?\nWhat will be with WAL-files it this case?\nThank you!\n\n\n\nPostgreSQL 9.6\n\n-- \nTimokhin 'maf' Maxim\n\n \n\nHello,Does the tale have foreign keys, if not you could create another table may be unlogged and then do the changes you want via INSERT ... SELECT; and finally convert the unlogged table to logged table. In addition to that there are several ways to increase data writing performance for example the following configuration settings have a impact on write performance: synchronous_commit, commit_delay, max_wal_size, wal_buffers and maintenance_work_mem. Regards \n\n\n\n On Friday, December 22, 2017, 6:59:43 PM GMT+1, Timokhin Maxim <[email protected]> wrote:\n \n\n\nHello! We have a large table 11GB ( about 37 million records ) and we need to alter a table - add a new column with default values is false. Also 'NOT NULL' is required.So, first I've done:ALTER TABLE clusters ALTER COLUMN \"is_paid\";after that:UPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;Everything went ok. Then I tried to run it again for an interval of 1 years. And I got that no one can't see - the was no available space on a disk. The reason was WAL-files ate everything.Master-server couldn't send some WAL-file to their replicas. Bandwidth wasn't enough.Well, I'm searching for a better idea to update the table.Solutions I found.1. Separate my UPDATE by chunks.2. Alter a table using a new temporary table, but it's not convenient for me because there is a lot of foreign keys and indexes.3. Hot-update. This is the most interesting case for me.Speaking of HOT-update https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/The article says: it might be useful for tables that change often and moreover It would be the best way to increase the speed of UPDATE.So, my questions are will it work for all tuples? It says that - no https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-fillfactor-value/, but I could not find a confirmation in official postresql's documentation.Why do I need to launch vacuum after updating?How should I reduce the better fillfactor?What will be with WAL-files it this case?Thank you!PostgreSQL 9.6-- Timokhin 'maf' Maxim", "msg_date": "Sat, 23 Dec 2017 11:31:28 +0000 (UTC)", "msg_from": "salah jubeh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating a large table" }, { "msg_contents": "\nOn 12/22/2017 05:46 PM, Timokhin Maxim wrote:\n> \n> Hello! We have a large table 11GB ( about 37 million records ) and we\n> need to alter a table - add a new column with default values is \n> false. Also 'NOT NULL' is required.\n>\n> So, first I've done:\n>\n> ALTER TABLE clusters ALTER COLUMN \"is_paid\";\n> \n\nThat seems somewhat incomplete ... 
what exactly did the ALTER do?\n\n> after that:\n> \n> UPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;\n> \n> Everything went ok. Then I tried to run it again for an interval of 1\n> years. And I got that no one can't see - the was no available space\n> on a disk. The reason was WAL-files ate everything.\n> Master-server couldn't send some WAL-file to their replicas. Bandwidth wasn't enough.\n> \n\nWell, then perhaps the best solution is to add more disk space and/or\nmake sure the network bandwidth is sufficient?\n\nIn any case, don't forget this may also need to update all indexes on\nthe table, because the new row versions will end up on different pages.\nSo while the table has 11GB, this update may need much more WAL space\nthan that.\n\n> Well, I'm searching for a better idea to update the table.\n> Solutions I found.\n> 1. Separate my UPDATE by chunks.\n\nIf this is a one-time change, this is probably the best option.\n\n> 2. Alter a table using a new temporary table, but it's not convenient\n> for me because there is a lot of foreign keys and indexes.\nRight.\n\n> 3. Hot-update. This is the most interesting case for me.\n> Speaking of HOT-update https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/\n> The article says: it might be useful for tables that change often and moreover It would be the best way to increase the speed of UPDATE.\n\nFirst of all, to make HOT possible there would have to be enough free\nspace on the pages. As you need to update the whole table, that means\neach table would have to be only 50% full. That's unlikely to be true,\nand you can't fix that at this point.\n\n> So, my questions are will it work for all tuples? It says that - no \n> https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change- \n> fillfactor-value/, but I could not find a confirmation in official \n> postresql's documentation.\nNot sure I understand your question, but HOT can only happen when two\nconditions are met:\n\n1) the update does not change any indexed column\n\nThis is likely met, assuming you don't have an index on is_paid.\n\n2) there's enough space on the same page for the new row version\n\nThis is unlikely to be true, because the default fillfactor for tables\nis 90%. You may change fillfactor using ALTER TABLE, but that only\napplies to new data.\n\nMoreover, as the article says - this is useful for tables that change\noften. Which is not quite what one-time table rewrite does.\n\nSo HOT is not the solution you're looking for.\n\n> Why do I need to launch vacuum after updating?\n\nYou don't need to launch vacuum - autovacuum will take care of that\neventually. But you may do that, to do the cleanup when it's convenient\nfor you.\n\n> How should I reduce the better fillfactor?\n\nFor example to change fillfactor to 75% (i.e. 25% free space):\n\nALTER TABLE t SET (fillfactor = 75);\n\nBut as I said, it's not a solution for you.\n\n> What will be with WAL-files it this case?\n\nNot sure what you mean.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Sat, 23 Dec 2017 21:58:54 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating a large table" }, { "msg_contents": "\nHello Tomas! 
Thank you for the useful answer!\n \n\n23.12.2017, 23:58, \"Tomas Vondra\" <[email protected]>:\n> On 12/22/2017 05:46 PM, Timokhin Maxim wrote:\n>>  Hello! We have a large table 11GB ( about 37 million records ) and we\n>>  need to alter a table - add a new column with default values is\n>>  false. Also 'NOT NULL' is required.\n>>\n>>  So, first I've done:\n>>\n>>  ALTER TABLE clusters ALTER COLUMN \"is_paid\";\n>\n> That seems somewhat incomplete ... what exactly did the ALTER do?\n\nI'll try to explain what exactly I meant.\nALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN NOT NULL DEFAULT FALSE;\nWhat exactly I need.\nBut that query would lock the whole table for about 40 minutes. I decided to separate it like:\n1. ALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN DEFAULT FALSE;\n2. UPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE; ( This was needed as soon as possible )\n3. UPDATE another part by chunks \n4. set NOT NULL for the table.\n\nI was thinking about how to optimize the 3th step.\nWell, my solution was to write a script which runs two threads. The first one UPDATE \"is_paid\" by chunks, another one checks my metrics. If something is becoming wrong first thread stops until metrics are good.\n\nThank you, Tomas.\n\n>\n>>  after that:\n>>\n>>  UPDATE clusters SET is_paid = DEFAULT where ctime <= now() - interval '720h' AND is_paid != FALSE;\n>>\n>>  Everything went ok. Then I tried to run it again for an interval of 1\n>>  years. And I got that no one can't see - the was no available space\n>>  on a disk. The reason was WAL-files ate everything.\n>>  Master-server couldn't send some WAL-file to their replicas. Bandwidth wasn't enough.\n>\n> Well, then perhaps the best solution is to add more disk space and/or\n> make sure the network bandwidth is sufficient?\n>\n> In any case, don't forget this may also need to update all indexes on\n> the table, because the new row versions will end up on different pages.\n> So while the table has 11GB, this update may need much more WAL space\n> than that.\n>\nGot it, thank you!\n>>  Well, I'm searching for a better idea to update the table.\n>>  Solutions I found.\n>>  1. Separate my UPDATE by chunks.\n>\n> If this is a one-time change, this is probably the best option.\n>\nExactly, thank you!\n\n>>  2. Alter a table using a new temporary table, but it's not convenient\n>>  for me because there is a lot of foreign keys and indexes.\n>\n> Right.\n>\n>>  3. Hot-update. This is the most interesting case for me.\n>>  Speaking of HOT-update https://www.dbrnd.com/2016/03/postgresql-the-awesome-table-fillfactor-to-speedup-update-and-select-statement/\n>>  The article says: it might be useful for tables that change often and moreover It would be the best way to increase the speed of UPDATE.\n>\n> First of all, to make HOT possible there would have to be enough free\n> space on the pages. As you need to update the whole table, that means\n> each table would have to be only 50% full. That's unlikely to be true,\n> and you can't fix that at this point.\n>\n>>  So, my questions are will it work for all tuples? 
It says that - no\n>>  https://www.dbrnd.com/2016/03/postgresql-alter-table-to-change-\n>>  fillfactor-value/, but I could not find a confirmation in official\n>>  postresql's documentation.\n>\n> Not sure I understand your question, but HOT can only happen when two\n> conditions are met:\n>\n> 1) the update does not change any indexed column\n>\n> This is likely met, assuming you don't have an index on is_paid.\n>\n> 2) there's enough space on the same page for the new row version\n>\n> This is unlikely to be true, because the default fillfactor for tables\n> is 90%. You may change fillfactor using ALTER TABLE, but that only\n> applies to new data.\n>\n> Moreover, as the article says - this is useful for tables that change\n> often. Which is not quite what one-time table rewrite does.\n>\n> So HOT is not the solution you're looking for.\n>\n>>  Why do I need to launch vacuum after updating?\n>\n> You don't need to launch vacuum - autovacuum will take care of that\n> eventually. But you may do that, to do the cleanup when it's convenient\n> for you.\n>\n>>  How should I reduce the better fillfactor?\n>\n> For example to change fillfactor to 75% (i.e. 25% free space):\n>\n> ALTER TABLE t SET (fillfactor = 75);\n>\n> But as I said, it's not a solution for you.\n>\n>>  What will be with WAL-files it this case?\n>\n> Not sure what you mean.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 09 Jan 2018 15:18:48 +0300", "msg_from": "Timokhin Maxim <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updating a large table" }, { "msg_contents": "Hello\n\n> 1. ALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN DEFAULT FALSE;\nthis is wrong. To avoid large table lock you need DEFAULT NULL:\nALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN DEFAULT NULL;\nDefault null changes only system catalog, default with any non-null value will rewrite all rows. After adding column you can set default value - it applied only for future inserts:\nALTER TABLE clusters ALTER COLUMN \"is_paid\" SET DEFAULT FALSE;\n\nAnd then you can update all old rows in table by small chunks. Finally, when here is no NULL values you can set not null:\nALTER TABLE clusters ALTER COLUMN \"is_paid\" SET NOT NULL;\nBut unfortunately this locks table for some time - smaller what rewrite time, but time of full seqscan. I hope my patch [1] will be merged and not null can be set in future by temporary adding check constraint (not valid, then validate) - which not require large table lock\n\n[1] https://www.postgresql.org/message-id/flat/[email protected]#[email protected]\n\nRegards, Sergei\n\n", "msg_date": "Tue, 09 Jan 2018 15:53:44 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating a large table" }, { "msg_contents": "Hello, Sergey!\n\n09.01.2018, 15:53, \"Sergei Kornilov\" <[email protected]>:\n> Hello\n>\n>>  1. ALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN DEFAULT FALSE;\n>\n> this is wrong. To avoid large table lock you need DEFAULT NULL:\n> ALTER TABLE clusters ADD COLUMN \"is_paid\" BOOLEAN DEFAULT NULL;\n> Default null changes only system catalog, default with any non-null value will rewrite all rows. After adding column you can set default value - it applied only for future inserts:\n> ALTER TABLE clusters ALTER COLUMN \"is_paid\" SET DEFAULT FALSE;\n>\n> And then you can update all old rows in table by small chunks. 
Finally, when here is no NULL values you can set not null:\nWhat you wrote are exactly I'm doing. Moreover, I'm checking current metrics to avoid previously problems.\n\n> ALTER TABLE clusters ALTER COLUMN \"is_paid\" SET NOT NULL;\n> But unfortunately this locks table for some time - smaller what rewrite time, but time of full seqscan. I hope my patch [1] will be merged and not null can be set in future by temporary adding check constraint (not valid, then validate) - which not require large table lock\nHope your commit will be merged. It will be realy useful.\n>\n> [1] https://www.postgresql.org/message-id/flat/[email protected]#[email protected]\n>\n> Regards, Sergei\n\n-- \nTimokhin 'maf' Maxim\n\n", "msg_date": "Tue, 09 Jan 2018 19:31:17 +0300", "msg_from": "Timokhin Maxim <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Updating a large table" } ]
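A sketch of the sequence Sergei describes, written out end to end for the 9.6 behaviour discussed in the thread (where ADD COLUMN with a non-null default rewrites the whole table). "id" stands in for whatever the clusters table's primary key actually is, and the chunk size is arbitrary:

-- Step 1: catalog-only change, needs only a brief lock on 9.6:
ALTER TABLE clusters ADD COLUMN is_paid boolean DEFAULT NULL;

-- Step 2: give future inserts the real default (does not touch existing rows):
ALTER TABLE clusters ALTER COLUMN is_paid SET DEFAULT false;

-- Step 3: backfill old rows in small transactions so the WAL produced per commit
-- stays bounded and the replicas can keep up. "id" is a placeholder for the
-- primary key and 10000 is an illustrative chunk size:
UPDATE clusters
SET    is_paid = false
WHERE  id IN (SELECT id FROM clusters WHERE is_paid IS NULL LIMIT 10000);
-- ...repeat, pausing between chunks and checking replication lag, until the
-- statement reports UPDATE 0...

-- Step 4: only once no NULLs remain; this scans the table but does not rewrite it:
ALTER TABLE clusters ALTER COLUMN is_paid SET NOT NULL;

Backfilling through the primary key in bounded chunks keeps each transaction's WAL small, which addresses the original failure mode in this thread: WAL being generated faster than the disk and the replicas could absorb it.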
[ { "msg_contents": "Hi there,\n\nWe are testing a new application to try to find performance issues.\n\nAWS RDS m4.large 500GB storage (SSD)\n\nOne table only, called Messages:\n\nUuid\nCountry (ISO)\nRole (Text)\nUser id (Text)\nGroupId (integer)\nChannel (text)\nTitle (Text)\nPayload (JSON, up to 20kb)\nStarts_in (UTC)\nExpires_in (UTC)\nSeen (boolean)\nDeleted (boolean)\nLastUpdate (UTC)\nCreated_by (UTC)\nCreated_in (UTC)\n\nIndexes:\n\nUUID (PK)\nUserID + Country (main index)\nLastUpdate\nGroupID\n\n\nWe inserted 160MM rows, around 2KB each. No partitioning.\n\nInsert started at around 3.000 inserts per second, but (as expected)\nstarted to slow down as the number of rows increased. In the end we got\naround 500 inserts per second.\n\nQueries by Userd_ID + Country took less than 2 seconds, but while the batch\ninsert was running the queries took over 20 seconds!!!\n\nWe had 20 Lambda getting messages from SQS and bulk inserting them into\nPostgresql.\n\nThe insert performance is important, but we would slow it down if needed in\norder to ensure a more flat query performance. (Below 2 seconds). Each\nquery (userId + country) returns around 100 diferent messages, which are\nfiltered and order by the synchronous Lambda function. So we don't do any\nspecial filtering, sorting, ordering or full text search in Postgres. In\nsome ways we use it more like a glorified file system. :)\n\nWe are going to limit the number of lambda workers to 1 or 2, and then run\nsome queries concurrently to see if the query performance is not affect too\nmuch. We aim to get at least 50 queries per second (returning 100 messages\neach) under 2 seconds, even when there is millions of messages on SQS being\ninserted into PG.\n\nWe haven't done any performance tuning in the DB.\n\nWith all that said, the question is:\n\nWhat can be done to ensure good query performance (UserID+ country) even\nwhen the bulk insert is running (low priority).\n\nWe are limited to use AWS RDS at the moment.\n\nCheers\n\nHi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. 
We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Sun, 24 Dec 2017 17:51:18 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Batch insert heavily affecting query performance." }, { "msg_contents": "Are the inserts being done through one connection or multiple connections concurrently?\n\nSent from my iPhone\n\n> On Dec 24, 2017, at 2:51 PM, Jean Baro <[email protected]> wrote:\n> \n> Hi there,\n> \n> We are testing a new application to try to find performance issues.\n> \n> AWS RDS m4.large 500GB storage (SSD)\n> \n> One table only, called Messages:\n> \n> Uuid\n> Country (ISO)\n> Role (Text)\n> User id (Text)\n> GroupId (integer)\n> Channel (text)\n> Title (Text)\n> Payload (JSON, up to 20kb)\n> Starts_in (UTC)\n> Expires_in (UTC)\n> Seen (boolean)\n> Deleted (boolean)\n> LastUpdate (UTC)\n> Created_by (UTC)\n> Created_in (UTC)\n> \n> Indexes:\n> \n> UUID (PK)\n> UserID + Country (main index)\n> LastUpdate \n> GroupID \n> \n> \n> We inserted 160MM rows, around 2KB each. No partitioning.\n> \n> Insert started at around 3.000 inserts per second, but (as expected) started to slow down as the number of rows increased. In the end we got around 500 inserts per second.\n> \n> Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!\n> \n> We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. \n> \n> The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)\n> \n> We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.\n> \n> We haven't done any performance tuning in the DB. \n> \n> With all that said, the question is:\n> \n> What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).\n> \n> We are limited to use AWS RDS at the moment.\n> \n> Cheers\n> \n> \n\n\n", "msg_date": "Sun, 24 Dec 2017 18:52:07 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Multiple connections, but we are going to test it with only one. 
Would it\nmake any difference?\n\nThanks\n\n\n\nEm 24 de dez de 2017 21:52, \"[email protected]\" <[email protected]>\nescreveu:\n\n> Are the inserts being done through one connection or multiple connections\n> concurrently?\n>\n> Sent from my iPhone\n>\n> > On Dec 24, 2017, at 2:51 PM, Jean Baro <[email protected]> wrote:\n> >\n> > Hi there,\n> >\n> > We are testing a new application to try to find performance issues.\n> >\n> > AWS RDS m4.large 500GB storage (SSD)\n> >\n> > One table only, called Messages:\n> >\n> > Uuid\n> > Country (ISO)\n> > Role (Text)\n> > User id (Text)\n> > GroupId (integer)\n> > Channel (text)\n> > Title (Text)\n> > Payload (JSON, up to 20kb)\n> > Starts_in (UTC)\n> > Expires_in (UTC)\n> > Seen (boolean)\n> > Deleted (boolean)\n> > LastUpdate (UTC)\n> > Created_by (UTC)\n> > Created_in (UTC)\n> >\n> > Indexes:\n> >\n> > UUID (PK)\n> > UserID + Country (main index)\n> > LastUpdate\n> > GroupID\n> >\n> >\n> > We inserted 160MM rows, around 2KB each. No partitioning.\n> >\n> > Insert started at around 3.000 inserts per second, but (as expected)\n> started to slow down as the number of rows increased. In the end we got\n> around 500 inserts per second.\n> >\n> > Queries by Userd_ID + Country took less than 2 seconds, but while the\n> batch insert was running the queries took over 20 seconds!!!\n> >\n> > We had 20 Lambda getting messages from SQS and bulk inserting them into\n> Postgresql.\n> >\n> > The insert performance is important, but we would slow it down if needed\n> in order to ensure a more flat query performance. (Below 2 seconds). Each\n> query (userId + country) returns around 100 diferent messages, which are\n> filtered and order by the synchronous Lambda function. So we don't do any\n> special filtering, sorting, ordering or full text search in Postgres. In\n> some ways we use it more like a glorified file system. :)\n> >\n> > We are going to limit the number of lambda workers to 1 or 2, and then\n> run some queries concurrently to see if the query performance is not affect\n> too much. We aim to get at least 50 queries per second (returning 100\n> messages each) under 2 seconds, even when there is millions of messages on\n> SQS being inserted into PG.\n> >\n> > We haven't done any performance tuning in the DB.\n> >\n> > With all that said, the question is:\n> >\n> > What can be done to ensure good query performance (UserID+ country) even\n> when the bulk insert is running (low priority).\n> >\n> > We are limited to use AWS RDS at the moment.\n> >\n> > Cheers\n> >\n> >\n>\n>\n\nMultiple connections, but we are going to test it with only one. Would it make any difference?Thanks Em 24 de dez de 2017 21:52, \"[email protected]\" <[email protected]> escreveu:Are the inserts being done through one connection or multiple connections concurrently?\n\nSent from my iPhone\n\n> On Dec 24, 2017, at 2:51 PM, Jean Baro <[email protected]> wrote:\n>\n> Hi there,\n>\n> We are testing a new application to try to find performance issues.\n>\n> AWS RDS m4.large 500GB storage (SSD)\n>\n> One table only, called Messages:\n>\n> Uuid\n> Country  (ISO)\n> Role (Text)\n> User id  (Text)\n> GroupId (integer)\n> Channel (text)\n> Title (Text)\n> Payload (JSON, up to 20kb)\n> Starts_in (UTC)\n> Expires_in (UTC)\n> Seen (boolean)\n> Deleted (boolean)\n> LastUpdate (UTC)\n> Created_by (UTC)\n> Created_in (UTC)\n>\n> Indexes:\n>\n> UUID (PK)\n> UserID + Country (main index)\n> LastUpdate\n> GroupID\n>\n>\n> We inserted 160MM rows, around 2KB each. 
No partitioning.\n>\n> Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.\n>\n> Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!\n>\n> We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql.\n>\n> The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)\n>\n> We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.\n>\n> We haven't done any performance tuning in the DB.\n>\n> With all that said, the question is:\n>\n> What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).\n>\n> We are limited to use AWS RDS at the moment.\n>\n> Cheers\n>\n>", "msg_date": "Sun, 24 Dec 2017 22:09:44 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Yes it would/does make a difference! When you do it with one connection \nyou should see a big performance gain. Delayed, granted, extend locks \n(locktype=extend) can happen due to many concurrent connections trying \nto insert into the same table at the same time. Each insert request \nresults in an extend lock (8k extension), which blocks other writers. \nWhat normally happens is the these extend locks happen so fast that you \nhardly ever see them in the pg_locks table, except in the case where \nmany concurrent connections are trying to do a lot of inserts into the \nsame table. The following query will show if this is the case:\n\nselect * from pg_locks where granted = false and locktype = 'extend';\n\n> Jean Baro <mailto:[email protected]>\n> Sunday, December 24, 2017 7:09 PM\n> Multiple connections, but we are going to test it with only one. Would \n> it make any difference?\n>\n> Thanks\n>\n>\n>\n> [email protected] <mailto:[email protected]>\n> Sunday, December 24, 2017 6:52 PM\n> Are the inserts being done through one connection or multiple \n> connections concurrently?\n>\n> Sent from my iPhone\n>\n>\n>\n> Jean Baro <mailto:[email protected]>\n> Sunday, December 24, 2017 2:51 PM\n> Hi there,\n>\n> We are testing a new application to try to find performance issues.\n>\n> AWS RDS m4.large 500GB storage (SSD)\n>\n> One table only, called Messages:\n>\n> Uuid\n> Country (ISO)\n> Role (Text)\n> User id (Text)\n> GroupId (integer)\n> Channel (text)\n> Title (Text)\n> Payload (JSON, up to 20kb)\n> Starts_in (UTC)\n> Expires_in (UTC)\n> Seen (boolean)\n> Deleted (boolean)\n> LastUpdate (UTC)\n> Created_by (UTC)\n> Created_in (UTC)\n>\n> Indexes:\n>\n> UUID (PK)\n> UserID + Country (main index)\n> LastUpdate\n> GroupID\n>\n>\n> We inserted 160MM rows, around 2KB each. 
No partitioning.\n>\n> Insert started at around 3.000 inserts per second, but (as expected) \n> started to slow down as the number of rows increased. In the end we \n> got around 500 inserts per second.\n>\n> Queries by Userd_ID + Country took less than 2 seconds, but while the \n> batch insert was running the queries took over 20 seconds!!!\n>\n> We had 20 Lambda getting messages from SQS and bulk inserting them \n> into Postgresql.\n>\n> The insert performance is important, but we would slow it down if \n> needed in order to ensure a more flat query performance. (Below 2 \n> seconds). Each query (userId + country) returns around 100 diferent \n> messages, which are filtered and order by the synchronous Lambda \n> function. So we don't do any special filtering, sorting, ordering or \n> full text search in Postgres. In some ways we use it more like a \n> glorified file system. :)\n>\n> We are going to limit the number of lambda workers to 1 or 2, and then \n> run some queries concurrently to see if the query performance is not \n> affect too much. We aim to get at least 50 queries per second \n> (returning 100 messages each) under 2 seconds, even when there is \n> millions of messages on SQS being inserted into PG.\n>\n> We haven't done any performance tuning in the DB.\n>\n> With all that said, the question is:\n>\n> What can be done to ensure good query performance (UserID+ country) \n> even when the bulk insert is running (low priority).\n>\n> We are limited to use AWS RDS at the moment.\n>\n> Cheers\n>\n>\n\n\n\n\nYes it would/does make a \ndifference!  When you do it with one connection you should see a big \nperformance gain.  Delayed, granted, extend locks (locktype=extend) can \nhappen due to many concurrent connections trying to insert into the same\n table at the same time. Each insert request results in an extend lock \n(8k extension), which blocks other writers. What normally happens is the\n these extend locks happen so fast that you hardly ever see them in the \npg_locks table, except in the case where many concurrent connections are\n trying to do a lot of inserts into the same table. The following query \nwill show if this is the case:\n\nselect * from pg_locks where granted = false and locktype = 'extend';\n\n\n\n \nJean Baro Sunday,\n December 24, 2017 7:09 PM \nMultiple \nconnections, but we are going to test it with only one. Would it make \nany difference?Thanks \n\n \[email protected] Sunday,\n December 24, 2017 6:52 PM \nAre the inserts being done\n through one connection or multiple connections concurrently?Sent\n from my iPhone\n \nJean Baro Sunday,\n December 24, 2017 2:51 PM \nHi there,We are testing a new application to\n try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID \n(PK)UserID + Country (main index)LastUpdate GroupID We \ninserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 \ninserts per second, but (as expected) started to slow down as the number\n of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took\n less than 2 seconds, but while the batch insert was running the queries\n took over 20 seconds!!!We\n had 20 Lambda getting messages from SQS and bulk inserting them into \nPostgresql. 
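To make the single-connection suggestion above concrete, here is a minimal sketch of a batched, single-writer insert. The table and column names are hypothetical (a simplified stand-in for the Messages layout described in the thread), and the row count per statement is only an example; the point is that one connection issuing multi-row INSERTs requests far fewer relation extensions than 20 concurrent writers issuing single-row INSERTs.

-- hypothetical, simplified stand-in for the real table (illustration only)
CREATE TABLE IF NOT EXISTS messages_demo (
    id           text PRIMARY KEY,
    user_id      text NOT NULL,
    user_country text NOT NULL,
    payload      json NOT NULL,
    last_update  timestamptz NOT NULL
);

-- one connection, one transaction, many rows per INSERT statement
BEGIN;
INSERT INTO messages_demo (id, user_id, user_country, payload, last_update)
SELECT md5(g::text || clock_timestamp()::text),  -- stand-in for a UUID string
       'user_' || (g % 100000),
       'BR',
       '{}'::json,
       now()
FROM generate_series(1, 5000) AS g;
COMMIT;

COPY ... FROM STDIN achieves the same effect with even less per-row overhead.
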
The insert \nperformance is important, but we would slow it down if needed in order \nto ensure a more flat query performance. (Below 2 seconds). Each query \n(userId + country) returns around 100 diferent messages, which are \nfiltered and order by the synchronous Lambda function. So we don't do \nany special filtering, sorting, ordering or full text search in \nPostgres. In some ways we use it more like a glorified file system. :)We are going to limit the number \nof lambda workers to 1 or 2, and then run some queries concurrently to \nsee if the query performance is not affect too much. We aim to get at \nleast 50 queries per second (returning 100 messages each) under 2 \nseconds, even when there is millions of messages on SQS being inserted \ninto PG.We haven't done \nany performance tuning in the DB. With all that said, the question is:What can be done to ensure good query performance (UserID+ \ncountry) even when the bulk insert is running (low priority).We are limited to use AWS RDS at \nthe moment.Cheers", "msg_date": "Sun, 24 Dec 2017 20:30:27 -0500", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "I had an opportunity to perform insertion of 700MM rows into Aurora\nPostgresql, for which performance insights are available. Turns out, that\nthere are two stages of insert slowdown - first happens when max WAL\nbuffers limit reached, second happens around 1 hour after.\n\nThe first stage cuts insert performance twice, and WALWrite lock is main\nbottleneck. I think WAL just can't sync changes log that fast, so it waits\nwhile older log entries are flushed. This creates both read and write IO.\n\nThe second stage is unique to Aurora/RDS and is characterized by excessive\nread data locks and total read IO. I couldn't figure out why does it read\nso much in a write only process, and AWS support didn't answer yet.\n\nSo, for you, try to throttle inserts so WAL is never overfilled and you\ndon't experience WALWrite locks, and then increase wal buffers to max.\n\n24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:\n\nHi there,\n\nWe are testing a new application to try to find performance issues.\n\nAWS RDS m4.large 500GB storage (SSD)\n\nOne table only, called Messages:\n\nUuid\nCountry (ISO)\nRole (Text)\nUser id (Text)\nGroupId (integer)\nChannel (text)\nTitle (Text)\nPayload (JSON, up to 20kb)\nStarts_in (UTC)\nExpires_in (UTC)\nSeen (boolean)\nDeleted (boolean)\nLastUpdate (UTC)\nCreated_by (UTC)\nCreated_in (UTC)\n\nIndexes:\n\nUUID (PK)\nUserID + Country (main index)\nLastUpdate\nGroupID\n\n\nWe inserted 160MM rows, around 2KB each. No partitioning.\n\nInsert started at around 3.000 inserts per second, but (as expected)\nstarted to slow down as the number of rows increased. In the end we got\naround 500 inserts per second.\n\nQueries by Userd_ID + Country took less than 2 seconds, but while the batch\ninsert was running the queries took over 20 seconds!!!\n\nWe had 20 Lambda getting messages from SQS and bulk inserting them into\nPostgresql.\n\nThe insert performance is important, but we would slow it down if needed in\norder to ensure a more flat query performance. (Below 2 seconds). Each\nquery (userId + country) returns around 100 diferent messages, which are\nfiltered and order by the synchronous Lambda function. So we don't do any\nspecial filtering, sorting, ordering or full text search in Postgres. 
In\nsome ways we use it more like a glorified file system. :)\n\nWe are going to limit the number of lambda workers to 1 or 2, and then run\nsome queries concurrently to see if the query performance is not affect too\nmuch. We aim to get at least 50 queries per second (returning 100 messages\neach) under 2 seconds, even when there is millions of messages on SQS being\ninserted into PG.\n\nWe haven't done any performance tuning in the DB.\n\nWith all that said, the question is:\n\nWhat can be done to ensure good query performance (UserID+ country) even\nwhen the bulk insert is running (low priority).\n\nWe are limited to use AWS RDS at the moment.\n\nCheers\n\nI had an opportunity to perform insertion of 700MM rows into Aurora Postgresql, for which performance insights are available. Turns out, that there are two stages of insert slowdown - first happens when max WAL buffers limit reached, second happens around 1 hour after.The first stage cuts insert performance twice, and WALWrite lock is main bottleneck. I think WAL just can't sync changes log that fast, so it waits while older log entries are flushed. This creates both read and write IO.The second stage is unique to Aurora/RDS and is characterized by excessive read data locks and total read IO. I couldn't figure out why does it read so much in a write only process, and AWS support didn't answer yet.So, for you, try to throttle inserts so WAL is never overfilled and you don't experience WALWrite locks, and then increase wal buffers to max.24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. 
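As a starting point for the WAL-side tuning suggested above, the current settings and the most common wait events can be inspected from any client; on RDS the values themselves are changed through the parameter group rather than postgresql.conf. The parameter list below is just a reasonable subset to look at, not an exhaustive one.

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('wal_buffers', 'min_wal_size', 'max_wal_size',
               'checkpoint_timeout', 'checkpoint_completion_target');

-- sample this repeatedly while the bulk load is running (PostgreSQL 9.6+)
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY 1, 2
ORDER BY 3 DESC;
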
With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Mon, 25 Dec 2017 04:59:27 +0200", "msg_from": "Danylo Hlynskyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Thanks for the clarification guys.\n\nIt will be super useful. After trying this I'll post the results!\n\nMerry Christmas!\n\nEm 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]>\nescreveu:\n\n> I had an opportunity to perform insertion of 700MM rows into Aurora\n> Postgresql, for which performance insights are available. Turns out, that\n> there are two stages of insert slowdown - first happens when max WAL\n> buffers limit reached, second happens around 1 hour after.\n>\n> The first stage cuts insert performance twice, and WALWrite lock is main\n> bottleneck. I think WAL just can't sync changes log that fast, so it waits\n> while older log entries are flushed. This creates both read and write IO.\n>\n> The second stage is unique to Aurora/RDS and is characterized by excessive\n> read data locks and total read IO. I couldn't figure out why does it read\n> so much in a write only process, and AWS support didn't answer yet.\n>\n> So, for you, try to throttle inserts so WAL is never overfilled and you\n> don't experience WALWrite locks, and then increase wal buffers to max.\n>\n> 24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:\n>\n> Hi there,\n>\n> We are testing a new application to try to find performance issues.\n>\n> AWS RDS m4.large 500GB storage (SSD)\n>\n> One table only, called Messages:\n>\n> Uuid\n> Country (ISO)\n> Role (Text)\n> User id (Text)\n> GroupId (integer)\n> Channel (text)\n> Title (Text)\n> Payload (JSON, up to 20kb)\n> Starts_in (UTC)\n> Expires_in (UTC)\n> Seen (boolean)\n> Deleted (boolean)\n> LastUpdate (UTC)\n> Created_by (UTC)\n> Created_in (UTC)\n>\n> Indexes:\n>\n> UUID (PK)\n> UserID + Country (main index)\n> LastUpdate\n> GroupID\n>\n>\n> We inserted 160MM rows, around 2KB each. No partitioning.\n>\n> Insert started at around 3.000 inserts per second, but (as expected)\n> started to slow down as the number of rows increased. In the end we got\n> around 500 inserts per second.\n>\n> Queries by Userd_ID + Country took less than 2 seconds, but while the\n> batch insert was running the queries took over 20 seconds!!!\n>\n> We had 20 Lambda getting messages from SQS and bulk inserting them into\n> Postgresql.\n>\n> The insert performance is important, but we would slow it down if needed\n> in order to ensure a more flat query performance. (Below 2 seconds). Each\n> query (userId + country) returns around 100 diferent messages, which are\n> filtered and order by the synchronous Lambda function. So we don't do any\n> special filtering, sorting, ordering or full text search in Postgres. In\n> some ways we use it more like a glorified file system. :)\n>\n> We are going to limit the number of lambda workers to 1 or 2, and then run\n> some queries concurrently to see if the query performance is not affect too\n> much. 
We aim to get at least 50 queries per second (returning 100 messages\n> each) under 2 seconds, even when there is millions of messages on SQS being\n> inserted into PG.\n>\n> We haven't done any performance tuning in the DB.\n>\n> With all that said, the question is:\n>\n> What can be done to ensure good query performance (UserID+ country) even\n> when the bulk insert is running (low priority).\n>\n> We are limited to use AWS RDS at the moment.\n>\n> Cheers\n>\n>\n>\n>\n\nThanks for the clarification guys.It will be super useful. After trying this I'll post the results!Merry Christmas!Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]> escreveu:I had an opportunity to perform insertion of 700MM rows into Aurora Postgresql, for which performance insights are available. Turns out, that there are two stages of insert slowdown - first happens when max WAL buffers limit reached, second happens around 1 hour after.The first stage cuts insert performance twice, and WALWrite lock is main bottleneck. I think WAL just can't sync changes log that fast, so it waits while older log entries are flushed. This creates both read and write IO.The second stage is unique to Aurora/RDS and is characterized by excessive read data locks and total read IO. I couldn't figure out why does it read so much in a write only process, and AWS support didn't answer yet.So, for you, try to throttle inserts so WAL is never overfilled and you don't experience WALWrite locks, and then increase wal buffers to max.24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. 
With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Mon, 25 Dec 2017 01:10:28 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Hello,\n\nWe are still seeing queries (by UserID + UserCountry) taking over 2\nseconds, even when there is no batch insert going on at the same time.\n\nEach query returns from 100 to 200 messagens, which would be a 400kb pay\nload, which is super tiny.\n\nI don't know what else I can do with the limitations (m4.large), 167MM\nrows, almost 500GB database and 29GB of indexes (all indexes).\n\nI am probably to optimistic, but I was expecting queries (up to 50 queries\nper second) to return (99th) under 500ms or even less, as the index is\nsimple, there is no aggregation or join involves.\n\nAny suggestion?\n\nThe table structure:\nCREATE TABLE public.card\n(\n id character(36) NOT NULL,\n user_id character varying(40) NOT NULL,\n user_country character(2) NOT NULL,\n user_channel character varying(40),\n user_role character varying(40),\n created_by_system_key character(36) NOT NULL,\n created_by_username character varying(40),\n created_at timestamp with time zone NOT NULL,\n last_modified_at timestamp with time zone NOT NULL,\n date_start timestamp with time zone NOT NULL,\n date_end timestamp with time zone NOT NULL,\n payload json NOT NULL,\n tags character varying(500),\n menu character varying(50),\n deleted boolean NOT NULL,\n campaign character varying(500) NOT NULL,\n correlation_id character varying(50),\n PRIMARY KEY (id)\n);\n\nCREATE INDEX idx_user_country\n ON public.card USING btree\n (user_id COLLATE pg_catalog.\"default\", user_country COLLATE\npg_catalog.\"default\");\n\nCREATE INDEX idx_last_modified_at\n ON public.card USING btree\n (last_modified_at ASC NULLS LAST);\n\nCREATE INDEX idx_campaign\n ON public.card USING btree\n (campaign ASC NULLS LAST)\n\nThe EXPLAIN\n\n'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\nwidth=922)'\n' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n'BR'::bpchar))'\n\n\n\nEm 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:\n\n> Thanks for the clarification guys.\n>\n> It will be super useful. After trying this I'll post the results!\n>\n> Merry Christmas!\n>\n> Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]>\n> escreveu:\n>\n>> I had an opportunity to perform insertion of 700MM rows into Aurora\n>> Postgresql, for which performance insights are available. Turns out, that\n>> there are two stages of insert slowdown - first happens when max WAL\n>> buffers limit reached, second happens around 1 hour after.\n>>\n>> The first stage cuts insert performance twice, and WALWrite lock is main\n>> bottleneck. I think WAL just can't sync changes log that fast, so it waits\n>> while older log entries are flushed. This creates both read and write IO.\n>>\n>> The second stage is unique to Aurora/RDS and is characterized by\n>> excessive read data locks and total read IO. I couldn't figure out why does\n>> it read so much in a write only process, and AWS support didn't answer yet.\n>>\n>> So, for you, try to throttle inserts so WAL is never overfilled and you\n>> don't experience WALWrite locks, and then increase wal buffers to max.\n>>\n>> 24 груд. 2017 р. 
21:51 \"Jean Baro\" <[email protected]> пише:\n>>\n>> Hi there,\n>>\n>> We are testing a new application to try to find performance issues.\n>>\n>> AWS RDS m4.large 500GB storage (SSD)\n>>\n>> One table only, called Messages:\n>>\n>> Uuid\n>> Country (ISO)\n>> Role (Text)\n>> User id (Text)\n>> GroupId (integer)\n>> Channel (text)\n>> Title (Text)\n>> Payload (JSON, up to 20kb)\n>> Starts_in (UTC)\n>> Expires_in (UTC)\n>> Seen (boolean)\n>> Deleted (boolean)\n>> LastUpdate (UTC)\n>> Created_by (UTC)\n>> Created_in (UTC)\n>>\n>> Indexes:\n>>\n>> UUID (PK)\n>> UserID + Country (main index)\n>> LastUpdate\n>> GroupID\n>>\n>>\n>> We inserted 160MM rows, around 2KB each. No partitioning.\n>>\n>> Insert started at around 3.000 inserts per second, but (as expected)\n>> started to slow down as the number of rows increased. In the end we got\n>> around 500 inserts per second.\n>>\n>> Queries by Userd_ID + Country took less than 2 seconds, but while the\n>> batch insert was running the queries took over 20 seconds!!!\n>>\n>> We had 20 Lambda getting messages from SQS and bulk inserting them into\n>> Postgresql.\n>>\n>> The insert performance is important, but we would slow it down if needed\n>> in order to ensure a more flat query performance. (Below 2 seconds). Each\n>> query (userId + country) returns around 100 diferent messages, which are\n>> filtered and order by the synchronous Lambda function. So we don't do any\n>> special filtering, sorting, ordering or full text search in Postgres. In\n>> some ways we use it more like a glorified file system. :)\n>>\n>> We are going to limit the number of lambda workers to 1 or 2, and then\n>> run some queries concurrently to see if the query performance is not affect\n>> too much. We aim to get at least 50 queries per second (returning 100\n>> messages each) under 2 seconds, even when there is millions of messages on\n>> SQS being inserted into PG.\n>>\n>> We haven't done any performance tuning in the DB.\n>>\n>> With all that said, the question is:\n>>\n>> What can be done to ensure good query performance (UserID+ country) even\n>> when the bulk insert is running (low priority).\n>>\n>> We are limited to use AWS RDS at the moment.\n>>\n>> Cheers\n>>\n>>\n>>\n>>\n\nHello,We are still seeing queries  (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time.Each query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny.I don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes).I am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return  (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves.Any suggestion?The table structure:CREATE TABLE public.card(    id character(36) NOT NULL,    user_id character varying(40) NOT NULL,    user_country character(2) NOT NULL,    user_channel character varying(40),    user_role character varying(40),    created_by_system_key character(36) NOT NULL,    created_by_username character varying(40),    created_at timestamp with time zone NOT NULL,    last_modified_at timestamp with time zone NOT NULL,    date_start timestamp with time zone NOT NULL,    date_end timestamp with time zone NOT NULL,    payload json NOT NULL,    tags character varying(500),    menu character varying(50),    deleted boolean NOT NULL,    campaign character varying(500) NOT NULL,    correlation_id character 
varying(50),    PRIMARY KEY (id));CREATE INDEX idx_user_country    ON public.card USING btree    (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\"); CREATE INDEX idx_last_modified_at    ON public.card USING btree    (last_modified_at ASC NULLS LAST); CREATE INDEX idx_campaign    ON public.card USING btree    (campaign ASC NULLS LAST)The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:Thanks for the clarification guys.It will be super useful. After trying this I'll post the results!Merry Christmas!Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]> escreveu:I had an opportunity to perform insertion of 700MM rows into Aurora Postgresql, for which performance insights are available. Turns out, that there are two stages of insert slowdown - first happens when max WAL buffers limit reached, second happens around 1 hour after.The first stage cuts insert performance twice, and WALWrite lock is main bottleneck. I think WAL just can't sync changes log that fast, so it waits while older log entries are flushed. This creates both read and write IO.The second stage is unique to Aurora/RDS and is characterized by excessive read data locks and total read IO. I couldn't figure out why does it read so much in a write only process, and AWS support didn't answer yet.So, for you, try to throttle inserts so WAL is never overfilled and you don't experience WALWrite locks, and then increase wal buffers to max.24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. 
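Given the 29GB of indexes mentioned above, one quick sanity check is how large each index on the table actually is and whether it is ever scanned; the sketch below assumes the table name from the schema shown earlier in this message.

SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'card'
ORDER BY pg_relation_size(indexrelid) DESC;

An index that is large but never scanned (idx_scan = 0) is pure insert overhead and can usually be dropped.
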
With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Wed, 27 Dec 2017 13:13:31 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro <[email protected]> wrote:\n\n> Hello,\n>\n> We are still seeing queries (by UserID + UserCountry) taking over 2\n> seconds, even when there is no batch insert going on at the same time.\n>\n> Each query returns from 100 to 200 messagens, which would be a 400kb pay\n> load, which is super tiny.\n>\n> I don't know what else I can do with the limitations (m4.large), 167MM\n> rows, almost 500GB database and 29GB of indexes (all indexes).\n>\n> I am probably to optimistic, but I was expecting queries (up to 50 queries\n> per second) to return (99th) under 500ms or even less, as the index is\n> simple, there is no aggregation or join involves.\n>\n\n> Any suggestion?\n>\n\n\nAlthough you aren't querying by it, if your id column is actually a UUID,\nas a best practice I strongly recommend switching the column type to uuid.\nIf you do query by the primary key, a uuid query will be much faster than a\nchar or varchar column query.\n\nYou'll need to submit a more complete explain plan than what you have below.\n Try using:\n explain (analyze, costs, verbose, buffers) select ...\n\n\n\n> The table structure:\n> CREATE TABLE public.card\n> (\n> id character(36) NOT NULL,\n> user_id character varying(40) NOT NULL,\n> user_country character(2) NOT NULL,\n> user_channel character varying(40),\n> user_role character varying(40),\n> created_by_system_key character(36) NOT NULL,\n> created_by_username character varying(40),\n> created_at timestamp with time zone NOT NULL,\n> last_modified_at timestamp with time zone NOT NULL,\n> date_start timestamp with time zone NOT NULL,\n> date_end timestamp with time zone NOT NULL,\n> payload json NOT NULL,\n> tags character varying(500),\n> menu character varying(50),\n> deleted boolean NOT NULL,\n> campaign character varying(500) NOT NULL,\n> correlation_id character varying(50),\n> PRIMARY KEY (id)\n> );\n>\n> CREATE INDEX idx_user_country\n> ON public.card USING btree\n> (user_id COLLATE pg_catalog.\"default\", user_country COLLATE\n> pg_catalog.\"default\");\n>\n> CREATE INDEX idx_last_modified_at\n> ON public.card USING btree\n> (last_modified_at ASC NULLS LAST);\n>\n> CREATE INDEX idx_campaign\n> ON public.card USING btree\n> (campaign ASC NULLS LAST)\n>\n> The EXPLAIN\n>\n> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\n> width=922)'\n> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n> 'BR'::bpchar))'\n>\n>\n>\n> Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:\n>\n>> Thanks for the clarification guys.\n>>\n>> It will be super useful. After trying this I'll post the results!\n>>\n>> Merry Christmas!\n>>\n>> Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]>\n>> escreveu:\n>>\n>>> I had an opportunity to perform insertion of 700MM rows into Aurora\n>>> Postgresql, for which performance insights are available. 
Turns out, that\n>>> there are two stages of insert slowdown - first happens when max WAL\n>>> buffers limit reached, second happens around 1 hour after.\n>>>\n>>> The first stage cuts insert performance twice, and WALWrite lock is main\n>>> bottleneck. I think WAL just can't sync changes log that fast, so it waits\n>>> while older log entries are flushed. This creates both read and write IO.\n>>>\n>>> The second stage is unique to Aurora/RDS and is characterized by\n>>> excessive read data locks and total read IO. I couldn't figure out why does\n>>> it read so much in a write only process, and AWS support didn't answer yet.\n>>>\n>>> So, for you, try to throttle inserts so WAL is never overfilled and you\n>>> don't experience WALWrite locks, and then increase wal buffers to max.\n>>>\n>>> 24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:\n>>>\n>>> Hi there,\n>>>\n>>> We are testing a new application to try to find performance issues.\n>>>\n>>> AWS RDS m4.large 500GB storage (SSD)\n>>>\n>>> One table only, called Messages:\n>>>\n>>> Uuid\n>>> Country (ISO)\n>>> Role (Text)\n>>> User id (Text)\n>>> GroupId (integer)\n>>> Channel (text)\n>>> Title (Text)\n>>> Payload (JSON, up to 20kb)\n>>> Starts_in (UTC)\n>>> Expires_in (UTC)\n>>> Seen (boolean)\n>>> Deleted (boolean)\n>>> LastUpdate (UTC)\n>>> Created_by (UTC)\n>>> Created_in (UTC)\n>>>\n>>> Indexes:\n>>>\n>>> UUID (PK)\n>>> UserID + Country (main index)\n>>> LastUpdate\n>>> GroupID\n>>>\n>>>\n>>> We inserted 160MM rows, around 2KB each. No partitioning.\n>>>\n>>> Insert started at around 3.000 inserts per second, but (as expected)\n>>> started to slow down as the number of rows increased. In the end we got\n>>> around 500 inserts per second.\n>>>\n>>> Queries by Userd_ID + Country took less than 2 seconds, but while the\n>>> batch insert was running the queries took over 20 seconds!!!\n>>>\n>>> We had 20 Lambda getting messages from SQS and bulk inserting them into\n>>> Postgresql.\n>>>\n>>> The insert performance is important, but we would slow it down if needed\n>>> in order to ensure a more flat query performance. (Below 2 seconds). Each\n>>> query (userId + country) returns around 100 diferent messages, which are\n>>> filtered and order by the synchronous Lambda function. So we don't do any\n>>> special filtering, sorting, ordering or full text search in Postgres. In\n>>> some ways we use it more like a glorified file system. :)\n>>>\n>>> We are going to limit the number of lambda workers to 1 or 2, and then\n>>> run some queries concurrently to see if the query performance is not affect\n>>> too much. 
We aim to get at least 50 queries per second (returning 100\n>>> messages each) under 2 seconds, even when there is millions of messages on\n>>> SQS being inserted into PG.\n>>>\n>>> We haven't done any performance tuning in the DB.\n>>>\n>>> With all that said, the question is:\n>>>\n>>> What can be done to ensure good query performance (UserID+ country) even\n>>> when the bulk insert is running (low priority).\n>>>\n>>> We are limited to use AWS RDS at the moment.\n>>>\n>>> Cheers\n>>>\n>>>\n>>>\n>>>\n\nOn Wed, Dec 27, 2017 at 10:13 AM, Jean Baro <[email protected]> wrote:Hello,We are still seeing queries  (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time.Each query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny.I don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes).I am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return  (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves.Any suggestion?Although you aren't querying by it, if your id column is actually a UUID, as a best practice I strongly recommend switching the column type to uuid.  If you do query by the primary key, a uuid query will be much faster than a char or varchar column query.You'll need to submit a more complete explain plan than what you have below.  Try using:       explain (analyze, costs, verbose, buffers) select ...The table structure:CREATE TABLE public.card(    id character(36) NOT NULL,    user_id character varying(40) NOT NULL,    user_country character(2) NOT NULL,    user_channel character varying(40),    user_role character varying(40),    created_by_system_key character(36) NOT NULL,    created_by_username character varying(40),    created_at timestamp with time zone NOT NULL,    last_modified_at timestamp with time zone NOT NULL,    date_start timestamp with time zone NOT NULL,    date_end timestamp with time zone NOT NULL,    payload json NOT NULL,    tags character varying(500),    menu character varying(50),    deleted boolean NOT NULL,    campaign character varying(500) NOT NULL,    correlation_id character varying(50),    PRIMARY KEY (id));CREATE INDEX idx_user_country    ON public.card USING btree    (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\"); CREATE INDEX idx_last_modified_at    ON public.card USING btree    (last_modified_at ASC NULLS LAST); CREATE INDEX idx_campaign    ON public.card USING btree    (campaign ASC NULLS LAST)The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:Thanks for the clarification guys.It will be super useful. After trying this I'll post the results!Merry Christmas!Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]> escreveu:I had an opportunity to perform insertion of 700MM rows into Aurora Postgresql, for which performance insights are available. Turns out, that there are two stages of insert slowdown - first happens when max WAL buffers limit reached, second happens around 1 hour after.The first stage cuts insert performance twice, and WALWrite lock is main bottleneck. 
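If the id column really does hold UUID strings, the switch to the native uuid type suggested above could look like the sketch below. It is only a sketch: it assumes every stored value parses as a UUID, and on a table of this size it rewrites the data and holds a heavy lock, so it would normally be scheduled for a maintenance window or done via a new column.

ALTER TABLE public.card
    ALTER COLUMN id TYPE uuid USING id::text::uuid;
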
I think WAL just can't sync changes log that fast, so it waits while older log entries are flushed. This creates both read and write IO.The second stage is unique to Aurora/RDS and is characterized by excessive read data locks and total read IO. I couldn't figure out why does it read so much in a write only process, and AWS support didn't answer yet.So, for you, try to throttle inserts so WAL is never overfilled and you don't experience WALWrite locks, and then increase wal buffers to max.24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Wed, 27 Dec 2017 10:38:02 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Hi Jean,\n\n \n\nI’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing complex genomic data, hundreds of millions of rows in a table and “normal” queries that used an index returned in 50-100ms, depending on the query…so this isn’t a Postgres issue per se. \n\n \n\nYour table and index structures look ok, although in PG, use the “text” datatype instead of varchar, it is the optimized type for storing string data of any size (even a 2 char country code). Since you have 2 such columns that you’ve indexed and are querying for, there is a chance you’ll see an improvement. \n\n \n\nI have not yet used Aurora or RDS for any large data…it sure seems like the finger could be pointing there, but it isn’t clear what mechanism in Aurora could be creating the slowness.\n\n \n\nIs there a possibility of you creating the same db on a normal EC2 instance with PG installed and running the same test? 
There is nothing else obvious about your data/structure that could result in such terrible performance.\n\n \n\nMike Sofen\n\n \n\nFrom: Jean Baro [mailto:[email protected]] \nSent: Wednesday, December 27, 2017 7:14 AM\n\n\n\nHello,\n\n \n\nWe are still seeing queries (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time.\n\n \n\nEach query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny.\n\n \n\nI don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes).\n\n \n\nI am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves.\n\n \n\nAny suggestion?\n\n \n\nThe table structure:\n\nCREATE TABLE public.card\n\n(\n\n id character(36) NOT NULL,\n\n user_id character varying(40) NOT NULL,\n\n user_country character(2) NOT NULL,\n\n user_channel character varying(40),\n\n user_role character varying(40),\n\n created_by_system_key character(36) NOT NULL,\n\n created_by_username character varying(40),\n\n created_at timestamp with time zone NOT NULL,\n\n last_modified_at timestamp with time zone NOT NULL,\n\n date_start timestamp with time zone NOT NULL,\n\n date_end timestamp with time zone NOT NULL,\n\n payload json NOT NULL,\n\n tags character varying(500),\n\n menu character varying(50),\n\n deleted boolean NOT NULL,\n\n campaign character varying(500) NOT NULL,\n\n correlation_id character varying(50),\n\n PRIMARY KEY (id)\n\n);\n\n \n\nCREATE INDEX idx_user_country\n\n ON public.card USING btree\n\n (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\");\n\n \n\nCREATE INDEX idx_last_modified_at\n\n ON public.card USING btree\n\n (last_modified_at ASC NULLS LAST);\n\n \n\nCREATE INDEX idx_campaign\n\n ON public.card USING btree\n\n (campaign ASC NULLS LAST)\n\n \n\nThe EXPLAIN\n\n \n\n'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460 width=922)'\n\n' Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'\n\n \n\n \n\n \n\nEm 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected] <mailto:[email protected]> > escreveu:\n\nThanks for the clarification guys.\n\n \n\nIt will be super useful. After trying this I'll post the results!\n\n \n\nMerry Christmas! \n\n \n\n\nHi Jean, I’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing complex genomic data, hundreds of millions of rows in a table and “normal” queries that used an index returned in 50-100ms, depending on the query…so this isn’t a Postgres issue per se.   Your table and index structures look ok, although in PG, use the “text” datatype instead of varchar, it is the optimized type for storing string data of any size (even a 2 char country code).  Since you have 2 such columns that you’ve indexed and are querying for, there is a chance you’ll see an improvement.   I have not yet used Aurora or RDS for any large data…it sure seems like the finger could be pointing there, but it isn’t clear what mechanism in Aurora could be creating the slowness. Is there a possibility of you creating the same db on a normal EC2 instance with PG installed and running the same test?  There is nothing else obvious about your data/structure that could result in such terrible performance. 
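A sketch of the varchar-to-text change for the two indexed columns might look like this; it is untested here, and the locking and rewrite behaviour should be verified on a copy of the data before touching a 500GB table.

ALTER TABLE public.card
    ALTER COLUMN user_id      TYPE text,
    ALTER COLUMN user_country TYPE text;
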
Mike Sofen From: Jean Baro [mailto:[email protected]] Sent: Wednesday, December 27, 2017 7:14 AMHello, We are still seeing queries  (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time. Each query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny. I don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes). I am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return  (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves. Any suggestion? The table structure:CREATE TABLE public.card(    id character(36) NOT NULL,    user_id character varying(40) NOT NULL,    user_country character(2) NOT NULL,    user_channel character varying(40),    user_role character varying(40),    created_by_system_key character(36) NOT NULL,    created_by_username character varying(40),    created_at timestamp with time zone NOT NULL,    last_modified_at timestamp with time zone NOT NULL,    date_start timestamp with time zone NOT NULL,    date_end timestamp with time zone NOT NULL,    payload json NOT NULL,    tags character varying(500),    menu character varying(50),    deleted boolean NOT NULL,    campaign character varying(500) NOT NULL,    correlation_id character varying(50),    PRIMARY KEY (id)); CREATE INDEX idx_user_country    ON public.card USING btree    (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\"); CREATE INDEX idx_last_modified_at    ON public.card USING btree    (last_modified_at ASC NULLS LAST); CREATE INDEX idx_campaign    ON public.card USING btree    (campaign ASC NULLS LAST) The EXPLAIN 'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'   Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:Thanks for the clarification guys. It will be super useful. After trying this I'll post the results! Merry Christmas!", "msg_date": "Wed, 27 Dec 2017 07:58:38 -0800", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Batch insert heavily affecting query performance." }, { "msg_contents": ">\n> The EXPLAIN\n>\n> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\n> width=922)'\n> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n> 'BR'::bpchar))'\n>\n\nShow 3 runs of the full explain analyze plan on given condition so that we\ncan also see cold vs warm cache performance.\n\nThere is definitely something wrong as there is no way a query like that\nshould take 500ms. Your instinct is correct there.\n\nThe EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.There is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.", "msg_date": "Wed, 27 Dec 2017 10:02:31 -0600", "msg_from": "Jeremy Finzel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." 
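For the three runs requested above, something along these lines from psql would show cold- versus warm-cache behaviour; the selected columns and the user id are just the example values already used in the thread.

\timing on
EXPLAIN (ANALYZE, BUFFERS, COSTS, VERBOSE)
SELECT id, payload, last_modified_at
FROM public.card
WHERE user_id = '4684'
  AND user_country = 'BR';
-- run the same statement two more times and compare the execution times
-- and the "Buffers: shared hit=... read=..." figures across the runs
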
}, { "msg_contents": "Thanks Rick,\n\nWe are now partitioning the DB (one table) into 100 sets of data.\n\nAs soon as we finish this new experiment we will provide a better EXPLAIN\nas you suggested. :)\n\nEm 27 de dez de 2017 13:38, \"Rick Otten\" <[email protected]>\nescreveu:\n\n\n\nOn Wed, Dec 27, 2017 at 10:13 AM, Jean Baro <[email protected]> wrote:\n\n> Hello,\n>\n> We are still seeing queries (by UserID + UserCountry) taking over 2\n> seconds, even when there is no batch insert going on at the same time.\n>\n> Each query returns from 100 to 200 messagens, which would be a 400kb pay\n> load, which is super tiny.\n>\n> I don't know what else I can do with the limitations (m4.large), 167MM\n> rows, almost 500GB database and 29GB of indexes (all indexes).\n>\n> I am probably to optimistic, but I was expecting queries (up to 50 queries\n> per second) to return (99th) under 500ms or even less, as the index is\n> simple, there is no aggregation or join involves.\n>\n\n> Any suggestion?\n>\n\n\nAlthough you aren't querying by it, if your id column is actually a UUID,\nas a best practice I strongly recommend switching the column type to uuid.\nIf you do query by the primary key, a uuid query will be much faster than a\nchar or varchar column query.\n\nYou'll need to submit a more complete explain plan than what you have below.\n Try using:\n explain (analyze, costs, verbose, buffers) select ...\n\n\n\n> The table structure:\n> CREATE TABLE public.card\n> (\n> id character(36) NOT NULL,\n> user_id character varying(40) NOT NULL,\n> user_country character(2) NOT NULL,\n> user_channel character varying(40),\n> user_role character varying(40),\n> created_by_system_key character(36) NOT NULL,\n> created_by_username character varying(40),\n> created_at timestamp with time zone NOT NULL,\n> last_modified_at timestamp with time zone NOT NULL,\n> date_start timestamp with time zone NOT NULL,\n> date_end timestamp with time zone NOT NULL,\n> payload json NOT NULL,\n> tags character varying(500),\n> menu character varying(50),\n> deleted boolean NOT NULL,\n> campaign character varying(500) NOT NULL,\n> correlation_id character varying(50),\n> PRIMARY KEY (id)\n> );\n>\n> CREATE INDEX idx_user_country\n> ON public.card USING btree\n> (user_id COLLATE pg_catalog.\"default\", user_country COLLATE\n> pg_catalog.\"default\");\n>\n> CREATE INDEX idx_last_modified_at\n> ON public.card USING btree\n> (last_modified_at ASC NULLS LAST);\n>\n> CREATE INDEX idx_campaign\n> ON public.card USING btree\n> (campaign ASC NULLS LAST)\n>\n> The EXPLAIN\n>\n> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\n> width=922)'\n> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n> 'BR'::bpchar))'\n>\n>\n>\n> Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:\n>\n>> Thanks for the clarification guys.\n>>\n>> It will be super useful. After trying this I'll post the results!\n>>\n>> Merry Christmas!\n>>\n>> Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]>\n>> escreveu:\n>>\n>>> I had an opportunity to perform insertion of 700MM rows into Aurora\n>>> Postgresql, for which performance insights are available. Turns out, that\n>>> there are two stages of insert slowdown - first happens when max WAL\n>>> buffers limit reached, second happens around 1 hour after.\n>>>\n>>> The first stage cuts insert performance twice, and WALWrite lock is main\n>>> bottleneck. 
I think WAL just can't sync changes log that fast, so it waits\n>>> while older log entries are flushed. This creates both read and write IO.\n>>>\n>>> The second stage is unique to Aurora/RDS and is characterized by\n>>> excessive read data locks and total read IO. I couldn't figure out why does\n>>> it read so much in a write only process, and AWS support didn't answer yet.\n>>>\n>>> So, for you, try to throttle inserts so WAL is never overfilled and you\n>>> don't experience WALWrite locks, and then increase wal buffers to max.\n>>>\n>>> 24 груд. 2017 р. 21:51 \"Jean Baro\" <[email protected]> пише:\n>>>\n>>> Hi there,\n>>>\n>>> We are testing a new application to try to find performance issues.\n>>>\n>>> AWS RDS m4.large 500GB storage (SSD)\n>>>\n>>> One table only, called Messages:\n>>>\n>>> Uuid\n>>> Country (ISO)\n>>> Role (Text)\n>>> User id (Text)\n>>> GroupId (integer)\n>>> Channel (text)\n>>> Title (Text)\n>>> Payload (JSON, up to 20kb)\n>>> Starts_in (UTC)\n>>> Expires_in (UTC)\n>>> Seen (boolean)\n>>> Deleted (boolean)\n>>> LastUpdate (UTC)\n>>> Created_by (UTC)\n>>> Created_in (UTC)\n>>>\n>>> Indexes:\n>>>\n>>> UUID (PK)\n>>> UserID + Country (main index)\n>>> LastUpdate\n>>> GroupID\n>>>\n>>>\n>>> We inserted 160MM rows, around 2KB each. No partitioning.\n>>>\n>>> Insert started at around 3.000 inserts per second, but (as expected)\n>>> started to slow down as the number of rows increased. In the end we got\n>>> around 500 inserts per second.\n>>>\n>>> Queries by Userd_ID + Country took less than 2 seconds, but while the\n>>> batch insert was running the queries took over 20 seconds!!!\n>>>\n>>> We had 20 Lambda getting messages from SQS and bulk inserting them into\n>>> Postgresql.\n>>>\n>>> The insert performance is important, but we would slow it down if needed\n>>> in order to ensure a more flat query performance. (Below 2 seconds). Each\n>>> query (userId + country) returns around 100 diferent messages, which are\n>>> filtered and order by the synchronous Lambda function. So we don't do any\n>>> special filtering, sorting, ordering or full text search in Postgres. In\n>>> some ways we use it more like a glorified file system. :)\n>>>\n>>> We are going to limit the number of lambda workers to 1 or 2, and then\n>>> run some queries concurrently to see if the query performance is not affect\n>>> too much. We aim to get at least 50 queries per second (returning 100\n>>> messages each) under 2 seconds, even when there is millions of messages on\n>>> SQS being inserted into PG.\n>>>\n>>> We haven't done any performance tuning in the DB.\n>>>\n>>> With all that said, the question is:\n>>>\n>>> What can be done to ensure good query performance (UserID+ country) even\n>>> when the bulk insert is running (low priority).\n>>>\n>>> We are limited to use AWS RDS at the moment.\n>>>\n>>> Cheers\n>>>\n>>>\n>>>\n>>>\n\nThanks Rick,We are now partitioning the DB (one table) into 100 sets of data.As soon as we finish this new experiment we will provide a better EXPLAIN as you suggested. 
:)Em 27 de dez de 2017 13:38, \"Rick Otten\" <[email protected]> escreveu:On Wed, Dec 27, 2017 at 10:13 AM, Jean Baro <[email protected]> wrote:Hello,We are still seeing queries  (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time.Each query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny.I don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes).I am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return  (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves.Any suggestion?Although you aren't querying by it, if your id column is actually a UUID, as a best practice I strongly recommend switching the column type to uuid.  If you do query by the primary key, a uuid query will be much faster than a char or varchar column query.You'll need to submit a more complete explain plan than what you have below.  Try using:       explain (analyze, costs, verbose, buffers) select ...The table structure:CREATE TABLE public.card(    id character(36) NOT NULL,    user_id character varying(40) NOT NULL,    user_country character(2) NOT NULL,    user_channel character varying(40),    user_role character varying(40),    created_by_system_key character(36) NOT NULL,    created_by_username character varying(40),    created_at timestamp with time zone NOT NULL,    last_modified_at timestamp with time zone NOT NULL,    date_start timestamp with time zone NOT NULL,    date_end timestamp with time zone NOT NULL,    payload json NOT NULL,    tags character varying(500),    menu character varying(50),    deleted boolean NOT NULL,    campaign character varying(500) NOT NULL,    correlation_id character varying(50),    PRIMARY KEY (id));CREATE INDEX idx_user_country    ON public.card USING btree    (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\"); CREATE INDEX idx_last_modified_at    ON public.card USING btree    (last_modified_at ASC NULLS LAST); CREATE INDEX idx_campaign    ON public.card USING btree    (campaign ASC NULLS LAST)The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:Thanks for the clarification guys.It will be super useful. After trying this I'll post the results!Merry Christmas!Em 25 de dez de 2017 00:59, \"Danylo Hlynskyi\" <[email protected]> escreveu:I had an opportunity to perform insertion of 700MM rows into Aurora Postgresql, for which performance insights are available. Turns out, that there are two stages of insert slowdown - first happens when max WAL buffers limit reached, second happens around 1 hour after.The first stage cuts insert performance twice, and WALWrite lock is main bottleneck. I think WAL just can't sync changes log that fast, so it waits while older log entries are flushed. This creates both read and write IO.The second stage is unique to Aurora/RDS and is characterized by excessive read data locks and total read IO. I couldn't figure out why does it read so much in a write only process, and AWS support didn't answer yet.So, for you, try to throttle inserts so WAL is never overfilled and you don't experience WALWrite locks, and then increase wal buffers to max.24 груд. 2017 р. 
21:51 \"Jean Baro\" <[email protected]> пише:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)One table only, called Messages:UuidCountry  (ISO)Role (Text)User id  (Text)GroupId (integer)Channel (text)Title (Text)Payload (JSON, up to 20kb)Starts_in (UTC)Expires_in (UTC)Seen (boolean)Deleted (boolean)LastUpdate (UTC)Created_by (UTC)Created_in (UTC)Indexes:UUID (PK)UserID + Country (main index)LastUpdate GroupID We inserted 160MM rows, around 2KB each. No partitioning.Insert started at around  3.000 inserts per second, but (as expected) started to slow down as the number of rows increased.  In the end we got around 500 inserts per second.Queries by Userd_ID + Country took less than 2 seconds, but while the batch insert was running the queries took over 20 seconds!!!We had 20 Lambda getting messages from SQS and bulk inserting them into Postgresql. The insert performance is important, but we would slow it down if needed in order to ensure a more flat query performance. (Below 2 seconds). Each query (userId + country) returns around 100 diferent messages, which are filtered and order by the synchronous Lambda function. So we don't do any special filtering, sorting, ordering or full text search in Postgres. In some ways we use it more like a glorified file system. :)We are going to limit the number of lambda workers to 1 or 2, and then run some queries concurrently to see if the query performance is not affect too much. We aim to get at least 50 queries per second (returning 100 messages each) under 2 seconds, even when there is millions of messages on SQS being inserted into PG.We haven't done any performance tuning in the DB. With all that said, the question is:What can be done to ensure good query performance (UserID+ country) even when the bulk insert is running (low priority).We are limited to use AWS RDS at the moment.Cheers", "msg_date": "Wed, 27 Dec 2017 14:23:55 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro <[email protected]> wrote:\n\n> Hi there,\n>\n> We are testing a new application to try to find performance issues.\n>\n> AWS RDS m4.large 500GB storage (SSD)\n>\n\nIs that general purpose SSD, or provisioned IOPS SSD? If provisioned, what\nis the level of provisioning?\n\nCheers,\n\nJeff\n\nOn Sun, Dec 24, 2017 at 11:51 AM, Jean Baro <[email protected]> wrote:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)Is that general purpose SSD, or provisioned IOPS SSD?  If provisioned, what is the level of provisioning?Cheers,Jeff", "msg_date": "Wed, 27 Dec 2017 08:23:55 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." 
}, { "msg_contents": "Thanks Jeremy,\n\nWe will provide a more complete EXPLAIN as other people have suggested.\n\nI am glad we might end up with a much better performance (currently each\nquery takes around 2 seconds!).\n\nCheers\n\n\nEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:\n\n\n\n> The EXPLAIN\n>\n> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\n> width=922)'\n> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n> 'BR'::bpchar))'\n>\n\nShow 3 runs of the full explain analyze plan on given condition so that we\ncan also see cold vs warm cache performance.\n\nThere is definitely something wrong as there is no way a query like that\nshould take 500ms. Your instinct is correct there.\n\nThanks Jeremy,We will provide a more complete EXPLAIN as other people have suggested. I am glad we might end up with a much better performance (currently each query takes around 2 seconds!).CheersEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.There is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.", "msg_date": "Wed, 27 Dec 2017 14:34:32 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Thanks Mike,\n\nWe are using the standard RDS instance m4.large, it's not Aurora, which is\na much more powerful server (according to AWS).\n\nYes, we could install it on EC2, but it would take some extra effort from\nour side, it can be an investment though in case it will help us finding\nthe bottle neck, BUT after tuning the database it must run on RDS for\nproduction use. As the company I work for demands we run microseconds DB as\na managed service (RDS in this case).\n\nMike, what can we expect to see if we run PG on EC2? More logging? More\ntuning options? Let me know what your intention is so that I can convince\nother people on the team. But keep in mind in the end that payload should\nrun on RDS m4.large (500gb to 1TB of general purpose SSD).\n\nAgain, thanks a lot!\n\nEm 27 de dez de 2017 13:59, \"Mike Sofen\" <[email protected]> escreveu:\n\nHi Jean,\n\n\n\nI’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing\ncomplex genomic data, hundreds of millions of rows in a table and “normal”\nqueries that used an index returned in 50-100ms, depending on the query…so\nthis isn’t a Postgres issue per se.\n\n\n\nYour table and index structures look ok, although in PG, use the “text”\ndatatype instead of varchar, it is the optimized type for storing string\ndata of any size (even a 2 char country code). Since you have 2 such\ncolumns that you’ve indexed and are querying for, there is a chance you’ll\nsee an improvement.\n\n\n\nI have not yet used Aurora or RDS for any large data…it sure seems like the\nfinger could be pointing there, but it isn’t clear what mechanism in Aurora\ncould be creating the slowness.\n\n\n\nIs there a possibility of you creating the same db on a normal EC2 instance\nwith PG installed and running the same test? 
There is nothing else obvious\nabout your data/structure that could result in such terrible performance.\n\n\n\nMike Sofen\n\n\n\n*From:* Jean Baro [mailto:[email protected]]\n*Sent:* Wednesday, December 27, 2017 7:14 AM\n\nHello,\n\n\n\nWe are still seeing queries (by UserID + UserCountry) taking over 2\nseconds, even when there is no batch insert going on at the same time.\n\n\n\nEach query returns from 100 to 200 messagens, which would be a 400kb pay\nload, which is super tiny.\n\n\n\nI don't know what else I can do with the limitations (m4.large), 167MM\nrows, almost 500GB database and 29GB of indexes (all indexes).\n\n\n\nI am probably to optimistic, but I was expecting queries (up to 50 queries\nper second) to return (99th) under 500ms or even less, as the index is\nsimple, there is no aggregation or join involves.\n\n\n\nAny suggestion?\n\n\n\nThe table structure:\n\nCREATE TABLE public.card\n\n(\n\n id character(36) NOT NULL,\n\n user_id character varying(40) NOT NULL,\n\n user_country character(2) NOT NULL,\n\n user_channel character varying(40),\n\n user_role character varying(40),\n\n created_by_system_key character(36) NOT NULL,\n\n created_by_username character varying(40),\n\n created_at timestamp with time zone NOT NULL,\n\n last_modified_at timestamp with time zone NOT NULL,\n\n date_start timestamp with time zone NOT NULL,\n\n date_end timestamp with time zone NOT NULL,\n\n payload json NOT NULL,\n\n tags character varying(500),\n\n menu character varying(50),\n\n deleted boolean NOT NULL,\n\n campaign character varying(500) NOT NULL,\n\n correlation_id character varying(50),\n\n PRIMARY KEY (id)\n\n);\n\n\n\nCREATE INDEX idx_user_country\n\n ON public.card USING btree\n\n (user_id COLLATE pg_catalog.\"default\", user_country COLLATE\npg_catalog.\"default\");\n\n\n\nCREATE INDEX idx_last_modified_at\n\n ON public.card USING btree\n\n (last_modified_at ASC NULLS LAST);\n\n\n\nCREATE INDEX idx_campaign\n\n ON public.card USING btree\n\n (campaign ASC NULLS LAST)\n\n\n\nThe EXPLAIN\n\n\n\n'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\nwidth=922)'\n\n' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n'BR'::bpchar))'\n\n\n\n\n\n\n\nEm 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:\n\nThanks for the clarification guys.\n\n\n\nIt will be super useful. After trying this I'll post the results!\n\n\n\nMerry Christmas!\n\nThanks Mike, We are using the standard RDS instance m4.large, it's not Aurora, which is a much more powerful server  (according to AWS).Yes, we could install it on EC2, but it would take some extra effort from our side, it can be an investment though in case it will help us finding the bottle neck, BUT after tuning the database it must run on RDS for production use. As the company I work for demands we run microseconds DB as a managed service (RDS in this case).Mike, what can we expect to see if we run PG on EC2? More logging? More tuning options? Let me know what your intention is so that I can convince other people on the team. But keep in mind in the end that payload should run on RDS m4.large (500gb to 1TB of general purpose SSD).Again, thanks a lot!Em 27 de dez de 2017 13:59, \"Mike Sofen\" <[email protected]> escreveu:Hi Jean, I’ve used Postgres on a regular EC2 instance (an m4.xlarge), storing complex genomic data, hundreds of millions of rows in a table and “normal” queries that used an index returned in 50-100ms, depending on the query…so this isn’t a Postgres issue per se.   
Your table and index structures look ok, although in PG, use the “text” datatype instead of varchar, it is the optimized type for storing string data of any size (even a 2 char country code).  Since you have 2 such columns that you’ve indexed and are querying for, there is a chance you’ll see an improvement.   I have not yet used Aurora or RDS for any large data…it sure seems like the finger could be pointing there, but it isn’t clear what mechanism in Aurora could be creating the slowness. Is there a possibility of you creating the same db on a normal EC2 instance with PG installed and running the same test?  There is nothing else obvious about your data/structure that could result in such terrible performance. Mike Sofen From: Jean Baro [mailto:[email protected]] Sent: Wednesday, December 27, 2017 7:14 AMHello, We are still seeing queries  (by UserID + UserCountry) taking over 2 seconds, even when there is no batch insert going on at the same time. Each query returns from 100 to 200 messagens, which would be a 400kb pay load, which is super tiny. I don't know what else I can do with the limitations (m4.large), 167MM rows, almost 500GB database and 29GB of indexes (all indexes). I am probably to optimistic, but I was expecting queries (up to 50 queries per second) to return  (99th) under 500ms or even less, as the index is simple, there is no aggregation or join involves. Any suggestion? The table structure:CREATE TABLE public.card(    id character(36) NOT NULL,    user_id character varying(40) NOT NULL,    user_country character(2) NOT NULL,    user_channel character varying(40),    user_role character varying(40),    created_by_system_key character(36) NOT NULL,    created_by_username character varying(40),    created_at timestamp with time zone NOT NULL,    last_modified_at timestamp with time zone NOT NULL,    date_start timestamp with time zone NOT NULL,    date_end timestamp with time zone NOT NULL,    payload json NOT NULL,    tags character varying(500),    menu character varying(50),    deleted boolean NOT NULL,    campaign character varying(500) NOT NULL,    correlation_id character varying(50),    PRIMARY KEY (id)); CREATE INDEX idx_user_country    ON public.card USING btree    (user_id COLLATE pg_catalog.\"default\", user_country COLLATE pg_catalog.\"default\"); CREATE INDEX idx_last_modified_at    ON public.card USING btree    (last_modified_at ASC NULLS LAST); CREATE INDEX idx_campaign    ON public.card USING btree    (campaign ASC NULLS LAST) The EXPLAIN 'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'   Em 25 de dez de 2017 01:10, \"Jean Baro\" <[email protected]> escreveu:Thanks for the clarification guys. It will be super useful. After trying this I'll post the results! Merry Christmas!", "msg_date": "Wed, 27 Dec 2017 14:36:10 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Batch insert heavily affecting query performance." 
}, { "msg_contents": "General purpose, 500GB but we are planing to increase it to 1TB before\ngoing into production.\n\n500GB 1.500 iops (some burst of 3.000 iops)\n\n1TB 3.000 iops\n\nEm 27 de dez de 2017 14:23, \"Jeff Janes\" <[email protected]> escreveu:\n\n> On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro <[email protected]> wrote:\n>\n>> Hi there,\n>>\n>> We are testing a new application to try to find performance issues.\n>>\n>> AWS RDS m4.large 500GB storage (SSD)\n>>\n>\n> Is that general purpose SSD, or provisioned IOPS SSD? If provisioned,\n> what is the level of provisioning?\n>\n> Cheers,\n>\n> Jeff\n>\n\nGeneral purpose, 500GB but we are planing to increase it to 1TB before going into production.500GB 1.500 iops  (some burst of 3.000 iops)1TB 3.000 iopsEm 27 de dez de 2017 14:23, \"Jeff Janes\" <[email protected]> escreveu:On Sun, Dec 24, 2017 at 11:51 AM, Jean Baro <[email protected]> wrote:Hi there,We are testing a new application to try to find performance issues.AWS RDS m4.large 500GB storage (SSD)Is that general purpose SSD, or provisioned IOPS SSD?  If provisioned, what is the level of provisioning?Cheers,Jeff", "msg_date": "Wed, 27 Dec 2017 14:37:21 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "Sorry guys,\n\nThe performance problem is not caused by PG.\n\n'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66\nrows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)'\n' Output: id, user_id, user_country, user_channel, user_role,\ncreated_by_system_key, created_by_username, created_at, last_modified_at,\ndate_start, date_end, payload, tags, menu, deleted, campaign,\ncorrelation_id'\n' Index Cond: (((old_card.user_id)::text = '1234'::text) AND\n(old_card.user_country = 'BR'::bpchar))'\n' Buffers: shared hit=11 read=138 written=35'\n'Planning time: 7.748 ms'\n'Execution time: 76.755 ms'\n\n77ms on an 8GB database with 167MM rows and almost 500GB in size is\namazing!!\n\nNow we are investigating other bottlenecks, is it the creation of a new\nconnection to PG (no connection poller at the moment, like PGBouncer), is\nit the Lambda start up time? Is it the network performance between PG and\nLambda?\n\nI am sorry for wasting your time guys, it helped us to find the problem\nthough, even if it wasn't a PG problem.\n\nBTW, what a performance! I am impressed.\n\nThanks PG community!\n\nEm 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected]> escreveu:\n\n> Thanks Jeremy,\n>\n> We will provide a more complete EXPLAIN as other people have suggested.\n>\n> I am glad we might end up with a much better performance (currently each\n> query takes around 2 seconds!).\n>\n> Cheers\n>\n>\n> Em 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:\n>\n>\n>\n>> The EXPLAIN\n>>\n>> 'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460\n>> width=922)'\n>> ' Index Cond: (((user_id)::text = '4684'::text) AND (user_country =\n>> 'BR'::bpchar))'\n>>\n>\n> Show 3 runs of the full explain analyze plan on given condition so that we\n> can also see cold vs warm cache performance.\n>\n> There is definitely something wrong as there is no way a query like that\n> should take 500ms. Your instinct is correct there.\n>\n>\n>\n\nSorry guys,The performance problem is not caused by PG. 
'Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)''  Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id''  Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))''  Buffers: shared hit=11 read=138 written=35''Planning time: 7.748 ms''Execution time: 76.755 ms'77ms on an 8GB database with 167MM rows and almost 500GB in size is amazing!!Now we are investigating other bottlenecks, is it the creation of a new connection to PG  (no connection poller at the moment, like PGBouncer), is it the Lambda start up time? Is it the network performance  between PG and Lambda?I am sorry for wasting your time guys, it helped us to find the problem though, even if it wasn't a PG problem. BTW, what a performance! I am impressed. Thanks PG community! Em 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected]> escreveu:Thanks Jeremy,We will provide a more complete EXPLAIN as other people have suggested. I am glad we might end up with a much better performance (currently each query takes around 2 seconds!).CheersEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.There is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.", "msg_date": "Wed, 27 Dec 2017 15:02:56 -0200", "msg_from": "Jean Baro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "On 27/12/17 18:02, Jean Baro wrote:\n> Sorry guys,\n>\n> The performance problem is not caused by PG.\n>\n> 'Index Scan using idx_user_country on public.old_card  \n> (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 \n> rows=200 loops=1)'\n> '  Output: id, user_id, user_country, user_channel, user_role, \n> created_by_system_key, created_by_username, created_at, \n> last_modified_at, date_start, date_end, payload, tags, menu, deleted, \n> campaign, correlation_id'\n> '  Index Cond: (((old_card.user_id)::text = '1234'::text) AND \n> (old_card.user_country = 'BR'::bpchar))'\n> '  Buffers: shared hit=11 read=138 written=35'\n> 'Planning time: 7.748 ms'\n> 'Execution time: 76.755 ms'\n>\n> 77ms on an 8GB database with 167MM rows and almost 500GB in size is \n> amazing!!\n\n\n     gp2 disks are of *variable* performance. Once you exhaust the I/O \ncredits, you are capped to a baseline IOPS that are proportional to the \nsize. I guess you would experience low performance in this scenario \nsince your disk is not big. And actually performance numbers with gp2 \ndisks are unreliable as you don't know in which credit status you are.\n\n     Benchmark with provisioned iops to get a right picture of the \ndesired performance.\n\n\n     Cheers,\n\n     Álvaro\n\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n\n>\n> Now we are investigating other bottlenecks, is it the creation of a \n> new connection to PG  (no connection poller at the moment, like \n> PGBouncer), is it the Lambda start up time? 
Is it the network \n> performance  between PG and Lambda?\n>\n> I am sorry for wasting your time guys, it helped us to find the \n> problem though, even if it wasn't a PG problem.\n>\n> BTW, what a performance! I am impressed.\n>\n> Thanks PG community!\n>\n> Em 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected] \n> <mailto:[email protected]>> escreveu:\n>\n> Thanks Jeremy,\n>\n> We will provide a more complete EXPLAIN as other people have\n> suggested.\n>\n> I am glad we might end up with a much better performance\n> (currently each query takes around 2 seconds!).\n>\n> Cheers\n>\n>\n> Em 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]\n> <mailto:[email protected]>> escreveu:\n>\n>\n>\n> The EXPLAIN\n>\n> 'Index Scan using idx_user_country on card\n> (cost=0.57..1854.66 rows=460 width=922)'\n> '  Index Cond: (((user_id)::text = '4684'::text) AND\n> (user_country = 'BR'::bpchar))'\n>\n>\n> Show 3 runs of the full explain analyze plan on given\n> condition so that we can also see cold vs warm cache performance.\n>\n> There is definitely something wrong as there is no way a query\n> like that should take 500ms.  Your instinct is correct there.\n>\n>\n\n\n\n\n\n\n\n\n\nOn 27/12/17 18:02, Jean Baro wrote:\n\n\nSorry guys,\n \n\nThe performance problem is not caused by PG. \n\n\n\n'Index Scan using idx_user_country on\n public.old_card  (cost=0.57..1854.66 rows=460 width=922)\n (actual time=3.442..76.606 rows=200 loops=1)'\n'  Output: id, user_id, user_country,\n user_channel, user_role, created_by_system_key,\n created_by_username, created_at, last_modified_at,\n date_start, date_end, payload, tags, menu, deleted,\n campaign, correlation_id'\n'  Index Cond: (((old_card.user_id)::text =\n '1234'::text) AND (old_card.user_country = 'BR'::bpchar))'\n'  Buffers: shared hit=11 read=138 written=35'\n'Planning time: 7.748 ms'\n'Execution time: 76.755 ms'\n\n\n77ms on an 8GB database with 167MM rows and\n almost 500GB in size is amazing!!\n\n\n\n\n\n     gp2 disks are of *variable* performance. Once you exhaust the\n I/O credits, you are capped to a baseline IOPS that are proportional\n to the size. I guess you would experience low performance in this\n scenario since your disk is not big. And actually performance\n numbers with gp2 disks are unreliable as you don't know in which\n credit status you are.\n\n     Benchmark with provisioned iops to get a right picture of the\n desired performance.\n\n\n     Cheers,\n\n     Álvaro\n\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n\n\n\n\n\nNow we are investigating other bottlenecks, is it\n the creation of a new connection to PG  (no connection\n poller at the moment, like PGBouncer), is it the Lambda\n start up time? Is it the network performance  between PG and\n Lambda?\n\n\nI am sorry for wasting your time guys, it helped\n us to find the problem though, even if it wasn't a PG\n problem. \n\n\nBTW, what a performance! I am impressed. \n\n\nThanks PG community! \n\n\n\nEm 27 de dez de 2017 14:34, \"Jean Baro\"\n <[email protected]>\n escreveu:\n\n\nThanks Jeremy,\n \n\nWe will provide a more complete EXPLAIN\n as other people have suggested. 
\n\n\nI am glad we might end up with a much\n better performance (currently each query takes around\n 2 seconds!).\n\n\nCheers\n\n\nEm 27 de dez de 2017 14:02,\n \"Jeremy Finzel\" <[email protected]>\n escreveu:\n\n\n\n\n\n\n\n\n\n\nThe EXPLAIN\n\n\n\n'Index Scan using\n idx_user_country on card \n (cost=0.57..1854.66 rows=460\n width=922)'\n'  Index Cond:\n (((user_id)::text =\n '4684'::text) AND (user_country\n = 'BR'::bpchar))'\n\n\n\n\n\n\n\n\nShow 3 runs of the full\n explain analyze plan on given condition so\n that we can also see cold vs warm cache\n performance.\n\n\nThere is definitely\n something wrong as there is no way a query\n like that should take 500ms.  Your instinct is\n correct there.", "msg_date": "Wed, 27 Dec 2017 18:09:28 +0100", "msg_from": "Alvaro Hernandez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "In my experience, that 77ms will stay quite constant even if your db grew to > 1TB. Postgres IS amazing. BTW, for a db, you should always have provisioned IOPS or else your performance can vary wildly, since the SSDs are shared.\n\n \n\nRe Lambda: another team is working on a new web app using Lambda calls and they were also experiencing horrific performance, just like yours (2 seconds per call). They discovered it was the Lambda connection/spin-up time causing the problem. They solved it by keeping several Lambda’s “hot”, for an instant connection…solved the problem, the last I heard. Google for that topic, you’ll find solutions.\n\n \n\nMike\n\n \n\nFrom: Jean Baro [mailto:[email protected]] \nSent: Wednesday, December 27, 2017 9:03 AM\n\n\n\nSorry guys,\n\n \n\nThe performance problem is not caused by PG. \n\n \n\n'Index Scan using idx_user_country on public.old_card (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)'\n\n' Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id'\n\n' Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))'\n\n' Buffers: shared hit=11 read=138 written=35'\n\n'Planning time: 7.748 ms'\n\n'Execution time: 76.755 ms'\n\n \n\n77ms on an 8GB database with 167MM rows and almost 500GB in size is amazing!!\n\n \n\nNow we are investigating other bottlenecks, is it the creation of a new connection to PG (no connection poller at the moment, like PGBouncer), is it the Lambda start up time? Is it the network performance between PG and Lambda?\n\n \n\nI am sorry for wasting your time guys, it helped us to find the problem though, even if it wasn't a PG problem. \n\n \n\nBTW, what a performance! I am impressed. \n\n \n\nThanks PG community! \n\n \n\nEm 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected] <mailto:[email protected]> > escreveu:\n\nThanks Jeremy,\n\n \n\nWe will provide a more complete EXPLAIN as other people have suggested. 
\n\n \n\nI am glad we might end up with a much better performance (currently each query takes around 2 seconds!).\n\n \n\nCheers\n\n \n\n \n\nEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected] <mailto:[email protected]> > escreveu:\n\n \n\n \n\nThe EXPLAIN\n\n \n\n'Index Scan using idx_user_country on card (cost=0.57..1854.66 rows=460 width=922)'\n\n' Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'\n\n \n\nShow 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.\n\n \n\nThere is definitely something wrong as there is no way a query like that should take 500ms. Your instinct is correct there.\n\n \n\n\nIn my experience, that 77ms will stay quite constant even if your db grew to > 1TB.  Postgres IS amazing.  BTW, for a db, you should always have provisioned IOPS or else your performance can vary wildly, since the SSDs are shared. Re Lambda:  another team is working on a new web app using Lambda calls and they were also experiencing horrific performance, just like yours (2 seconds per call).  They discovered it was the Lambda connection/spin-up time causing the problem.  They solved it by keeping several Lambda’s “hot”, for an instant connection…solved the problem, the last I heard.  Google for that topic, you’ll find solutions. Mike From: Jean Baro [mailto:[email protected]] Sent: Wednesday, December 27, 2017 9:03 AMSorry guys, The performance problem is not caused by PG.  'Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)''  Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id''  Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))''  Buffers: shared hit=11 read=138 written=35''Planning time: 7.748 ms''Execution time: 76.755 ms' 77ms on an 8GB database with 167MM rows and almost 500GB in size is amazing!! Now we are investigating other bottlenecks, is it the creation of a new connection to PG  (no connection poller at the moment, like PGBouncer), is it the Lambda start up time? Is it the network performance  between PG and Lambda? I am sorry for wasting your time guys, it helped us to find the problem though, even if it wasn't a PG problem.  BTW, what a performance! I am impressed.  Thanks PG community!  Em 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected]> escreveu:Thanks Jeremy, We will provide a more complete EXPLAIN as other people have suggested.  I am glad we might end up with a much better performance (currently each query takes around 2 seconds!). Cheers  Em 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:  The EXPLAIN 'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))' Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance. There is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.", "msg_date": "Wed, 27 Dec 2017 09:10:33 -0800", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Batch insert heavily affecting query performance." 
}, { "msg_contents": "Jean,\nIt is very likely you are running out of IOPS with that size of server. We have several Postgres databases running at AWS. We consistently run out of IOPS on our development servers due to the types queries and sizing of our development databases. I would check the AWS monitoring graphs to determine the cause. We typically see low CPU and high IOPS just prior to our degraded performance. Our production environment runs provisioned IOPS to avoid this very issue.\nRegards, David \n\n From: Jean Baro <[email protected]>\n To: Jeremy Finzel <[email protected]> \nCc: Danylo Hlynskyi <[email protected]>; [email protected]\n Sent: Wednesday, December 27, 2017 11:03 AM\n Subject: Re: Batch insert heavily affecting query performance.\n \nSorry guys,\nThe performance problem is not caused by PG. \n'Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)''  Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id''  Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))''  Buffers: shared hit=11 read=138 written=35''Planning time: 7.748 ms''Execution time: 76.755 ms'\n77ms on an 8GB database with 167MM rows and almost 500GB in size is amazing!!\nNow we are investigating other bottlenecks, is it the creation of a new connection to PG  (no connection poller at the moment, like PGBouncer), is it the Lambda start up time? Is it the network performance  between PG and Lambda?\nI am sorry for wasting your time guys, it helped us to find the problem though, even if it wasn't a PG problem. \nBTW, what a performance! I am impressed. \nThanks PG community! \nEm 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected]> escreveu:\n\nThanks Jeremy,\nWe will provide a more complete EXPLAIN as other people have suggested. \nI am glad we might end up with a much better performance (currently each query takes around 2 seconds!).\nCheers\n\nEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:\n\n\n\n\nThe EXPLAIN\n'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'\n\nShow 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.\nThere is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.\n\n\n\n\n \nJean,It is very likely you are running out of IOPS with that size of server. We have several Postgres databases running at AWS. We consistently run out of IOPS on our development servers due to the types queries and sizing of our development databases. I would check the AWS monitoring graphs to determine the cause. We typically see low CPU and high IOPS just prior to our degraded performance. Our production environment runs provisioned IOPS to avoid this very issue.Regards, David  From: Jean Baro <[email protected]> To: Jeremy Finzel <[email protected]> Cc: Danylo Hlynskyi <[email protected]>; [email protected] Sent: Wednesday, December 27, 2017 11:03 AM Subject: Re: Batch insert heavily affecting query performance. Sorry guys,The performance problem is not caused by PG. 
'Index Scan using idx_user_country on public.old_card  (cost=0.57..1854.66 rows=460 width=922) (actual time=3.442..76.606 rows=200 loops=1)''  Output: id, user_id, user_country, user_channel, user_role, created_by_system_key, created_by_username, created_at, last_modified_at, date_start, date_end, payload, tags, menu, deleted, campaign, correlation_id''  Index Cond: (((old_card.user_id)::text = '1234'::text) AND (old_card.user_country = 'BR'::bpchar))''  Buffers: shared hit=11 read=138 written=35''Planning time: 7.748 ms''Execution time: 76.755 ms'77ms on an 8GB database with 167MM rows and almost 500GB in size is amazing!!Now we are investigating other bottlenecks, is it the creation of a new connection to PG  (no connection poller at the moment, like PGBouncer), is it the Lambda start up time? Is it the network performance  between PG and Lambda?I am sorry for wasting your time guys, it helped us to find the problem though, even if it wasn't a PG problem. BTW, what a performance! I am impressed. Thanks PG community! Em 27 de dez de 2017 14:34, \"Jean Baro\" <[email protected]> escreveu:Thanks Jeremy,We will provide a more complete EXPLAIN as other people have suggested. I am glad we might end up with a much better performance (currently each query takes around 2 seconds!).CheersEm 27 de dez de 2017 14:02, \"Jeremy Finzel\" <[email protected]> escreveu:The EXPLAIN'Index Scan using idx_user_country on card  (cost=0.57..1854.66 rows=460 width=922)''  Index Cond: (((user_id)::text = '4684'::text) AND (user_country = 'BR'::bpchar))'Show 3 runs of the full explain analyze plan on given condition so that we can also see cold vs warm cache performance.There is definitely something wrong as there is no way a query like that should take 500ms.  Your instinct is correct there.", "msg_date": "Wed, 27 Dec 2017 17:15:31 +0000 (UTC)", "msg_from": "David Miller <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." }, { "msg_contents": "On Wed, Dec 27, 2017 at 2:10 PM, Mike Sofen <[email protected]> wrote:\n\n> In my experience, that 77ms will stay quite constant even if your db grew\n> to > 1TB. Postgres IS amazing. BTW, for a db, you should always have\n> provisioned IOPS or else your performance can vary wildly, since the SSDs\n> are shared.\n>\n>\n>\n> Re Lambda: another team is working on a new web app using Lambda calls\n> and they were also experiencing horrific performance, just like yours (2\n> seconds per call). They discovered it was the Lambda connection/spin-up\n> time causing the problem. They solved it by keeping several Lambda’s\n> “hot”, for an instant connection…solved the problem, the last I heard.\n> Google for that topic, you’ll find solutions.\n>\n\nYou should try to implement an internal connection pool in your lambda.\n\nLambda functions are reused. You have no guarantees as to how long these\nprocesses will live, but they will live for more than one request. So if\nyou keep a persistent connection in your lambda code, the first invocation\nmay be slow, but further invocations will be fast. Lambda will try to batch\nseveral calls at once. In fact, you can usually configure batching in the\nevent source to try to maximize this effect.\n\nIn my experience, your lambda will be most probably network-bound. 
Increase\nthe lambda's memory allocation, to get a bigger chunk of the available\nnetwork bandwidth (why they decided to call that \"memory\" nobody will ever\nbe able to tell).\n\nOn Wed, Dec 27, 2017 at 2:10 PM, Mike Sofen <[email protected]> wrote:In my experience, that 77ms will stay quite constant even if your db grew to > 1TB.  Postgres IS amazing.  BTW, for a db, you should always have provisioned IOPS or else your performance can vary wildly, since the SSDs are shared. Re Lambda:  another team is working on a new web app using Lambda calls and they were also experiencing horrific performance, just like yours (2 seconds per call).  They discovered it was the Lambda connection/spin-up time causing the problem.  They solved it by keeping several Lambda’s “hot”, for an instant connection…solved the problem, the last I heard.  Google for that topic, you’ll find solutions.You should try to implement an internal connection pool in your lambda.Lambda functions are reused. You have no guarantees as to how long these processes will live, but they will live for more than one request. So if you keep a persistent connection in your lambda code, the first invocation may be slow, but further invocations will be fast. Lambda will try to batch several calls at once. In fact, you can usually configure batching in the event source to try to maximize this effect.In my experience, your lambda will be most probably network-bound. Increase the lambda's memory allocation, to get a bigger chunk of the available network bandwidth (why they decided to call that \"memory\" nobody will ever be able to tell).", "msg_date": "Tue, 9 Jan 2018 16:45:19 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Batch insert heavily affecting query performance." } ]
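To make the suggestions from the thread above concrete, a minimal SQL sketch follows, reusing the card table posted earlier. The query shape and literals are assumptions, and on a table this size each ALTER takes an exclusive lock and can rewrite the whole table, so this is an illustration rather than a migration script.

-- Assumed sketch: store the primary key as a native uuid and the string columns as text,
-- as suggested in the thread (expect locks and a possible full-table rewrite):
ALTER TABLE public.card ALTER COLUMN id TYPE uuid USING id::uuid;
ALTER TABLE public.card ALTER COLUMN user_id TYPE text;
ALTER TABLE public.card ALTER COLUMN user_country TYPE text;

-- The fuller plan output requested in the thread (the literals are placeholders):
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
SELECT * FROM public.card
WHERE user_id = '4684' AND user_country = 'BR';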
[ { "msg_contents": "Question on large tables…\n\n\nWhen should one consider table partitioning vs. just stuffing 10 million rows into one table?\n\nI currently have CDR’s that are injected into a table at the rate of over 100,000 a day, which is large.\n\n\nAt some point I’ll want to prune these records out, so being able to just drop or truncate the table in one shot makes child table partitions attractive.\n\n\nFrom a pure data warehousing standpoint, what are the do’s/don’t of keeping such large tables?\n\nOther notes…\n- This table is never updated, only appended (CDR’s)\n- Right now daily SQL called to delete records older than X days. (costly, purging ~100k records at a time)\n\n\n\n--\ninoc.net!rblayzor\nXMPP: rblayzor.AT.inoc.net\nPGP: https://inoc.net/~rblayzor/\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 27 Dec 2017 19:54:23 -0500", "msg_from": "Robert Blayzor <[email protected]>", "msg_from_op": true, "msg_subject": "Table performance with millions of rows" }, { "msg_contents": "On Wed, Dec 27, 2017 at 07:54:23PM -0500, Robert Blayzor wrote:\n> Question on large tables…\n> \n> When should one consider table partitioning vs. just stuffing 10 million rows into one table?\n\nIMO, whenever constraint exclusion, DROP vs DELETE, or seq scan on individual\nchildren justify the minor administrative overhead of partitioning. Note that\npartitioning may be implemented as direct insertion into child tables, or may\ninvolve triggers or rules.\n\n> I currently have CDR’s that are injected into a table at the rate of over\n> 100,000 a day, which is large.\n> \n> At some point I’ll want to prune these records out, so being able to just\n> drop or truncate the table in one shot makes child table partitions\n> attractive.\n\nThat's one of the major use cases for partitioning (DROP rather than DELETE and\nthus avoiding any following vacuum+analyze).\nhttps://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW\n\nJustin\n\n", "msg_date": "Wed, 27 Dec 2017 19:20:09 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table performance with millions of rows (partitioning)" }, { "msg_contents": "On Dec 27, 2017, at 8:20 PM, Justin Pryzby <[email protected]> wrote:\n> \n> That's one of the major use cases for partitioning (DROP rather than DELETE and\n> thus avoiding any following vacuum+analyze).\n> https://www.postgresql.org/docs/10/static/ddl-partitioning.html#DDL-PARTITIONING-OVERVIEW\n\n\nThat’s the plan to partition and I can easily change the code to insert directly into the child tables.\n\nRight now, I was going to use date ranges (per month) based on a timestamp.\n\nBut could I just create 12 child tables, one for each month instead of creating one for Year+month ?\n\nie: instead of:\n\n (CHECK (ts >= DATE ‘2017-12-01' AND ts < DATE ‘2018-01-01’))\n\nuse:\n\n (CHECK (EXTRACT(MONTH FROM ts) = 12))\n\n\nI’ll never need more than the least six months, so I’ll just truncate the older child tables. 
By the time the data wraps around, the child table will be empty.\n\n\nI’m not even sure if the above CHECK (w/ EXTRACT) instead of just looking for a date range is valid.\n\n\n\n\n", "msg_date": "Wed, 27 Dec 2017 20:27:08 -0500", "msg_from": "Robert Blayzor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table performance with millions of rows (partitioning)" }, { "msg_contents": "No, it's unfortunately not possible.\nDocumentation says in Caveats part:\n\n/Constraint exclusion only works when the query's WHERE clause contains\nconstants (or externally supplied parameters). For example, a comparison\nagainst a non-immutable function such as CURRENT_TIMESTAMP cannot be\noptimized, since the planner cannot know which partition the function value\nmight fall into at run time.\n\nKeep the partitioning constraints simple, else the planner may not be able\nto prove that partitions don't need to be visited. Use simple equality\nconditions for list partitioning, or simple range tests for range\npartitioning, as illustrated in the preceding examples. A good rule of thumb\nis that partitioning constraints should contain only comparisons of the\npartitioning column(s) to constants using B-tree-indexable operators./\n\nEven making a function in SQL or plpgsql and declaring it as immutable will\nnot help. Postgres will always check against all the partitions. It's not\nenough \"simple\" for the planner.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 27 Dec 2017 19:22:14 -0700 (MST)", "msg_from": "pinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table performance with millions of rows (partitioning)" } ]
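As a worked illustration of the range-style CHECK constraint recommended above (the table and column names are assumptions, since the thread only mentions a CDR table with a timestamp column):

-- Inheritance child holding one month of data; only a simple range check like this
-- is usable for constraint exclusion, per the documentation excerpt quoted above:
CREATE TABLE cdr_2017_12 (
    CHECK (ts >= DATE '2017-12-01' AND ts < DATE '2018-01-01')
) INHERITS (cdr);

-- With constraint_exclusion = partition (the default), a query with constant bounds
-- only visits the children whose constraints can match:
SELECT count(*) FROM cdr
WHERE ts >= DATE '2017-12-01' AND ts < DATE '2018-01-01';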
[ { "msg_contents": "The docs claim that the master table “should” be empty. It it possible to just create child tables off an existing master table with data, then just inserting data into the new child tables.\n\nTHe plan would be to keep data in the master table and purge it over time until it’s eventually empty, then drop the indexes as well.\n\nFully understanding that data needs to be placed in the right child tables. Data outside of those child ranges would remain as “old data” in the master table.\n\n\nJust trying to grab if that’s an acceptable migration of live data from a single large table and move into partitioning. Think of it as a very large table of cyclic data that ages out. New data in child tables while removing data from the master table over time.\n\n--\ninoc.net!rblayzor\nXMPP: rblayzor.AT.inoc.net\nPGP: https://inoc.net/~rblayzor/\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 29 Dec 2017 23:37:56 -0500", "msg_from": "Robert Blayzor <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning an existing table" }, { "msg_contents": "On Fri, Dec 29, 2017 at 11:37:56PM -0500, Robert Blayzor wrote:\n> The docs claim that the master table “should” be empty. It it possible to just create child tables off an existing master table with data, then just inserting data into the new child tables.\n> \n> THe plan would be to keep data in the master table and purge it over time until it’s eventually empty, then drop the indexes as well.\n> \n> Fully understanding that data needs to be placed in the right child tables. Data outside of those child ranges would remain as “old data” in the master table.\n> \n> Just trying to grab if that’s an acceptable migration of live data from a single large table and move into partitioning. Think of it as a very large table of cyclic data that ages out. New data in child tables while removing data from the master table over time.\n\nFor PG10 \"partitions\" (as in relkind='p') the parent is defined as empty\n(actually has no underlying storage).\n\nFor inheritance (available in and before PG10), the parent may be nonempty,\nwhich works fine, although someone else might find it unintuitive. (Does the\ndoc actually say \"should\" somewhere ?)\n\nYou almost certainly want child tables to have constraints, to allow\nconstraint_exclusion (which is the only reason one child table is more \"right\"\nthan any other, besides the associated pruning/retention schedule).\n\nSince you'll be running DELETE rather than DROP on the parent, you might\nconsider DELETE ONLY.. but it won't matter if your children's constraints are\nusable with DELETE's WHERE condition.\n\nAlso, note that autoanalyze doesn't know to analyze the PARENT's statistics\nwhen its children are INSERTED/DROPPED/etc. So I'd suggest to consider ANALYZE\neach parent following DROP of its children (or maybe on some more frequent\nschedule to handle inserted rows, too). Perhaps that should be included as a\nCAVEAT?\nhttps://www.postgresql.org/docs/10/static/ddl-inherit.html#DDL-INHERIT-CAVEATS\n\nJust curious: are your constraints/indices on starting time or ending time?\n\nBTW depending on your requirements, it may be possible to make pg_dump much\nmore efficient. For our data, it's reasonable to assume that a table is\n\"final\" if its constraints exclude data older than a few days ago, and it can\nbe permanently dumped and excluded from future, daily backups, which makes the\nbackups smaller and faster, and probably causes less cache churn, etc. 
But I\nimagine you might have different requirements, so that may be infeasible, or\nyou'd maybe have to track insertions, either via pg_stat_user_tables, or at the\napplication layer, and redump the relevant table.\n\nJustin\n\n", "msg_date": "Fri, 29 Dec 2017 23:38:21 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning an existing table" }, { "msg_contents": "On Dec 30, 2017, at 12:38 AM, Justin Pryzby <[email protected]> wrote:\n> For inheritance (available in and before PG10), the parent may be nonempty,\n> which works fine, although someone else might find it unintuitive. (Does the\n> doc actually say \"should\" somewhere ?)\n\nWell it doesn’t say should, but says “normally”..\n\n\"The parent table itself is normally empty; it exists just to represent the entire data set. …\n\n\n> Just curious: are your constraints/indices on starting time or ending time?\n\nYes, the child tables will be strictly on a months worth of data.\n\nCREATE TABLE table_201801\n (CHECK (ts >= DATE ‘2018-01-01' AND ts < DATE ‘2018-02-01'))\n INHERITS …\n\n\nThe application will insert directly into the child tables, so no need for triggers or rules.\n\n\n> BTW depending on your requirements, it may be possible to make pg_dump much\n> more efficient. For our data, it's reasonable to assume that a table is\n> \"final\" if its constraints exclude data older than a few days ago, and it can\n> be permanently dumped and excluded from future, daily backups, which makes the\n> backups smaller and faster, and probably causes less cache churn, etc. But I\n> imagine you might have different requirements, so that may be infeasible, or\n> you'd maybe have to track insertions, either via p\n\nThe idea is only only keep a # of months available for searching over a period of months. Those months could be 3 or more, up to a year, etc. But being able to just drop and entire child table for pruning is very attractive. Right now the average months data is about 2-3 million rows each. Data is just inserted and then only searched. Never updated…\n\nI also like the idea of skipping all this older data from a PGdump. We archive records inserted into these tables daily into cold storage. ie: export and compressed. So the data is saved cold. We dump the DB nightly also, but probably would make sense to skip anything outside of the newest child table. Just not sure how to make that happen, yet….\n\n\n\n\n\n", "msg_date": "Sat, 30 Dec 2017 09:19:05 -0500", "msg_from": "Robert Blayzor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning an existing table" }, { "msg_contents": "On Sat, Dec 30, 2017 at 09:19:05AM -0500, Robert Blayzor wrote:\n> On Dec 30, 2017, at 12:38 AM, Justin Pryzby <[email protected]> wrote:\n\n> > BTW depending on your requirements, it may be possible to make pg_dump much\n> > more efficient. For our data, it's reasonable to assume that a table is\n> > \"final\" if its constraints exclude data older than a few days ago, and it can\n> > be permanently dumped and excluded from future, daily backups, which makes the\n> > backups smaller and faster, and probably causes less cache churn, etc. But I\n> > imagine you might have different requirements, so that may be infeasible, or\n> > you'd maybe have to track insertions, either via p\n> \n> The idea is only only keep a # of months available for searching over a period of months. Those months could be 3 or more, up to a year, etc. 
But being able to just drop and entire child table for pruning is very attractive. Right now the average months data is about 2-3 million rows each. Data is just inserted and then only searched. Never updated…\n> \n> I also like the idea of skipping all this older data from a PGdump. We archive records inserted into these tables daily into cold storage. ie: export and compressed. So the data is saved cold. We dump the DB nightly also, but probably would make sense to skip anything outside of the newest child table. Just not sure how to make that happen, yet….\n\nFor us, I classify the tables as \"partitioned\" or \"not partitioned\" and\nsubdivide \"partitioned\" into \"recent\" or \"historic\" based on table names; but\nif you design it from scratch then you'd have the opportunity to keep a list of\npartitioned tables, their associated date range, date of most recent insertion,\nand most recent \"final\" backup.\n\nThis is the essence of it:\nsnap= ... SELECT pg_export_snapshot();\npg_dump --snap \"$snap\" -T \"$ptnreg\" -f nonpartitioned.new\npg_dump --snap \"$snap\" -t \"$recent\" -f recent.new\nloop around historic partitioned tables and run \"final\" pg_dump if it's been\n INSERTed more recently than it's been dumped.\nremove any \"final\" pg_dump not included in any existing backup (assuming you\n keep multiple copies on different rotation).\n\nNote that pg_dump -t/-T is different from \"egrep\" in a few special ways..\n\nJustin\n\n", "msg_date": "Sat, 30 Dec 2017 12:42:47 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning an existing table - efficient pg_dump" } ]
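A short sketch of the retention cycle discussed above, completing the CREATE TABLE fragment from the thread; the parent name master and the dropped month are placeholders:

CREATE TABLE table_201801 (
    CHECK (ts >= DATE '2018-01-01' AND ts < DATE '2018-02-01')
) INHERITS (master);

-- When a month ages out, drop its child and then ANALYZE the parent manually,
-- since autoanalyze does not refresh the parent's inheritance statistics by itself:
DROP TABLE table_201707;
ANALYZE master;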
[ { "msg_contents": "This topic is confusing to lots of people, usually including myself, so I'm\nhoping to clarify it at least to myself, and maybe provide a good reference or\ndoc update for others in the future.\n\nautovacuum/analyze automatically scans tables being inserted/updated/deleted\nand updates their statistics in pg_class and pg_statistic. Since PG 9.0 [0,1],\nANALYZE (can) include stats of child tables along with stats of the (ONLY)\nparent table. But, autoanalyze still doesn't know to analyze (typical) empty\nparent tables, which need to be manually ANALYZEd to include stats for their\nchildren.\n\n...which leaves one wondering: \"which stats are being used?, and why are we\nkeeping two and apparently sometimes not looking at both/either\" ?\n\nI think the explanation is this:\n - Parent table stats without children (pg_statistic.stainherit='f') [2] are\nused if you query SELECT ONLY). Simple enough.\n\n - Postgres uses rowcount estimate as the primary component of query planning.\nWhen planning a query involving a parent table, its rowcount estimate is\nobtained as the sum of the rowcounts for its child nodes (appendrels) - if a\ntable is excluded by query exclusion, it doesn't even show up in the plan, and\nif only a fraction of its rows are returned due to a restrictive clause, that's\nreflected in its rowcount estimate and in the estimate of the parent. So child\ntables need to be analyzed for their rowcount (and also for their column stats\nwhich affect rowcount).\n\n - But, column stats (ndistinct, most-common values, and histogram) are\nrelatively big, and there's nothing implemented (yet) to intelligently combine\nthem across child tables in a query. So postgres, having obtained a rowcount\nestimate for parent tables involved in a query, having determined how (or one\nway) to join the tables, needs to determine how many rows are expected to\nresult be output by a join, which uses on parent table's column stats\n(ndistinct, MCV list, histogram).\n\nIs that mostly right ?\n\nToward the future: maybe, with declarative partitioning, combining\nselectivities as in [3] is possible now without objectionable planning overhead\n(?)\n\nJustin\n\nReferences\n[0] https://www.postgresql.org/docs/9.0/static/release-9-0.html#AEN102560\n[1] https://www.postgresql.org/message-id/flat/2674.1262040064%40sss.pgh.pa.us#[email protected]\n[2] https://www.postgresql.org/docs/current/static/catalog-pg-statistic.html\n[3] https://www.postgresql.org/message-id/flat/7363.1426537103%40sss.pgh.pa.us#[email protected]\nmore:\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/[email protected]\nhttps://www.postgresql.org/message-id/29559.1287206562%40sss.pgh.pa.us\n\n", "msg_date": "Fri, 29 Dec 2017 23:56:30 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "analyze stats: child vs parent" } ]
[ { "msg_contents": "Hi,\n\n\nI try to restore a table on U16.04, but it's ten times slower than on U14.04. This is the definition of the table:\n\n\ntestdb=# \\d photos_searchlog\n\nTable \"public.photos_searchlog\"\n Column | Type | Collation | Nullable | Default\n----------+--------------------------+-----------+----------+----------------------------------------------\n id | integer | | not null | nextval('photos_searchlog_id_seq'::regclass)\n created | timestamp with time zone | | not null |\n updated | timestamp with time zone | | not null |\n lang | character varying(2) | | not null |\n q | character varying(255) | | not null |\n hits | integer | | not null |\n count | integer | | not null |\n ip_list | text | | not null |\n locked | boolean | | not null |\n ts_list | text | | not null |\n ts_count | integer | | not null |\nIndexes:\n \"photos_searchlog_pkey\" PRIMARY KEY, btree (id)\n \"photos_searchlog_lang_q_key\" UNIQUE CONSTRAINT, btree (lang, q)\n \"photos_searchlog_count\" btree (count)\n \"photos_searchlog_created\" btree (created)\n \"photos_searchlog_ts_count\" btree (ts_count)\n \"photos_searchlog_updated\" btree (updated)\n\nIt's only the statement ALTER TABLE ONLY photos_searchlog ADD CONSTRAINT photos_searchlog_lang_q_key UNIQUE (lang, q); which causes the delay. I use the default postgres configuration on the same hardware (/etc/postgresql/10/main/postgresql.conf). I tested different postgres versions, checked the locale and other settings but can not find any differences. I also tried with more or less data, but always the same result.\n\nDoes anybody have a clue what could cause the time difference?\n\nThanks\n\n\n\n\n\n\n\n\n\nHi,\n\n\nI try to restore a table on U16.04, but it's ten times slower than on U14.04. This is the definition\n of the table:\n\n\n\ntestdb=# \\d photos_searchlog\n\n\nTable \"public.photos_searchlog\"\n  Column  |           Type           | Collation | Nullable |                   Default                    \n----------+--------------------------+-----------+----------+----------------------------------------------\n id       | integer                  |           | not null | nextval('photos_searchlog_id_seq'::regclass)\n created  | timestamp with time zone |           | not null | \n updated  | timestamp with time zone |           | not null | \n lang     | character varying(2)     |           | not null | \n q        | character varying(255)   |           | not null | \n hits     | integer                  |           | not null | \n count    | integer                  |           | not null | \n ip_list  | text                     |           | not null | \n locked   | boolean                  |           | not null | \n ts_list  | text                     |           | not null | \n ts_count | integer                  |           | not null | \nIndexes:\n    \"photos_searchlog_pkey\" PRIMARY KEY, btree (id)\n    \"photos_searchlog_lang_q_key\" UNIQUE CONSTRAINT, btree (lang, q)\n    \"photos_searchlog_count\" btree (count)\n    \"photos_searchlog_created\" btree (created)\n    \"photos_searchlog_ts_count\" btree (ts_count)\n    \"photos_searchlog_updated\" btree (updated)\n\n\nIt's only the statement ALTER TABLE ONLY photos_searchlog ADD CONSTRAINT photos_searchlog_lang_q_key UNIQUE (lang, q); which causes the delay. I use the default postgres configuration on the same hardware (/etc/postgresql/10/main/postgresql.conf). I tested\n different postgres versions, checked the locale and other settings but can not find any differences. 
I also tried with more or less data, but always the same result.\n\n\n\nDoes anybody have a clue what could cause the time difference?\n\n\nThanks", "msg_date": "Tue, 2 Jan 2018 13:27:29 +0000", "msg_from": "Hans Braxmeier <[email protected]>", "msg_from_op": true, "msg_subject": "Restoring a table is ten times slower on Ubuntu 14.04 than on Ubuntu\n 16.04" }, { "msg_contents": "You are not providing too much info, its unclear to me whats actually slow.\nIf you can, try loading the data first and then create the indexes / constraints. that should be faster.\n\n> On 2 Jan 2018, at 15:27, Hans Braxmeier <[email protected]> wrote:\n> \n> Hi,\n> \n> I try to restore a table on U16.04, but it's ten times slower than on U14.04. This is the definition of the table:\n> \n> testdb=# \\d photos_searchlog\n> \n> Table \"public.photos_searchlog\"\n> Column | Type | Collation | Nullable | Default \n> ----------+--------------------------+-----------+----------+----------------------------------------------\n> id | integer | | not null | nextval('photos_searchlog_id_seq'::regclass)\n> created | timestamp with time zone | | not null | \n> updated | timestamp with time zone | | not null | \n> lang | character varying(2) | | not null | \n> q | character varying(255) | | not null | \n> hits | integer | | not null | \n> count | integer | | not null | \n> ip_list | text | | not null | \n> locked | boolean | | not null | \n> ts_list | text | | not null | \n> ts_count | integer | | not null | \n> Indexes:\n> \"photos_searchlog_pkey\" PRIMARY KEY, btree (id)\n> \"photos_searchlog_lang_q_key\" UNIQUE CONSTRAINT, btree (lang, q)\n> \"photos_searchlog_count\" btree (count)\n> \"photos_searchlog_created\" btree (created)\n> \"photos_searchlog_ts_count\" btree (ts_count)\n> \"photos_searchlog_updated\" btree (updated)\n> \n> It's only the statement ALTER TABLE ONLY photos_searchlog ADD CONSTRAINT photos_searchlog_lang_q_key UNIQUE (lang, q); which causes the delay. I use the default postgres configuration on the same hardware (/etc/postgresql/10/main/postgresql.conf). I tested different postgres versions, checked the locale and other settings but can not find any differences. I also tried with more or less data, but always the same result.\n> \n> Does anybody have a clue what could cause the time difference?\n> \n> Thanks\n\n\nYou are not providing too much info, its unclear to me whats actually slow.If you can, try loading the data first and then create the indexes / constraints. that should be faster.On 2 Jan 2018, at 15:27, Hans Braxmeier <[email protected]> wrote:Hi,I try to restore a table on U16.04, but it's ten times slower than on U14.04. 
This is the definition of the table:testdb=# \\d photos_searchlogTable \"public.photos_searchlog\"  Column  |           Type           | Collation | Nullable |                   Default                    ----------+--------------------------+-----------+----------+---------------------------------------------- id       | integer                  |           | not null | nextval('photos_searchlog_id_seq'::regclass) created  | timestamp with time zone |           | not null |  updated  | timestamp with time zone |           | not null |  lang     | character varying(2)     |           | not null |  q        | character varying(255)   |           | not null |  hits     | integer                  |           | not null |  count    | integer                  |           | not null |  ip_list  | text                     |           | not null |  locked   | boolean                  |           | not null |  ts_list  | text                     |           | not null |  ts_count | integer                  |           | not null | Indexes:    \"photos_searchlog_pkey\" PRIMARY KEY, btree (id)    \"photos_searchlog_lang_q_key\" UNIQUE CONSTRAINT, btree (lang, q)    \"photos_searchlog_count\" btree (count)    \"photos_searchlog_created\" btree (created)    \"photos_searchlog_ts_count\" btree (ts_count)    \"photos_searchlog_updated\" btree (updated)It's only the statement ALTER TABLE ONLY photos_searchlog ADD CONSTRAINT photos_searchlog_lang_q_key UNIQUE (lang, q); which causes the delay. I use the default postgres configuration on the same hardware (/etc/postgresql/10/main/postgresql.conf). I tested different postgres versions, checked the locale and other settings but can not find any differences. I also tried with more or less data, but always the same result.Does anybody have a clue what could cause the time difference?Thanks", "msg_date": "Tue, 2 Jan 2018 15:30:17 +0200", "msg_from": "Vasilis Ventirozos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restoring a table is ten times slower on Ubuntu 14.04 than on\n Ubuntu 16.04" } ]
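Two things worth checking when only the UNIQUE (lang, q) build is slow — both are guesses, since the post shows neither setting — are how much sort memory the index build gets and how expensive locale-aware string comparison is on each box, given that the key columns are varchar and the glibc shipped with 14.04 and 16.04 differs. A rough way to test both from psql, against the poster's own table:

    -- more sort memory for index builds, in this session only
    SET maintenance_work_mem = '1GB';
    \timing on

    -- rebuild the constraint and time it
    ALTER TABLE photos_searchlog DROP CONSTRAINT IF EXISTS photos_searchlog_lang_q_key;
    ALTER TABLE ONLY photos_searchlog
        ADD CONSTRAINT photos_searchlog_lang_q_key UNIQUE (lang, q);

    -- compare locale-aware vs. byte-wise sorting of the same key columns;
    -- a large gap between the two points at collation cost rather than I/O
    SELECT count(*) FROM (SELECT lang, q FROM photos_searchlog ORDER BY lang, q) AS s;
    SELECT count(*) FROM (SELECT lang, q FROM photos_searchlog
                          ORDER BY lang COLLATE "C", q COLLATE "C") AS s;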
[ { "msg_contents": "After reading this article about keys in relational databases, highlighted\non hacker news this morning:\nhttps://begriffs.com/posts/2018-01-01-sql-keys-in-depth.html\n\nI keep pondering the performance chart, regarding uuid insert, shown\ntowards the bottom of the article. I believe he was doing that test with\nPostgreSQL.\n\nMy understanding is that the performance is degrading because he has a\nbtree primary key index. Is it possible to try a hash index or some other\nindex type for a uuid primary key that would mitigate the performance issue\nhe is recording?\n\nAfter all, I can't think of any use case where I query for a \"range\" of\nuuid values. They are always exact matches. So a hash index would\npossibly be a really good fit.\n\nI have many tables, several with more than 1 billion rows, that use uuid's\nas the primary key. Many of those uuid's are generated off system, so I\ncan't play around with the uuid generation algorithm like he was doing.\n\nI'm hoping to move to PG 10 any day now, and can migrate the data with\nupdated index definitions if it will actually help performance in any way.\n(I'm always looking for ways to tweak the performance for the better any\nchance I get.)\n\nAfter reading this article about keys in relational databases, highlighted on hacker news this morning:https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.htmlI keep pondering the performance chart, regarding uuid insert, shown towards the bottom of the article.  I believe he was doing that test with PostgreSQL.My understanding is that the performance is degrading because he has a btree primary key index.  Is it possible to try a hash index or some other index type for a uuid primary key that would mitigate the performance issue he is recording?After all, I can't think of any use case where I query for a \"range\" of uuid values.  They are always exact matches.  So a hash index would possibly be a really good fit.I have many tables, several with more than 1 billion rows, that use uuid's as the primary key.  Many of those uuid's are generated off system, so I can't play around with the uuid generation algorithm like he was doing.I'm hoping to move to PG 10 any day now, and can migrate the data with updated index definitions if it will actually help performance in any way.  (I'm always looking for ways to tweak the performance for the better any chance I get.)", "msg_date": "Tue, 2 Jan 2018 09:02:50 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "primary key hash index" }, { "msg_contents": "On Tue, Jan 2, 2018 at 3:02 PM, Rick Otten <[email protected]> wrote:\n\n> After reading this article about keys in relational databases, highlighted\n> on hacker news this morning:\n> https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.html\n>\n> I keep pondering the performance chart, regarding uuid insert, shown\n> towards the bottom of the article. I believe he was doing that test with\n> PostgreSQL.\n>\n> My understanding is that the performance is degrading because he has a\n> btree primary key index. Is it possible to try a hash index or some other\n> index type for a uuid primary key that would mitigate the performance issue\n> he is recording?\n>\n> After all, I can't think of any use case where I query for a \"range\" of\n> uuid values. They are always exact matches. So a hash index would\n> possibly be a really good fit.\n>\n> I have many tables, several with more than 1 billion rows, that use uuid's\n> as the primary key. 
Many of those uuid's are generated off system, so I\n> can't play around with the uuid generation algorithm like he was doing.\n>\n> I'm hoping to move to PG 10 any day now, and can migrate the data with\n> updated index definitions if it will actually help performance in any way.\n> (I'm always looking for ways to tweak the performance for the better any\n> chance I get.)\n>\n>\nHash indexes unfortunately don't support UNIQUE indexes. At least not yet.\nSo while you can use them for regular indexing, they cannot be used as a\nPRIMARY KEY.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Jan 2, 2018 at 3:02 PM, Rick Otten <[email protected]> wrote:After reading this article about keys in relational databases, highlighted on hacker news this morning:https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.htmlI keep pondering the performance chart, regarding uuid insert, shown towards the bottom of the article.  I believe he was doing that test with PostgreSQL.My understanding is that the performance is degrading because he has a btree primary key index.  Is it possible to try a hash index or some other index type for a uuid primary key that would mitigate the performance issue he is recording?After all, I can't think of any use case where I query for a \"range\" of uuid values.  They are always exact matches.  So a hash index would possibly be a really good fit.I have many tables, several with more than 1 billion rows, that use uuid's as the primary key.  Many of those uuid's are generated off system, so I can't play around with the uuid generation algorithm like he was doing.I'm hoping to move to PG 10 any day now, and can migrate the data with updated index definitions if it will actually help performance in any way.  (I'm always looking for ways to tweak the performance for the better any chance I get.)Hash indexes unfortunately don't support UNIQUE indexes. At least not yet. So while you can use them for regular indexing, they cannot be used as a PRIMARY KEY.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Tue, 2 Jan 2018 15:09:50 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: primary key hash index" }, { "msg_contents": "On Tue, Jan 2, 2018 at 6:02 AM, Rick Otten <[email protected]> wrote:\n\n> After reading this article about keys in relational databases, highlighted\n> on hacker news this morning:\n> https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.html\n>\n> I keep pondering the performance chart, regarding uuid insert, shown\n> towards the bottom of the article. I believe he was doing that test with\n> PostgreSQL.\n>\n> My understanding is that the performance is degrading because he has a\n> btree primary key index. Is it possible to try a hash index or some other\n> index type for a uuid primary key that would mitigate the performance issue\n> he is recording?\n>\n\nHash indexes do not yet support primary keys, but you could always test it\nwith just an plain index, since you already know the keys are unique via\nthe way they are constructed. But I wouldn't expect any real improvement.\nHash indexes still trigger FPW and still dirty massive numbers of pages in\na random fashion (even worse than btree does as far as randomness goes but\nsince the hash is more compact maybe more of the pages will be re-dirtied\nand so save on FPW or separate writes). 
I was surprised that turning off\nFPW was so effective for him, that suggests that maybe his checkpoints are\ntoo close together, which I guess means max_wal_size is too low.\n\nCheers,\n\nJeff\n\nOn Tue, Jan 2, 2018 at 6:02 AM, Rick Otten <[email protected]> wrote:After reading this article about keys in relational databases, highlighted on hacker news this morning:https://begriffs.com/posts/2018-01-01-sql-keys-in-depth.htmlI keep pondering the performance chart, regarding uuid insert, shown towards the bottom of the article.  I believe he was doing that test with PostgreSQL.My understanding is that the performance is degrading because he has a btree primary key index.  Is it possible to try a hash index or some other index type for a uuid primary key that would mitigate the performance issue he is recording?Hash indexes do not yet support primary keys, but you could always test it with just an plain index, since you already know the keys are unique via the way they are constructed.  But I wouldn't expect any real improvement.  Hash indexes still trigger FPW and still dirty massive numbers of pages in a random fashion (even worse than btree does as far as randomness goes but since the hash is more compact maybe more of the pages will be re-dirtied and so save on FPW or separate writes).  I was surprised that turning off FPW was so effective for him, that suggests that maybe his checkpoints are too close together, which I guess means max_wal_size is too low. Cheers,Jeff", "msg_date": "Thu, 4 Jan 2018 12:01:11 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: primary key hash index" } ]
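For anyone wanting to try what Jeff describes — a plain, non-unique hash index alongside a conventional btree primary key — a minimal sketch on PostgreSQL 10 follows; the table and column names are made up for the example:

    CREATE TABLE events (
        event_id uuid NOT NULL,
        payload  jsonb
    );

    -- uniqueness still has to come from a btree; hash indexes cannot back
    -- a PRIMARY KEY or UNIQUE constraint (as of PostgreSQL 10)
    ALTER TABLE events ADD CONSTRAINT events_pkey PRIMARY KEY (event_id);

    -- non-unique hash index for equality-only lookups; WAL-logged (crash safe)
    -- starting with PostgreSQL 10
    CREATE INDEX events_event_id_hash ON events USING hash (event_id);

    EXPLAIN SELECT * FROM events
    WHERE event_id = '123e4567-e89b-12d3-a456-426614174000';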
[ { "msg_contents": "Hi,\n\nWe recently had an issue in production. We have queries that are\nprocedurally generated by an Object/Relational Mapping framework. Some of\nthese queries are huge, involving over 120 tables.\n\nWith the following parameters the planner seemed to be getting very bad\nplans for some of these queries (times are from a single execution, but\nthey are in those orders of magnitude):\n\n----\nfrom_collapse_limit = 14\njoin_collapse_limit = 14\ngeqo_threshold = 14\ngeqo_effort= 5\n\n(cost=14691360.79..81261293.30 rows=6 width=15934)\n\n Planning time: 3859.928 ms\n Execution time: 6883365.973 ms\n----\n\nIf we raise the join_collapse_limit to a really high value the plans are\nmuch better, but (of course) planning time gets worse:\n\n----\nfrom_collapse_limit = 150\njoin_collapse_limit = 150\ngeqo_threshold = 14\ngeqo_effort= 5\n\n(cost=379719.44..562997.32 rows=7 width=15934)\n\n Planning time: 7112.416 ms\n Execution time: 7.741 ms\n----\n\nAfter some testing in order to lower the planning time we ended bringing\ndown the GEQO values, and we have the best results with:\n\n----\nfrom_collapse_limit = 150\njoin_collapse_limit = 150\ngeqo_threshold = 2\ngeqo_effort= 2\n\n(cost=406427.86..589667.55 rows=6 width=15934)\n\n Planning time: 2721.099 ms\n Execution time: 22.728 ms\n----\n\nIssues with the join_collapse_limit have been discussed before [1], but\nlowering the GEQO values seems counterintuitive based on the documentation\nfor this parameter [2]: \"Setting this value [join_collapse_limit] to\ngeqo_threshold or more may trigger use of the GEQO planner, resulting in\nnon-optimal plans.\"\n\nWhat we want to know is if this mechanisms are working as intended and we\ncan follow a similar approach in the future (lower GEQO values), or this is\njust a fluke for a corner case.\n\nI have been able to reproduce a similar behaviour, to a much smaller scale,\nwith the attached scripts in Postgres 10.\n\n[1] https://www.postgresql.org/message-id/25845.1483809942%40sss.pgh.pa.us\n[2] https://www.postgresql.org/docs/current/static/runtime-config-query.html\n\n\nRegards,\n\nJuan José Santamaría", "msg_date": "Fri, 5 Jan 2018 15:30:25 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "GEQO and join_collapse_limit correlation" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <[email protected]> writes:\n> We recently had an issue in production. We have queries that are\n> procedurally generated by an Object/Relational Mapping framework. Some of\n> these queries are huge, involving over 120 tables.\n\nYeah, you're going to have problems with that :-(\n\n> After some testing in order to lower the planning time we ended bringing\n> down the GEQO values, and we have the best results with:\n\n> from_collapse_limit = 150\n> join_collapse_limit = 150\n> geqo_threshold = 2\n> geqo_effort= 2\n\nHmm. The trouble with this approach is that you're relying on GEQO\nto find a good plan, and that's only probabilistic --- especially so\nwhen you're reducing geqo_effort, meaning it doesn't try as many\npossibilities as it otherwise might. 
Basically, therefore, the\nfear is that every so often you'll get a bad plan.\n\nIf the queries are fairly stylized, you might be able to get good \nresults by exploiting rather than bypassing join_collapse_limit:\ndetermine what a good join order is, and then write the FROM clause\nas an explicit JOIN nest in that order, and then *reduce* not raise\njoin_collapse_limit to force the planner to follow the syntactic\njoin order. In this way you'd get rid of most of the run-time\njoin order search effort. Don't know how cooperative your ORM\nwould be with such an approach though.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 05 Jan 2018 11:16:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO and join_collapse_limit correlation" }, { "msg_contents": "Hi,\n\n> After some testing in order to lower the planning time we ended bringing\n> > down the GEQO values, and we have the best results with:\n>\n> > from_collapse_limit = 150\n> > join_collapse_limit = 150\n> > geqo_threshold = 2\n> > geqo_effort= 2\n>\n> Hmm. The trouble with this approach is that you're relying on GEQO\n> to find a good plan, and that's only probabilistic --- especially so\n> when you're reducing geqo_effort, meaning it doesn't try as many\n> possibilities as it otherwise might. Basically, therefore, the\n> fear is that every so often you'll get a bad plan.\n>\n\nWhat we felt odd was having to find a balance between geqo_threshold and\njoin_collapse_limit, lowering one was only effective after raising the\nother. The geqo_effort was only mofidied after we found this path, and some\nmore testing.\n\nIn an environment with geqo_threshold=1 and join_collapse_limit=1, would\nthe planner be GEQO exclusive (and syntactic)?\n\nIf the queries are fairly stylized, you might be able to get good\n> results by exploiting rather than bypassing join_collapse_limit:\n> determine what a good join order is, and then write the FROM clause\n> as an explicit JOIN nest in that order, and then *reduce* not raise\n> join_collapse_limit to force the planner to follow the syntactic\n> join order. In this way you'd get rid of most of the run-time\n> join order search effort. Don't know how cooperative your ORM\n> would be with such an approach though.\n>\n\nThe ORM seems to build the join path just the other way round of what would\nbe good for the planner. The thing we should take a good look at if it is\nreally needed looking at +120 tables for a query that gets a pretty trivial\nresult, but that is completely off topic.\n\n\n> regards, tom lane\n>\n\nThanks for your repply.\n\nRegards,\n\nJuan José Santamaría\n\nHi,> After some testing in order to lower the planning time we ended bringing\n> down the GEQO values, and we have the best results with:\n\n> from_collapse_limit = 150\n> join_collapse_limit = 150\n> geqo_threshold = 2\n> geqo_effort= 2\n\nHmm.  The trouble with this approach is that you're relying on GEQO\nto find a good plan, and that's only probabilistic --- especially so\nwhen you're reducing geqo_effort, meaning it doesn't try as many\npossibilities as it otherwise might.  Basically, therefore, the\nfear is that every so often you'll get a bad plan.What we felt odd was having to find a balance between geqo_threshold and join_collapse_limit, lowering one was only effective after raising the other. 
The geqo_effort was only mofidied after we found this path, and some more testing.In an environment with geqo_threshold=1 and join_collapse_limit=1, would the planner be GEQO exclusive (and syntactic)?If the queries are fairly stylized, you might be able to get good\nresults by exploiting rather than bypassing join_collapse_limit:\ndetermine what a good join order is, and then write the FROM clause\nas an explicit JOIN nest in that order, and then *reduce* not raise\njoin_collapse_limit to force the planner to follow the syntactic\njoin order.  In this way you'd get rid of most of the run-time\njoin order search effort.  Don't know how cooperative your ORM\nwould be with such an approach though.The ORM seems to build the join path just the other way round of what would be good for the planner. The thing we should take a good look at if it is really needed looking at +120 tables for a query that gets a pretty trivial result, but that is completely off topic.                         regards, tom lane\nThanks for your repply.Regards,Juan José Santamaría", "msg_date": "Fri, 5 Jan 2018 21:17:06 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GEQO and join_collapse_limit correlation" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <[email protected]> writes:\n> In an environment with geqo_threshold=1 and join_collapse_limit=1, would\n> the planner be GEQO exclusive (and syntactic)?\n\nGEQO's only function, basically, is to search for the join order to use.\nIf you're constraining the join order completely with\njoin_collapse_limit=1 then forcing the GEQO path to be taken would just\nadd pointless overhead. (If it does anything at all ... I don't remember\nthe logic exactly but we might be bright enough not to bother with GEQO in\nsuch a situation, regardless of geqo_threshold.)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 05 Jan 2018 15:29:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO and join_collapse_limit correlation" }, { "msg_contents": "Hi,\n\nGEQO's only function, basically, is to search for the join order to use.\n> If you're constraining the join order completely with\n> join_collapse_limit=1 then forcing the GEQO path to be taken would just\n> add pointless overhead. (If it does anything at all ... I don't remember\n> the logic exactly but we might be bright enough not to bother with GEQO in\n> such a situation, regardless of geqo_threshold.)\n>\n\nGot it. Thanks a lot.\n\nRegards,\n\nJuan José Santamaría\n\nHi,GEQO's only function, basically, is to search for the join order to use.\nIf you're constraining the join order completely with\njoin_collapse_limit=1 then forcing the GEQO path to be taken would just\nadd pointless overhead.  (If it does anything at all ... I don't remember\nthe logic exactly but we might be bright enough not to bother with GEQO in\nsuch a situation, regardless of geqo_threshold.) Got it. Thanks a lot.Regards,Juan José Santamaría", "msg_date": "Sat, 6 Jan 2018 12:31:51 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GEQO and join_collapse_limit correlation" } ]
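To make the two approaches in this thread concrete, here are the session-level settings Juan reported as working, followed by Tom's alternative of pinning an explicit join order; the three-table join is only a stand-in for the ORM's 120-table query and the table names are invented:

    -- the settings that tested well above (session scope here)
    SET from_collapse_limit = 150;
    SET join_collapse_limit = 150;
    SET geqo_threshold = 2;
    SET geqo_effort = 2;

    -- Tom's alternative: spell out a known-good join order in the FROM clause
    -- and make the planner follow it syntactically
    SET join_collapse_limit = 1;
    SELECT count(*)
    FROM orders o
    JOIN order_lines l ON l.order_id = o.id
    JOIN products p    ON p.id = l.product_id
    WHERE o.status = 'open';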
[ { "msg_contents": "Hi Team,\n\nWe are using system with 60 GB RAM and 4 TB , and this is AWS EC2 instance.\nwe are seeing some times replication lag 6 to 10 seconds, as this is very\ncritical data, we should not see the lag, please help us on this to become\nlag zero.\n\nRegards,\n\nRambabu Vakada,\n\nPostgreSQL DBA,\n\n9849137684.\n\nHi Team,We are using system with 60 GB RAM and 4 TB , and this is AWS EC2 instance.we are seeing some times replication lag 6 to 10 seconds, as this is very critical data, we should not see the lag, please help us on this to become lag zero.Regards,Rambabu Vakada,PostgreSQL DBA,9849137684.", "msg_date": "Mon, 8 Jan 2018 14:51:58 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "seeing lag in postgresql replication" }, { "msg_contents": "Hi,\nWhat version of postgresql do you have ?\nDo you use streaming replcation or a different tool like repmgr or pgpool\nor something else ?\nWhat are all the configurations for you wals(the parameters depends on your\nversion..) ?\nDo you see any errors or warnings in the server log ?\n\nRegards, Mariel.\n\n\n2018-01-08 11:21 GMT+02:00 Rambabu V <[email protected]>:\n\n> Hi Team,\n>\n> We are using system with 60 GB RAM and 4 TB , and this is AWS EC2 instance.\n> we are seeing some times replication lag 6 to 10 seconds, as this is very\n> critical data, we should not see the lag, please help us on this to become\n> lag zero.\n>\n> Regards,\n>\n> Rambabu Vakada,\n>\n> PostgreSQL DBA,\n>\n> 9849137684.\n>\n\nHi,What version of postgresql do you have ?Do you use streaming replcation or a different tool like repmgr or pgpool or something else ?What are all the configurations for you wals(the parameters depends on your version..) ?Do you see any errors or warnings in the server log ?Regards, Mariel.2018-01-08 11:21 GMT+02:00 Rambabu V <[email protected]>:Hi Team,We are using system with 60 GB RAM and 4 TB , and this is AWS EC2 instance.we are seeing some times replication lag 6 to 10 seconds, as this is very critical data, we should not see the lag, please help us on this to become lag zero.Regards,Rambabu Vakada,PostgreSQL DBA,9849137684.", "msg_date": "Mon, 8 Jan 2018 12:31:56 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seeing lag in postgresql replication" } ]
[ { "msg_contents": "Hi Team,\n\nDaily 4000 Archive files are generating and these are occupying more space,\nwe are trying to compress wall files with using wal_compression parameter,\nbut we are not seeing any change in wal files count, could you please help\nus on this.\n\nHi Team,Daily 4000 Archive files are generating and these are occupying more space, we are trying to compress wall files with using wal_compression parameter, but we are not seeing any change in wal files count, could you please help us on this.", "msg_date": "Tue, 9 Jan 2018 12:23:24 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "Need Help on wal_compression" }, { "msg_contents": "On Tue, Jan 9, 2018 at 3:53 AM, Rambabu V <[email protected]> wrote:\n\n> Hi Team,\n>\n> Daily 4000 Archive files are generating and these are occupying more\n> space, we are trying to compress wall files with using wal_compression\n> parameter, but we are not seeing any change in wal files count, could you\n> please help us on this.\n>\n\nThat's very little information to go on.\n\nYou'll probably want to inspect WAL record stats before and after enabling\nwal_compression to see whether it makes sense to do so. Take a look at\npg_xlogdump --stats\n\nFor example:\n\n$ pg_xlogdump --stats -p /path/to/pg_xlog 000000010002C364000000F0\n000000010002C364000000FA\nType N (%) Record\nsize (%) FPI size (%) Combined size (%)\n---- - ---\n----------- --- -------- ---\n------------- ---\nXLOG 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nTransaction 11 ( 0.00)\n352 ( 0.00) 0 ( 0.00) 352 ( 0.00)\nStorage 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nCLOG 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nDatabase 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nTablespace 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nMultiXact 4 ( 0.00)\n208 ( 0.00) 0 ( 0.00) 208 ( 0.00)\nRelMap 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nStandby 2 ( 0.00)\n116 ( 0.00) 0 ( 0.00) 116 ( 0.00)\nHeap2 2504 ( 0.18)\n78468 ( 0.20) 1385576 ( 3.55) 1464044 ( 1.89)\nHeap 667619 ( 48.23)\n19432159 ( 50.47) 28641357 ( 73.35) 48073516 (\n61.99)\nBtree 712093 ( 51.45)\n18643846 ( 48.42) 9021270 ( 23.10) 27665116 (\n35.67)\nHash 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nGin 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nGist 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nSequence 1918 ( 0.14)\n349076 ( 0.91) 0 ( 0.00) 349076 ( 0.45)\nSPGist 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nBRIN 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nCommitTs 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\nReplicationOrigin 0 (\n0.00) 0 ( 0.00) 0 (\n0.00) 0 ( 0.00)\n --------\n-------- -------- --------\nTotal 1384151\n38504225 [49.65%] 39048203 [50.35%] 77552428 [100%]\n\n\nThat shows 50% of that are full page writes. This is with compression\nenabled. WAL compression will only help FPW, so if you don't have a large\nvolume of FPW, or they don't compress well, you won't benefit much.\n\nOn Tue, Jan 9, 2018 at 3:53 AM, Rambabu V <[email protected]> wrote:Hi Team,Daily 4000 Archive files are generating and these are occupying more space, we are trying to compress wall files with using wal_compression parameter, but we are not seeing any change in wal files count, could you please help us on this.\nThat's very little information to go on.You'll probably want to inspect WAL record stats before and after enabling wal_compression to see whether it makes sense to do so. 
Take a look at pg_xlogdump --statsFor example:$ pg_xlogdump --stats -p /path/to/pg_xlog 000000010002C364000000F0 000000010002C364000000FAType                                           N      (%)          Record size      (%)             FPI size      (%)        Combined size      (%)----                                           -      ---          -----------      ---             --------      ---        -------------      ---XLOG                                           0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Transaction                                   11 (  0.00)                  352 (  0.00)                    0 (  0.00)                  352 (  0.00)Storage                                        0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)CLOG                                           0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Database                                       0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Tablespace                                     0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)MultiXact                                      4 (  0.00)                  208 (  0.00)                    0 (  0.00)                  208 (  0.00)RelMap                                         0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Standby                                        2 (  0.00)                  116 (  0.00)                    0 (  0.00)                  116 (  0.00)Heap2                                       2504 (  0.18)                78468 (  0.20)              1385576 (  3.55)              1464044 (  1.89)Heap                                      667619 ( 48.23)             19432159 ( 50.47)             28641357 ( 73.35)             48073516 ( 61.99)Btree                                     712093 ( 51.45)             18643846 ( 48.42)              9021270 ( 23.10)             27665116 ( 35.67)Hash                                           0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Gin                                            0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Gist                                           0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)Sequence                                    1918 (  0.14)               349076 (  0.91)                    0 (  0.00)               349076 (  0.45)SPGist                                         0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)BRIN                                           0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)CommitTs                                       0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)ReplicationOrigin                              0 (  0.00)                    0 (  0.00)                    0 (  0.00)                    0 (  0.00)                                        --------                      --------                      --------                      --------Total                                    
1384151                      38504225 [49.65%]             39048203 [50.35%]             77552428 [100%]That shows 50% of that are full page writes. This is with compression enabled. WAL compression will only help FPW, so if you don't have a large volume of FPW, or they don't compress well, you won't benefit much.", "msg_date": "Tue, 9 Jan 2018 13:53:14 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need Help on wal_compression" }, { "msg_contents": "On Mon, Jan 8, 2018 at 11:53 PM, Rambabu V <[email protected]> wrote:\n> Hi Team,\n>\n> Daily 4000 Archive files are generating and these are occupying more space,\n> we are trying to compress wall files with using wal_compression parameter,\n> but we are not seeing any change in wal files count, could you please help\n> us on this.\n\nCompression won't change the number of wal files, it will just make\nthe ones created smaller.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n", "msg_date": "Tue, 9 Jan 2018 13:39:33 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need Help on wal_compression" }, { "msg_contents": "On Tue, Jan 9, 2018 at 1:53 AM, Rambabu V <[email protected]> wrote:\n\n> Hi Team,\n>\n> Daily 4000 Archive files are generating and these are occupying more\n> space, we are trying to compress wall files with using wal_compression\n> parameter, but we are not seeing any change in wal files count, could you\n> please help us on this.\n>\n\nIf the number of files is driven by archive_timeout, then no reduction in\nthe number of them would be expected by turning on wal_compression.\n\nIf the number of files is driven by the 16MB limit on each file, then it is\nsurprising that wal_compression did not change it. (But the difference\nmight not be all that large, depending on the type of transactions and data\nyou are working with.)\n\nI use an external compression program, xz, which compresses very well. But\nit is slow and has trouble keeping up at times of peak activity (e.g. bulk\nloads or updates, or reindexing). It reduces the aggregate size, but not\nthe number of files.\n\nCheers,\n\nJeff\n\nOn Tue, Jan 9, 2018 at 1:53 AM, Rambabu V <[email protected]> wrote:Hi Team,Daily 4000 Archive files are generating and these are occupying more space, we are trying to compress wall files with using wal_compression parameter, but we are not seeing any change in wal files count, could you please help us on this.\nIf the number of files is driven by archive_timeout, then no reduction in the number of them would be expected by turning on wal_compression.If the number of files is driven by the 16MB limit on each file, then it is surprising that wal_compression did not change it. (But the difference might not be all that large, depending on the type of transactions and data you are working with.)I use an external compression program, xz, which compresses very well.  But it is slow and has trouble keeping up at times of peak activity (e.g. bulk loads or updates, or reindexing).  It reduces the aggregate size, but not the number of files.Cheers,Jeff", "msg_date": "Tue, 9 Jan 2018 16:33:05 -0500", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need Help on wal_compression" }, { "msg_contents": "On Tue, Jan 09, 2018 at 01:53:14PM -0300, Claudio Freire wrote:\n> That shows 50% of that are full page writes. This is with compression\n> enabled. 
WAL compression will only help FPW, so if you don't have a large\n> volume of FPW, or they don't compress well, you won't benefit much.\n\nThis highly depends on the data types used as well. You won't get much\ncompressibility with things like UUIDs for example. When we worked on\nthe patch, I recall that FDW compression saved 25% for a relation with a\none-column integer, and only 12~15% when using UUIDs.\n--\nMichael", "msg_date": "Wed, 10 Jan 2018 10:24:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need Help on wal_compression" } ]
[ { "msg_contents": "Hello Gurus,\n\nI am struggling to tune a query which is doing join on top of aggregate for around 3 million rows. The plan and SQL is attached to the email.\n\nBelow is system Details:\n\nPGSQL version - 10.1\nOS - RHEL 3.10.0-693.5.2.el7.x86_64\nBinary - Dowloaded from postgres.org compiled and installed.\nHardware - Virtual Machine with 8vCPU and 32GB of RAM, on XFS filesystem.\n\n\nPlease let me know if you need more information.\n\n\nRegards,\nVirendra\n\n________________________________\n\nThis message is intended only for the use of the addressee and may contain\ninformation that is PRIVILEGED AND CONFIDENTIAL.\n\nIf you are not the intended recipient, you are hereby notified that any\ndissemination of this communication is strictly prohibited. If you have\nreceived this communication in error, please erase all copies of the message\nand its attachments and notify the sender immediately. Thank you.", "msg_date": "Tue, 9 Jan 2018 21:18:02 +0000", "msg_from": "\"Kumar, Virendra\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of a Query" }, { "msg_contents": "On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra\n<[email protected]> wrote:\n> Hello Gurus,\n>\n> I am struggling to tune a query which is doing join on top of aggregate for\n> around 3 million rows. The plan and SQL is attached to the email.\n>\n> Below is system Details:\n>\n> PGSQL version – 10.1\n>\n> OS – RHEL 3.10.0-693.5.2.el7.x86_64\n>\n> Binary – Dowloaded from postgres.org compiled and installed.\n>\n> Hardware – Virtual Machine with 8vCPU and 32GB of RAM, on XFS filesystem.\n\nI uploaded your query plan here: https://explain.depesz.com/s/14r6\n\nThe most expensive part is the merge join at the end.\n\nLines like this one: \"Buffers: shared hit=676 read=306596, temp\nread=135840 written=135972\"\n\nTell me that your sorts etc are spilling to disk, so the first thing\nto try is upping work_mem a bit. Don't go crazy, as it can run your\nmachine out of memory if you do. but doubling or tripling it and\nseeing the effect on the query performance is a good place to start.\n\nThe good news is that most of your row estimates are about right, so\nthe query planner is doing what it can to make the query fast, but I'm\nguessing if you get the work_mem high enough it will switch from a\nmerge join to a hash_join or something more efficient for large\nnumbers of rows.\n\n", "msg_date": "Tue, 9 Jan 2018 15:08:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of a Query" }, { "msg_contents": "Thank you Scott!\r\nI have current work_mem set as 4MB, shared_buffers to 8GB, hugepages on.\r\nI gradually increased the work_mem to 1GB but it did not help a bit. Am I missing something obvious.\r\n\r\nRegards,\r\nVirendra\r\n-----Original Message-----\r\nFrom: Scott Marlowe [mailto:[email protected]]\r\nSent: Tuesday, January 09, 2018 5:08 PM\r\nTo: Kumar, Virendra\r\nCc: [email protected]\r\nSubject: Re: Performance of a Query\r\n\r\nOn Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra <[email protected]> wrote:\r\n> Hello Gurus,\r\n>\r\n> I am struggling to tune a query which is doing join on top of\r\n> aggregate for around 3 million rows. 
The plan and SQL is attached to the email.\r\n>\r\n> Below is system Details:\r\n>\r\n> PGSQL version – 10.1\r\n>\r\n> OS – RHEL 3.10.0-693.5.2.el7.x86_64\r\n>\r\n> Binary – Dowloaded from postgres.org compiled and installed.\r\n>\r\n> Hardware – Virtual Machine with 8vCPU and 32GB of RAM, on XFS filesystem.\r\n\r\nI uploaded your query plan here: https://explain.depesz.com/s/14r6\r\n\r\nThe most expensive part is the merge join at the end.\r\n\r\nLines like this one: \"Buffers: shared hit=676 read=306596, temp\r\nread=135840 written=135972\"\r\n\r\nTell me that your sorts etc are spilling to disk, so the first thing to try is upping work_mem a bit. Don't go crazy, as it can run your machine out of memory if you do. but doubling or tripling it and seeing the effect on the query performance is a good place to start.\r\n\r\nThe good news is that most of your row estimates are about right, so the query planner is doing what it can to make the query fast, but I'm guessing if you get the work_mem high enough it will switch from a merge join to a hash_join or something more efficient for large numbers of rows.\r\n\r\n\r\n________________________________\r\n\r\nThis message is intended only for the use of the addressee and may contain\r\ninformation that is PRIVILEGED AND CONFIDENTIAL.\r\n\r\nIf you are not the intended recipient, you are hereby notified that any\r\ndissemination of this communication is strictly prohibited. If you have\r\nreceived this communication in error, please erase all copies of the message\r\nand its attachments and notify the sender immediately. Thank you.\r\n", "msg_date": "Tue, 9 Jan 2018 22:25:59 +0000", "msg_from": "\"Kumar, Virendra\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Performance of a Query" }, { "msg_contents": "On Tue, Jan 9, 2018 at 3:25 PM, Kumar, Virendra\n<[email protected]> wrote:\n> Thank you Scott!\n> I have current work_mem set as 4MB, shared_buffers to 8GB, hugepages on.\n> I gradually increased the work_mem to 1GB but it did not help a bit. Am I missing something obvious.\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Tuesday, January 09, 2018 5:08 PM\n> To: Kumar, Virendra\n> Cc: [email protected]\n> Subject: Re: Performance of a Query\n\nTry it with something reasonable like 64MB and then post your query\nplans to explain.depesz and then here and let's compare. Note that\nsome queries are just slow, and this one is handling a lot of data, so\nthere's only so much to do if an index won't fix it.\n\n", "msg_date": "Tue, 9 Jan 2018 15:59:59 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of a Query" }, { "msg_contents": "It did not seem to help.\r\nSee attachment.\r\n\r\n\r\nRegards,\r\nVirendra\r\n-----Original Message-----\r\nFrom: Scott Marlowe [mailto:[email protected]]\r\nSent: Tuesday, January 09, 2018 6:00 PM\r\nTo: Kumar, Virendra\r\nCc: [email protected]\r\nSubject: Re: Performance of a Query\r\n\r\nOn Tue, Jan 9, 2018 at 3:25 PM, Kumar, Virendra <[email protected]> wrote:\r\n> Thank you Scott!\r\n> I have current work_mem set as 4MB, shared_buffers to 8GB, hugepages on.\r\n> I gradually increased the work_mem to 1GB but it did not help a bit. 
Am I missing something obvious.\r\n> From: Scott Marlowe [mailto:[email protected]]\r\n> Sent: Tuesday, January 09, 2018 5:08 PM\r\n> To: Kumar, Virendra\r\n> Cc: [email protected]\r\n> Subject: Re: Performance of a Query\r\n\r\nTry it with something reasonable like 64MB and then post your query plans to explain.depesz and then here and let's compare. Note that some queries are just slow, and this one is handling a lot of data, so there's only so much to do if an index won't fix it.\r\n\r\n\r\n________________________________\r\n\r\nThis message is intended only for the use of the addressee and may contain\r\ninformation that is PRIVILEGED AND CONFIDENTIAL.\r\n\r\nIf you are not the intended recipient, you are hereby notified that any\r\ndissemination of this communication is strictly prohibited. If you have\r\nreceived this communication in error, please erase all copies of the message\r\nand its attachments and notify the sender immediately. Thank you.", "msg_date": "Tue, 9 Jan 2018 23:09:13 +0000", "msg_from": "\"Kumar, Virendra\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Performance of a Query" }, { "msg_contents": "On Tue, Jan 9, 2018 at 4:09 PM, Kumar, Virendra\n<[email protected]> wrote:\n> It did not seem to help.\n> See attachment.\n\nYeah while it's still writing, it's about half as much but most of the\ntime seems to be in merging etc multiple data sets. I'm wondering\nwhat non-default values you might have set otherwise. Are you running\non SSDs? If so lowering random_page_cost might help, but again, this\nmight just be a very expensive query as well.\n\n", "msg_date": "Tue, 9 Jan 2018 16:25:58 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of a Query" }, { "msg_contents": "Can you try to extract filter part as CTE? Like\n\nwith filtered as (select ... where policyid = 123456)\nselect ... (here comes original query but uses filtered table instead)\n\n10 янв. 2018 г. 1:10 пользователь \"Kumar, Virendra\" <\[email protected]> написал:\n\nIt did not seem to help.\nSee attachment.\n\n\nRegards,\nVirendra\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]]\nSent: Tuesday, January 09, 2018 6:00 PM\nTo: Kumar, Virendra\nCc: [email protected]\nSubject: Re: Performance of a Query\n\nOn Tue, Jan 9, 2018 at 3:25 PM, Kumar, Virendra <[email protected]>\nwrote:\n> Thank you Scott!\n> I have current work_mem set as 4MB, shared_buffers to 8GB, hugepages on.\n> I gradually increased the work_mem to 1GB but it did not help a bit. Am I\nmissing something obvious.\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Tuesday, January 09, 2018 5:08 PM\n> To: Kumar, Virendra\n> Cc: [email protected]\n> Subject: Re: Performance of a Query\n\nTry it with something reasonable like 64MB and then post your query plans\nto explain.depesz and then here and let's compare. Note that some queries\nare just slow, and this one is handling a lot of data, so there's only so\nmuch to do if an index won't fix it.\n\n\n________________________________\n\nThis message is intended only for the use of the addressee and may contain\ninformation that is PRIVILEGED AND CONFIDENTIAL.\n\nIf you are not the intended recipient, you are hereby notified that any\ndissemination of this communication is strictly prohibited. If you have\nreceived this communication in error, please erase all copies of the message\nand its attachments and notify the sender immediately. 
Thank you.\n\nCan you try to extract filter part as CTE? Likewith filtered as (select ... where policyid = 123456)select ... (here comes original query but uses filtered table instead)10 янв. 2018 г. 1:10 пользователь \"Kumar, Virendra\" <[email protected]> написал:It did not seem to help.\nSee attachment.\n\n\nRegards,\nVirendra\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]]\nSent: Tuesday, January 09, 2018 6:00 PM\nTo: Kumar, Virendra\nCc: [email protected]\nSubject: Re: Performance of a Query\n\nOn Tue, Jan 9, 2018 at 3:25 PM, Kumar, Virendra <[email protected]> wrote:\n> Thank you Scott!\n> I have current work_mem set as 4MB, shared_buffers to 8GB, hugepages on.\n> I gradually increased the work_mem to 1GB but it did not help a bit. Am I missing something obvious.\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Tuesday, January 09, 2018 5:08 PM\n> To: Kumar, Virendra\n> Cc: [email protected]\n> Subject: Re: Performance of a Query\n\nTry it with something reasonable like 64MB and then post your query plans to explain.depesz and then here and let's compare. Note that some queries are just slow, and this one is handling a lot of data, so there's only so much to do if an index won't fix it.\n\n\n________________________________\n\nThis message is intended only for the use of the addressee and may contain\ninformation that is PRIVILEGED AND CONFIDENTIAL.\n\nIf you are not the intended recipient, you are hereby notified that any\ndissemination of this communication is strictly prohibited. If you have\nreceived this communication in error, please erase all copies of the message\nand its attachments and notify the sender immediately. Thank you.", "msg_date": "Wed, 10 Jan 2018 05:29:12 +0200", "msg_from": "Danylo Hlynskyi <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Performance of a Query" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra\n> <[email protected]> wrote:\n> > Hello Gurus,\n> > \n> > I am struggling to tune a query which is doing join on top of aggregate for\n> > around 3 million rows. The plan and SQL is attached to the email.\n> > \n> > Below is system Details:\n> > \n> > PGSQL version – 10.1\n> > \n> > OS – RHEL 3.10.0-693.5.2.el7.x86_64\n> > \n> > Binary – Dowloaded from postgres.org compiled and installed.\n> > \n> > Hardware – Virtual Machine with 8vCPU and 32GB of RAM, on XFS filesystem.\n> \n> I uploaded your query plan here: https://explain.depesz.com/s/14r6\n> \n> The most expensive part is the merge join at the end.\n> \n> Lines like this one: \"Buffers: shared hit=676 read=306596, temp\n> read=135840 written=135972\"\n> \n> Tell me that your sorts etc are spilling to disk, so the first thing\n> to try is upping work_mem a bit. Don't go crazy, as it can run your\n> machine out of memory if you do. 
but doubling or tripling it and\n> seeing the effect on the query performance is a good place to start.\n> \n> The good news is that most of your row estimates are about right, so\n> the query planner is doing what it can to make the query fast, but I'm\n> guessing if you get the work_mem high enough it will switch from a\n> merge join to a hash_join or something more efficient for large\n> numbers of rows.\n\nLooking at the plan, I'd guess that the following index could be helpful:\n\nCREATE INDEX ON ap.site_exposure(portfolio_id, peril_id, account_id);\n\nDon't know how much it would buy you, but you could avoid the\nsequential scan and the sort that way.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 10 Jan 2018 09:51:47 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of a Query" } ]
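Putting Laurenz's and Scott's suggestions into runnable form — the index columns follow the merge keys he identified on ap.site_exposure, and the memory setting is scoped to the session under test rather than changed globally:

    -- index so ap.site_exposure can be scanned in merge-join order,
    -- avoiding the seq scan + sort
    CREATE INDEX site_exposure_portfolio_peril_account_idx
        ON ap.site_exposure (portfolio_id, peril_id, account_id);
    ANALYZE ap.site_exposure;

    -- try larger sort/hash memory for this session only
    SET work_mem = '256MB';
    -- re-run the original EXPLAIN (ANALYZE, BUFFERS) query here and compare plans
    RESET work_mem;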
[ { "msg_contents": "Hi,\r\n\r\nA view got converted to postgresql, performance while querying the view in postgresql is 10X longer compared to oracle.\r\nHardware resources are matching between oracle and postgresql.\r\n\r\nOracle version - Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production (RHEL7)\r\nPostgresql database version - PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit (Amazon RDS)\r\n\r\nFollowing details from oracle database.\r\n\r\n\r\nSQL> set autot traceonly exp stat\r\nSQL> SELECT IAT_ID, IAT_NAME, IAT_TYPE, IAV_VALUE, IAV_APPROVED FROM V_ITEM_ATTRIBUTEs WHERE IAV_ITM_ID = 2904107;\r\n\r\n66 rows selected.\r\n\r\nElapsed: 00:00:00.02\r\n\r\nExecution Plan\r\n----------------------------------------------------------\r\nPlan hash value: 1137648293\r\n\r\n-------------------------------------------------------------------------------------------------------\r\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\r\n-------------------------------------------------------------------------------------------------------\r\n| 0 | SELECT STATEMENT | | 1 | 107 | 8 (0)| 00:00:01 |\r\n| 1 | NESTED LOOPS | | 1 | 107 | 8 (0)| 00:00:01 |\r\n| 2 | NESTED LOOPS | | 1 | 107 | 8 (0)| 00:00:01 |\r\n| 3 | NESTED LOOPS | | 1 | 77 | 7 (0)| 00:00:01 |\r\n| 4 | VIEW | VW_SQ_1 | 1 | 39 | 4 (0)| 00:00:01 |\r\n| 5 | HASH GROUP BY | | 1 | 14 | 4 (0)| 00:00:01 |\r\n|* 6 | INDEX RANGE SCAN | UNIQUE_IAV_VERSION | 23 | 322 | 4 (0)| 00:00:01 |\r\n| 7 | TABLE ACCESS BY INDEX ROWID| ITEM_ATTRIBUTE_VALUE | 1 | 38 | 3 (0)| 00:00:01 |\r\n|* 8 | INDEX UNIQUE SCAN | UNIQUE_IAV_VERSION | 1 | | 2 (0)| 00:00:01 |\r\n|* 9 | INDEX UNIQUE SCAN | PK_IAT_ID | 1 | | 0 (0)| 00:00:01 |\r\n| 10 | TABLE ACCESS BY INDEX ROWID | ITEM_ATTRIBUTE | 1 | 30 | 1 (0)| 00:00:01 |\r\n-------------------------------------------------------------------------------------------------------\r\n\r\nPredicate Information (identified by operation id):\r\n---------------------------------------------------\r\n\r\n 6 - access(\"B\".\"IAV_ITM_ID\"=2904107)\r\n 8 - access(\"A\".\"IAV_ITM_ID\"=2904107 AND \"ITEM_2\"=\"A\".\"IAV_IAT_ID\" AND\r\n \"A\".\"IAV_VERSION\"=\"MAX(B.IAV_VERSION)\")\r\n 9 - access(\"A\".\"IAV_IAT_ID\"=\"IAT_ID\")\r\n\r\n\r\nStatistics\r\n----------------------------------------------------------\r\n 0 recursive calls\r\n 0 db block gets\r\n 10047 consistent gets\r\n 0 physical reads\r\n 0 redo size\r\n 4346 bytes sent via SQL*Net to client\r\n 568 bytes received via SQL*Net from client\r\n 6 SQL*Net roundtrips to/from client\r\n 0 sorts (memory)\r\n 0 sorts (disk)\r\n 66 rows processed\r\n\r\n\r\n\r\nSQL execution details on Postgredql Database.\r\n\r\n\r\nqpsnap1pg=> explain (analyze on, buffers on, timing on) SELECT IAT_ID, IAT_NAME, IAT_TYPE, IAV_VALUE, IAV_APPROVED FROM V_ITEM_ATTRIBUTEs WHERE IAV_ITM_ID = 2904107;\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nNested Loop (cost=0.84..1282.74 rows=3 width=53) (actual time=0.904..464.233 rows=66 loops=1)\r\n Buffers: shared hit=65460\r\n -> Index Scan using idx_iav_itm_id on item_attribute_value a (cost=0.57..1275.83 rows=3 width=29) (actual time=0.895..463.787 rows=66 loops=1)\r\n Index Cond: (iav_itm_id = '2904107'::numeric)\r\n Filter: (iav_version = (SubPlan 2))\r\n Rows Removed by Filter: 11931\r\n Buffers: shared 
hit=65261\r\n SubPlan 2\r\n -> Result (cost=1.87..1.88 rows=1 width=32) (actual time=0.036..0.036 rows=1 loops=11997)\r\n Buffers: shared hit=59985\r\n InitPlan 1 (returns $2)\r\n -> Limit (cost=0.57..1.87 rows=1 width=5) (actual time=0.034..0.034 rows=1 loops=11997)\r\n Buffers: shared hit=59985\r\n -> Index Only Scan Backward using unique_iav_version on item_attribute_value b (cost=0.57..3.17 rows=2 width=5) (actual time=0.032..0.032 rows=1 loops=11997)\r\n Index Cond: ((iav_itm_id = a.iav_itm_id) AND (iav_iat_id = a.iav_iat_id) AND (iav_version IS NOT NULL))\r\n Heap Fetches: 11997\r\n Buffers: shared hit=59985\r\n -> Index Scan using pk_iat_id on item_attribute (cost=0.28..2.29 rows=1 width=29) (actual time=0.003..0.004 rows=1 loops=66)\r\n Index Cond: (iat_id = a.iav_iat_id)\r\n Buffers: shared hit=199\r\nPlanning time: 0.554 ms\r\nExecution time: 464.439 ms\r\n(22 rows)\r\n\r\nTime: 1616.691 ms\r\nqpsnap1pg=>\r\n\r\nV_item_attributes view code as below, same in oracle and postgresql.\r\n-------------------------------------------------------------------------------------\r\nSELECT a.iav_id,\r\n a.iav_itm_id,\r\n a.iav_iat_id,\r\n a.iav_value,\r\n a.iav_version,\r\n a.iav_approved,\r\n a.iav_create_date,\r\n a.iav_created_by,\r\n a.iav_modify_date,\r\n a.iav_modified_by,\r\n item_attribute.iat_id,\r\n item_attribute.iat_name,\r\n item_attribute.iat_type,\r\n item_attribute.iat_status,\r\n item_attribute.iat_requires_approval,\r\n item_attribute.iat_multi_valued,\r\n item_attribute.iat_inheritable,\r\n item_attribute.iat_create_date,\r\n item_attribute.iat_created_by,\r\n item_attribute.iat_modify_date,\r\n item_attribute.iat_modified_by,\r\n item_attribute.iat_translated\r\n FROM (item_attribute_value a\r\n JOIN item_attribute ON ((a.iav_iat_id = item_attribute.iat_id)))\r\n WHERE (a.iav_version = ( SELECT max(b.iav_version) AS max\r\n FROM item_attribute_value b\r\n WHERE ((b.iav_itm_id = a.iav_itm_id) AND (b.iav_iat_id = a.iav_iat_id))));\r\n\r\n\r\nOracle is using push predicate of IAV_ITM_ID column wherever item_attribute_values table being used.\r\nAny alternatives available to reduce view execution time in postgresql database or any hints, thoughts would be appreciated.\r\n\r\nThanks,\r\nPavan.\r\n\r\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nA view got converted to postgresql, performance while querying the view in postgresql is 10X longer compared to oracle.\nHardware resources are matching between oracle and postgresql.\n \nOracle version - Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production (RHEL7)\r\n\nPostgresql database version - PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit (Amazon RDS)\n \nFollowing details from oracle database.\n \n \nSQL> set autot traceonly exp stat\nSQL> SELECT IAT_ID, IAT_NAME, IAT_TYPE, IAV_VALUE, IAV_APPROVED FROM V_ITEM_ATTRIBUTEs WHERE IAV_ITM_ID = 2904107;\n \n66 rows selected.\n \nElapsed: 00:00:00.02\n \nExecution Plan\n----------------------------------------------------------\nPlan hash value: 1137648293\n \n-------------------------------------------------------------------------------------------------------\n| Id  | Operation              | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |\n-------------------------------------------------------------------------------------------------------\n|   0 | SELECT STATEMENT             |                |     1 |   107 |     8 (0)| 00:00:01 |\n|   1 |  NESTED LOOPS                |                |     1 |   
107 |     8 (0)| 00:00:01 |\n|   2 |   NESTED LOOPS               |                |     1 |   107 |     8 (0)| 00:00:01 |\n|   3 |    NESTED LOOPS              |                |     1 |    77 |     7 (0)| 00:00:01 |\n|   4 |     VIEW                   | VW_SQ_1          |     1 |    39 |     4 (0)| 00:00:01 |\n|   5 |      HASH GROUP BY           |                |     1 |    14 |     4 (0)| 00:00:01 |\n|*  6 |       INDEX RANGE SCAN             | UNIQUE_IAV_VERSION   |    23 |   322 |     4     (0)| 00:00:01 |\n|   7 |     TABLE ACCESS BY INDEX ROWID| ITEM_ATTRIBUTE_VALUE |     1 |    38 |     3      (0)| 00:00:01 |\n|*  8 |      INDEX UNIQUE SCAN             | UNIQUE_IAV_VERSION   |     1 |       |     2     (0)| 00:00:01 |\n|*  9 |    INDEX UNIQUE SCAN         | PK_IAT_ID            |     1 |       |     0 (0)| 00:00:01 |\n|  10 |   TABLE ACCESS BY INDEX ROWID  | ITEM_ATTRIBUTE       |     1 |    30 |     1      (0)| 00:00:01 |\n-------------------------------------------------------------------------------------------------------\n \nPredicate Information (identified by operation id):\n---------------------------------------------------\n \n   6 - access(\"B\".\"IAV_ITM_ID\"=2904107)\n   8 - access(\"A\".\"IAV_ITM_ID\"=2904107 AND \"ITEM_2\"=\"A\".\"IAV_IAT_ID\" AND\n            \"A\".\"IAV_VERSION\"=\"MAX(B.IAV_VERSION)\")\n   9 - access(\"A\".\"IAV_IAT_ID\"=\"IAT_ID\")\n \n \nStatistics\n----------------------------------------------------------\n        0  recursive calls\n        0  db block gets\n      10047  consistent gets\n        0  physical reads\n        0  redo size\n       4346  bytes sent via SQL*Net to client\n      568  bytes received via SQL*Net from client\n        6  SQL*Net roundtrips to/from client\n        0  sorts (memory)\n        0  sorts (disk)\n      66  rows processed\n \n \n \nSQL execution details on Postgredql Database.\n \n \nqpsnap1pg=> explain (analyze on, buffers on, timing on) SELECT IAT_ID, IAT_NAME, IAT_TYPE, IAV_VALUE, IAV_APPROVED FROM V_ITEM_ATTRIBUTEs WHERE IAV_ITM_ID = 2904107;\n                                                                                       QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nNested Loop  (cost=0.84..1282.74 rows=3 width=53) (actual time=0.904..464.233 rows=66 loops=1)\n   Buffers: shared hit=65460\n   ->  Index Scan using idx_iav_itm_id on item_attribute_value a  (cost=0.57..1275.83 rows=3 width=29) (actual time=0.895..463.787 rows=66 loops=1)\n         Index Cond: (iav_itm_id = '2904107'::numeric)\n         Filter: (iav_version = (SubPlan 2))\n         Rows Removed by Filter: 11931\n         Buffers: shared hit=65261\n         SubPlan 2\n           ->  Result  (cost=1.87..1.88 rows=1 width=32) (actual time=0.036..0.036 rows=1 loops=11997)\n                 Buffers: shared hit=59985\n                 InitPlan 1 (returns $2)\n                   ->  Limit  (cost=0.57..1.87 rows=1 width=5) (actual time=0.034..0.034 rows=1 loops=11997)\n                         Buffers: shared hit=59985\n                         ->  Index Only Scan Backward using unique_iav_version on item_attribute_value b  (cost=0.57..3.17 rows=2 width=5) (actual time=0.032..0.032 rows=1 loops=11997)\n                               Index Cond: ((iav_itm_id = a.iav_itm_id) AND (iav_iat_id = a.iav_iat_id) AND (iav_version IS NOT NULL))\n                               Heap Fetches: 11997\n       
                        Buffers: shared hit=59985\n   ->  Index Scan using pk_iat_id on item_attribute  (cost=0.28..2.29 rows=1 width=29) (actual time=0.003..0.004 rows=1 loops=66)\n         Index Cond: (iat_id = a.iav_iat_id)\n         Buffers: shared hit=199\nPlanning time: 0.554 ms\nExecution time: 464.439 ms\n(22 rows)\n \nTime: 1616.691 ms\nqpsnap1pg=>\n \nV_item_attributes view code as below, same in oracle and postgresql.\n-------------------------------------------------------------------------------------\nSELECT a.iav_id,\n    a.iav_itm_id,\n    a.iav_iat_id,\n    a.iav_value,\n    a.iav_version,\n    a.iav_approved,\n    a.iav_create_date,\n    a.iav_created_by,\n    a.iav_modify_date,\n    a.iav_modified_by,\n    item_attribute.iat_id,\n    item_attribute.iat_name,\n    item_attribute.iat_type,\n    item_attribute.iat_status,\n    item_attribute.iat_requires_approval,\n    item_attribute.iat_multi_valued,\n    item_attribute.iat_inheritable,\n    item_attribute.iat_create_date,\n    item_attribute.iat_created_by,\n    item_attribute.iat_modify_date,\n    item_attribute.iat_modified_by,\n    item_attribute.iat_translated\n   FROM (item_attribute_value a\n     JOIN item_attribute ON ((a.iav_iat_id = item_attribute.iat_id)))\n  WHERE (a.iav_version = ( SELECT max(b.iav_version) AS max\n           FROM item_attribute_value b\n          WHERE ((b.iav_itm_id = a.iav_itm_id) AND (b.iav_iat_id = a.iav_iat_id))));\n \n \nOracle is using push predicate of IAV_ITM_ID column wherever item_attribute_values table being used.\nAny alternatives available to reduce view execution time in postgresql database or any hints, thoughts would be appreciated.\n \nThanks,\nPavan.", "msg_date": "Tue, 9 Jan 2018 21:32:33 +0000", "msg_from": "\"Reddygari, Pavan\" <[email protected]>", "msg_from_op": true, "msg_subject": "View preformance oracle to postgresql" }, { "msg_contents": "Pavan Reddygari wrote:\n> A view got converted to postgresql, performance while querying the view in postgresql is 10X longer compared to oracle.\n> Hardware resources are matching between oracle and postgresql.\n> \n> V_item_attributes view code as below, same in oracle and postgresql.\n> -------------------------------------------------------------------------------------\n> SELECT a.iav_id,\n> a.iav_itm_id,\n> a.iav_iat_id,\n> a.iav_value,\n> a.iav_version,\n> a.iav_approved,\n> a.iav_create_date,\n> a.iav_created_by,\n> a.iav_modify_date,\n> a.iav_modified_by,\n> item_attribute.iat_id,\n> item_attribute.iat_name,\n> item_attribute.iat_type,\n> item_attribute.iat_status,\n> item_attribute.iat_requires_approval,\n> item_attribute.iat_multi_valued,\n> item_attribute.iat_inheritable,\n> item_attribute.iat_create_date,\n> item_attribute.iat_created_by,\n> item_attribute.iat_modify_date,\n> item_attribute.iat_modified_by,\n> item_attribute.iat_translated\n> FROM (item_attribute_value a\n> JOIN item_attribute ON ((a.iav_iat_id = item_attribute.iat_id)))\n> WHERE (a.iav_version = ( SELECT max(b.iav_version) AS max\n> FROM item_attribute_value b\n> WHERE ((b.iav_itm_id = a.iav_itm_id) AND (b.iav_iat_id = a.iav_iat_id))));\n> \n> \n> Oracle is using push predicate of IAV_ITM_ID column wherever item_attribute_values table being used.\n> Any alternatives available to reduce view execution time in postgresql database or any hints, thoughts would be appreciated.\n\nIf (iav_version, iav_itm_id, iav_iat_id) is unique, you could use\n\n SELECT DISTINCT ON (a.iav_itm_id, a.iav_iat_id)\n ...\n FROM item_attribute_value a JOIN item_attribute b 
ON ...\n ORDER BY a.iav_version DESC;\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 10 Jan 2018 11:41:04 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View preformance oracle to postgresql" }, { "msg_contents": "On Tue, Jan 9, 2018 at 3:32 PM, Reddygari, Pavan <[email protected]> wrote:\n>\n> A view got converted to postgresql, performance while querying the view in postgresql is 10X longer compared to oracle.\n>\n> FROM (item_attribute_value a\n> JOIN item_attribute ON ((a.iav_iat_id = item_attribute.iat_id)))\n> WHERE (a.iav_version = ( SELECT max(b.iav_version) AS max\n> FROM item_attribute_value b\n> WHERE ((b.iav_itm_id = a.iav_itm_id) AND (b.iav_iat_id =\n> a.iav_iat_id))));\n\ncan you try rewriting the (more sanely formatted)\nFROM item_attribute_value a\nJOIN item_attribute ON a.iav_iat_id = item_attribute.iat_id\nWHERE a.iav_version =\n (\n SELECT max(b.iav_version) AS max\n FROM item_attribute_value b\n WHERE\n b.iav_itm_id = a.iav_itm_id\n AND b.iav_iat_id = a.iav_iat_id\n );\n\nto\nFROM item_attribute_value a\nJOIN item_attribute ON a.iav_iat_id = item_attribute.iat_id\nJOIN\n(\n SELECT max(b.iav_version) AS iav_version\n FROM item_attribute_value b\n GROUP BY iav_itm_id, iav_iat_id\n) q USING (iav_itm_id, iav_iat_id, iav_version);\n\nmerlin\n\n", "msg_date": "Wed, 10 Jan 2018 07:58:28 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View preformance oracle to postgresql" } ]
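To make the two rewrites suggested above concrete, here is a minimal sketch of the DISTINCT ON form, filled in with the column names quoted in the thread; it assumes, as noted above, that (iav_itm_id, iav_iat_id, iav_version) is unique, and it is written as a direct query rather than as the view definition:

SELECT DISTINCT ON (a.iav_itm_id, a.iav_iat_id)
       item_attribute.iat_id,
       item_attribute.iat_name,
       item_attribute.iat_type,
       a.iav_value,
       a.iav_approved
FROM item_attribute_value a
JOIN item_attribute ON a.iav_iat_id = item_attribute.iat_id
WHERE a.iav_itm_id = 2904107   -- sample id taken from the thread
ORDER BY a.iav_itm_id, a.iav_iat_id, a.iav_version DESC;

For the grouped-subquery form, note that the GROUP BY columns (iav_itm_id, iav_iat_id) also need to appear in the subquery's select list, otherwise the USING join cannot resolve them. Either rewrite should let the planner drive the scan from the iav_itm_id predicate instead of re-running the correlated max() subplan once per candidate row, which is where most of the 464 ms went in the EXPLAIN output above.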
[ { "msg_contents": "HI List\n\nI am trying to understand the following :\n\nhave 2  identical PG cluster on diff hosts, same postgresql.conf, same \ndb schema :\n\n  same tale DDL and row counts but different size ( 14GB diff  ), I run \nreindex and full vacuum analyze,  but I can not decrease the size of \nlarger table(50GB) to match the size in second\n\nPG cluster.\n\nany tips what can make this 2 tables to have diff size except the host ( \nsame OS and PG version 9.5.3)?\n\n\nThank you\n\n\n", "msg_date": "Tue, 9 Jan 2018 17:54:07 -0500", "msg_from": "ghiureai <[email protected]>", "msg_from_op": true, "msg_subject": "PG 9.5 2 tables same DDL with diff size" }, { "msg_contents": "-----Original Message-----\r\nFrom: ghiureai [mailto:[email protected]] \r\nSent: Tuesday, January 09, 2018 5:54 PM\r\nTo: [email protected]\r\nSubject: PG 9.5 2 tables same DDL with diff size\r\n\r\nHI List\r\n\r\nI am trying to understand the following :\r\n\r\nhave 2 identical PG cluster on diff hosts, same postgresql.conf, same db schema :\r\n\r\n same tale DDL and row counts but different size ( 14GB diff ), I run reindex and full vacuum analyze, but I can not decrease the size of larger table(50GB) to match the size in second\r\n\r\nPG cluster.\r\n\r\nany tips what can make this 2 tables to have diff size except the host ( same OS and PG version 9.5.3)?\r\n\r\n\r\nThank you\r\n________________________________________________________________________________________________\r\n\r\nTable is still bloated because of some long running transactions, which don't allow full vacuum to do its job?\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n", "msg_date": "Wed, 10 Jan 2018 15:14:09 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PG 9.5 2 tables same DDL with diff size" }, { "msg_contents": "I run full vacuum and reindex on largest table (50GB) while there was no\nserver activities so I assume no transaction was holding a lock on table\nsince the full vacuum was able to run, anything where I should consider\nlooking ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 10 Jan 2018 08:48:16 -0700 (MST)", "msg_from": "Isabella Ghiurea <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PG 9.5 2 tables same DDL with diff size" }, { "msg_contents": "\n-----Original Message-----\nFrom: Isabella Ghiurea [mailto:[email protected]] \nSent: Wednesday, January 10, 2018 10:48 AM\nTo: [email protected]\nSubject: RE: PG 9.5 2 tables same DDL with diff size\n\nAttention: This email was sent from someone outside of Perceptron. 
Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\n\n\nI run full vacuum and reindex on largest table (50GB) while there was no server activities so I assume no transaction was holding a lock on table since the full vacuum was able to run, anything where I should consider looking ?\n\n\n__________________________________________________________________________________________________________\n\nYes, in pg_stat_activity look for idle transactions that started long time ago.\nTo prevent vacuum from doing its job they don't need to lock the table, they could just prevent from cleaning \"old\" row versions.\n\nRegards,\nIgor Neyman\n\n\n", "msg_date": "Wed, 10 Jan 2018 16:10:13 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "RE: PG 9.5 2 tables same DDL with diff size" }, { "msg_contents": "\n\nThank you Igor, I was able to eliminate  the 15GB bloating for a 35GB \ntable size  , only after I restart the      Pg server with one single \nconnections and run a full vacuum for table.\n\n\nIsabella\nOn 10/01/18 11:10 AM, Igor Neyman wrote:\n> -----Original Message-----\n> From: Isabella Ghiurea [mailto:[email protected]]\n> Sent: Wednesday, January 10, 2018 10:48 AM\n> To: [email protected]\n> Subject: RE: PG 9.5 2 tables same DDL with diff size\n>\n> Attention: This email was sent from someone outside of Perceptron. Always exercise caution when opening attachments or clicking links from unknown senders or when receiving unexpected emails.\n>\n>\n> I run full vacuum and reindex on largest table (50GB) while there was no server activities so I assume no transaction was holding a lock on table since the full vacuum was able to run, anything where I should consider looking ?\n>\n>\n> __________________________________________________________________________________________________________\n>\n> Yes, in pg_stat_activity look for idle transactions that started long time ago.\n> To prevent vacuum from doing its job they don't need to lock the table, they could just prevent from cleaning \"old\" row versions.\n>\n> Regards,\n> Igor Neyman\n>\n\n\n", "msg_date": "Wed, 10 Jan 2018 13:49:41 -0500", "msg_from": "ghiureai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 9.5 2 tables same DDL with diff size" } ]
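For anyone hitting the same symptom, the checks described in this thread can be run directly from SQL; a minimal sketch, where 'mytable' is only a placeholder for the bloated table and the one-hour threshold is arbitrary:

-- Transactions that have been open for a long time; their snapshots (backend_xmin)
-- keep even VACUUM FULL from discarding row versions they might still see.
SELECT pid, usename, state, xact_start, backend_xmin, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour'
ORDER BY xact_start;

-- Orphaned prepared transactions have the same effect.
SELECT * FROM pg_prepared_xacts;

-- Rough bloat indicators and the on-disk size of the table in question.
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';

SELECT pg_size_pretty(pg_total_relation_size('mytable'));

Restarting the server, as was done here, clears such sessions as a side effect, which is consistent with VACUUM FULL only reclaiming the 15GB after the restart.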
[ { "msg_contents": "Hi Expert,\n\nAfter restarting PostgreSQL Server, I am unable to connect postgres from putty, I am getting error\n\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n\nWhile postgres is already running and also I am able to connect databases from PGAdmin tool but not from command prompt.\n\nSoftware-postgresql-9.3\nOs-Centos\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi Expert,\n \nAfter restarting PostgreSQL Server, I am unable to connect postgres from putty, I am getting error\n \npsql: could not connect to server: No such file or directory\n    Is the server running locally and accepting\n    connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n \nWhile postgres is already running and also I am able to connect databases from PGAdmin tool but not from command prompt.\n \nSoftware-postgresql-9.3\nOs-Centos\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Wed, 10 Jan 2018 09:01:24 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "Unable to connect Postgres using psql while postgres is already\n running." 
}, { "msg_contents": "Dinesh Chandra 12108 wrote:\n> After restarting PostgreSQL Server, I am unable to connect postgres from putty, I am getting error\n> \n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\n> \n> While postgres is already running and also I am able to connect databases from PGAdmin tool but not from command prompt.\n\nYou know that a local connection only works when you are logged in\non the database machine, right?\n\nIs your database listening on port 5432?\n\nConnect as user \"postgres\" and run the following queries:\n\n SHOW port;\n SHOW unix_socket_directories;\n\nThat will show the port and the directories where UNIX sockets are created.\n\nYou can use a socket directory name with the -h option of psql.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Wed, 10 Jan 2018 10:10:58 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable to connect Postgres using psql while postgres is already\n running." }, { "msg_contents": "Hi Laurenz Albe,\r\n\r\nThanks for your response.\r\n\r\nBut file \".s.PGSQL.5432\" does not exist .\r\nHow can I re-create this or any other option?\r\n\r\nRegards,\r\nDinesh\r\n\r\n\r\n-----Original Message-----\r\nFrom: Laurenz Albe [mailto:[email protected]]\r\nSent: 10 January, 2018 2:41 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>; [email protected]\r\nSubject: [EXTERNAL]Re: Unable to connect Postgres using psql while postgres is already running.\r\n\r\nDinesh Chandra 12108 wrote:\r\n> After restarting PostgreSQL Server, I am unable to connect postgres\r\n> from putty, I am getting error\r\n>\r\n> psql: could not connect to server: No such file or directory\r\n> Is the server running locally and accepting\r\n> connections on Unix domain socket \"/tmp/.s.PGSQL.5432\"?\r\n>\r\n> While postgres is already running and also I am able to connect databases from PGAdmin tool but not from command prompt.\r\n\r\nYou know that a local connection only works when you are logged in on the database machine, right?\r\n\r\nIs your database listening on port 5432?\r\n\r\nConnect as user \"postgres\" and run the following queries:\r\n\r\n SHOW port;\r\n SHOW unix_socket_directories;\r\n\r\nThat will show the port and the directories where UNIX sockets are created.\r\n\r\nYou can use a socket directory name with the -h option of psql.\r\n\r\nYours,\r\nLaurenz Albe\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n", "msg_date": "Wed, 10 Jan 2018 12:42:24 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Re: Unable to connect Postgres using psql while postgres is\n already\n running." } ]
[ { "msg_contents": "Hello,\n\nThis is my first question in postgres mailing list. If there are any\nmistakes, please don't mind.\n\nI am using PostgreSQL 9.4.4 on a Mac machine executing queries on postgres\nserver through the psql client.\n\nservicedesk=# select version();\n\n version\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.4.4 on x86_64-apple-darwin, compiled by\ni686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build\n5658) (LLVM build 2336.11.00), 64-bit\n(1 row)\n\n\nRepeatedly, I came across instances where any query when run for the first\ntime takes longer time to execute (nearly 2 second sometimes), but\nsubsequent execution of the same query is very fast (less than 20\nmilliseconds).\n\nThe tables involved in the query also have very less number of rows (less\nthan 50).\n\nOn running explain (analyze, buffers) got the following results.\n\n\n-- start --\n\nservicedesk=#\nservicedesk=# explain (analyze, buffers, verbose) SELECT COUNT(*) FROM\nChangeDetails LEFT JOIN SDOrganization AaaOrg ON\nChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\n\n\n\n\n\n QUERY 
PLAN\n\n\n\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=13.25..13.26 rows=1 width=160) (actual time=0.018..0.018\nrows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=1\n -> Seq Scan on public.changedetails (cost=0.00..12.60 rows=260\nwidth=160) (actual time=0.007..0.008 rows=2 loops=1)\n Output: changedetails.changeid, changedetails.initiatorid,\nchangedetails.technicianid, changedetails.stageid,\nchangedetails.priorityid, changedetails.categoryid,\nchangedetails.subcategoryid, changedetails.itemid,\nchangedetails.appr_statusid, changedetails.changetypeid,\nchangedetails.urgencyid, changedetails.title, changedetails.description,\nchangedetails.createdtime, changedetails.scheduledstarttime,\nchangedetails.scheduledendtime, changedetails.completedtime,\nchangedetails.notespresent, changedetails.siteid, changedetails.groupid,\nchangedetails.templateid, changedetails.wfid, changedetails.wfstageid,\nchangedetails.wfstatusid, changedetails.isemergency,\nchangedetails.isretrospective, changedetails.reasonforchangeid,\nchangedetails.closurecodeid, changedetails.changemanagerid,\nchangedetails.riskid, changedetails.impactid, changedetails.slaid,\nchangedetails.isoverdue\n Buffers: shared hit=1\n Planning time: 468.239 ms\n Execution time: 0.104 ms\n(8 rows)\n\n\nservicedesk=#\nservicedesk=# explain (analyze, buffers, verbose) SELECT COUNT(*) FROM\nChangeDetails LEFT JOIN SDOrganization AaaOrg ON\nChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT 
JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\n\n\n\n\n\n QUERY PLAN\n\n\n\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=13.25..13.26 rows=1 width=160) (actual time=0.009..0.009\nrows=1 loops=1)\n Output: count(*)\n Buffers: shared hit=1\n -> Seq Scan on public.changedetails (cost=0.00..12.60 rows=260\nwidth=160) (actual time=0.005..0.005 rows=2 loops=1)\n Output: changedetails.changeid, changedetails.initiatorid,\nchangedetails.technicianid, changedetails.stageid,\nchangedetails.priorityid, changedetails.categoryid,\nchangedetails.subcategoryid, changedetails.itemid,\nchangedetails.appr_statusid, changedetails.changetypeid,\nchangedetails.urgencyid, changedetails.title, changedetails.description,\nchangedetails.createdtime, changedetails.scheduledstarttime,\nchangedetails.scheduledendtime, changedetails.completedtime,\nchangedetails.notespresent, changedetails.siteid, changedetails.groupid,\nchangedetails.templateid, changedetails.wfid, changedetails.wfstageid,\nchangedetails.wfstatusid, changedetails.isemergency,\nchangedetails.isretrospective, changedetails.reasonforchangeid,\nchangedetails.closurecodeid, changedetails.changemanagerid,\nchangedetails.riskid, changedetails.impactid, changedetails.slaid,\nchangedetails.isoverdue\n Buffers: shared hit=1\n Planning time: 1.058 ms\n Execution time: 0.066 ms\n(8 rows)\n\n\n-- end --\n\n\n From the above result, it is clear that the query execution is very fast\nbut planning time is high in the first run (468.239 ms).\n\nI am not using prepared statements. Postgres documentation and previous\nquestions in the pgsql-performance mailing list mention that the query plan\nis cached only when prepared statements are used.\n\nhttps://www.postgresql.org/message-id/15600.1346885470%40sss.pgh.pa.us\n\nIn the above thread Tom Lane mentions that the plan is never cached for raw\nqueries. Yet, this is exactly what seems to be happening in my case. Am I\nmissing something? 
Please let me know how I can make sure the query\nexecution for the first time is fast too.\n\n\nThanks and regards,\nNanda\n\nHello,This is my first question in postgres mailing list. If there are any mistakes, please don't mind.I am using PostgreSQL 9.4.4 on a Mac machine executing queries on postgres server through the psql client.servicedesk=# select version();                                                                              version                                                                               -------------------------------------------------------------------------------------------------------------------------------------------------------------------- PostgreSQL 9.4.4 on x86_64-apple-darwin, compiled by i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00), 64-bit(1 row)Repeatedly, I came across instances where any query when run for the first time takes longer time to execute (nearly 2 second sometimes), but subsequent execution of the same query is very fast (less than 20 milliseconds).The tables involved in the query also have very less number of rows (less than 50).On running explain (analyze, buffers) got the following results.-- start --servicedesk=# servicedesk=# explain (analyze, buffers, verbose) SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;                                                                                                                                                                                                                                                     
                                                                                                                                                                                                 QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                                      ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=13.25..13.26 rows=1 width=160) (actual time=0.018..0.018 rows=1 loops=1)   Output: count(*)   Buffers: shared hit=1   ->  Seq Scan on public.changedetails  (cost=0.00..12.60 rows=260 width=160) (actual time=0.007..0.008 rows=2 loops=1)         Output: changedetails.changeid, changedetails.initiatorid, changedetails.technicianid, changedetails.stageid, changedetails.priorityid, changedetails.categoryid, changedetails.subcategoryid, changedetails.itemid, changedetails.appr_statusid, changedetails.changetypeid, changedetails.urgencyid, changedetails.title, changedetails.description, changedetails.createdtime, changedetails.scheduledstarttime, changedetails.scheduledendtime, changedetails.completedtime, changedetails.notespresent, changedetails.siteid, changedetails.groupid, changedetails.templateid, changedetails.wfid, changedetails.wfstageid, changedetails.wfstatusid, changedetails.isemergency, changedetails.isretrospective, changedetails.reasonforchangeid, changedetails.closurecodeid, changedetails.changemanagerid, changedetails.riskid, changedetails.impactid, changedetails.slaid, changedetails.isoverdue         Buffers: shared hit=1 Planning time: 468.239 ms Execution time: 0.104 ms(8 rows)servicedesk=# servicedesk=# explain (analyze, buffers, verbose) SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT 
JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;                                                                                                                                                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                                                                                                                                                      ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=13.25..13.26 rows=1 width=160) (actual time=0.009..0.009 rows=1 loops=1)   Output: count(*)   Buffers: shared hit=1   ->  Seq Scan on public.changedetails  (cost=0.00..12.60 rows=260 width=160) (actual time=0.005..0.005 rows=2 loops=1)         Output: changedetails.changeid, changedetails.initiatorid, changedetails.technicianid, changedetails.stageid, changedetails.priorityid, changedetails.categoryid, changedetails.subcategoryid, changedetails.itemid, changedetails.appr_statusid, 
changedetails.changetypeid, changedetails.urgencyid, changedetails.title, changedetails.description, changedetails.createdtime, changedetails.scheduledstarttime, changedetails.scheduledendtime, changedetails.completedtime, changedetails.notespresent, changedetails.siteid, changedetails.groupid, changedetails.templateid, changedetails.wfid, changedetails.wfstageid, changedetails.wfstatusid, changedetails.isemergency, changedetails.isretrospective, changedetails.reasonforchangeid, changedetails.closurecodeid, changedetails.changemanagerid, changedetails.riskid, changedetails.impactid, changedetails.slaid, changedetails.isoverdue         Buffers: shared hit=1 Planning time: 1.058 ms Execution time: 0.066 ms(8 rows)-- end --From the above result, it is clear that the query execution is very fast but planning time is high in the first run (468.239 ms).I am not using prepared statements. Postgres documentation and previous questions in the pgsql-performance mailing list mention that the query plan is cached only when prepared statements are used.https://www.postgresql.org/message-id/15600.1346885470%40sss.pgh.pa.usIn the above thread Tom Lane mentions that the plan is never cached for raw queries. Yet, this is exactly what seems to be happening in my case. Am I missing something? Please let me know how I can make sure the query execution for the first time is fast too.Thanks and regards,Nanda", "msg_date": "Wed, 10 Jan 2018 17:29:50 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Query is slow when run for first time; subsequent execution is fast" }, { "msg_contents": "On Wed, Jan 10, 2018 at 3:59 AM, Nandakumar M <[email protected]> wrote:\n\n>\n> I am not using prepared statements. Postgres documentation and previous\n> questions in the pgsql-performance mailing list mention that the query plan\n> is cached only when prepared statements are used.\n>\n> https://www.postgresql.org/message-id/15600.1346885470%40sss.pgh.pa.us\n>\n> In the above thread Tom Lane mentions that the plan is never cached for\n> raw queries. Yet, this is exactly what seems to be happening in my case. Am\n> I missing something?\n>\n\nThe query plan itself is not cached, but all the metadata about the (large\nnumber) of tables used in the query is cached. Apparently reading/parsing\nthat data is the slow step, not coming up with the actual plan.\n\n> Please let me know how I can make sure the query execution for the first\ntime is fast too.\n\nDon't keep closing and reopening connections. Use a connection pooler\n(pgbouncer, pgpool, whatever pooler is built into your\nlanguage/library/driver, etc.) if necessary to accomplish this.\n\nCheers,\n\nJeff\n\nOn Wed, Jan 10, 2018 at 3:59 AM, Nandakumar M <[email protected]> wrote:I am not using prepared statements. Postgres documentation and previous questions in the pgsql-performance mailing list mention that the query plan is cached only when prepared statements are used.https://www.postgresql.org/message-id/15600.1346885470%40sss.pgh.pa.usIn the above thread Tom Lane mentions that the plan is never cached for raw queries. Yet, this is exactly what seems to be happening in my case. Am I missing something? The query plan itself is not cached, but all the metadata about the (large number) of tables used in the query is cached.  Apparently reading/parsing that data is the slow step, not coming up with the actual plan. 
> Please let me know how I can make sure the query execution for the first time is fast too.Don't keep closing and reopening connections.  Use a connection pooler (pgbouncer, pgpool, whatever pooler is built into your language/library/driver, etc.) if necessary to accomplish this.Cheers,Jeff", "msg_date": "Wed, 10 Jan 2018 11:34:17 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast" }, { "msg_contents": "Hello Jeff,\n\nThanks for the insights.\n\n>Don't keep closing and reopening connections.\n\nEven if I close a connection and open a new one and execute the same query,\nthe planning time is considerably less than the first time. Only when I\nrestart the Postgres server then I face high planning time again.\n\n>The query plan itself is not cached, but all the metadata about the (large\nnumber) of tables used in the query is cached. Apparently reading/parsing\nthat data is the slow step, not coming up with the actual plan.\n\nI enabled logging for parser, planner etc in postgresql.conf and re run the\nqueries. Following is the logs - I am not sure exactly how this should be\nread, but the major difference in elapsed time seems to be in PLANNER\nSTATISTICS section.\n\n-- start --\n\n1. First run\n\nLOG: PARSER STATISTICS\nDETAIL: ! system usage stats:\n! 0.000482 elapsed 0.000356 user 0.000127 system sec\n! [0.004921 user 0.004824 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/102 [0/1076] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n! 0/0 [8/11] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition 
ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN\nSDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: PARSE ANALYSIS STATISTICS\nDETAIL: ! system usage stats:\n! 0.030012 elapsed 0.006251 user 0.006894 system sec\n! [0.011270 user 0.011777 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/1036 [0/2126] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n! 
154/5 [163/16] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: REWRITER STATISTICS\nDETAIL: ! system usage stats:\n! 0.000058 elapsed 0.000052 user 0.000006 system sec\n! [0.011350 user 0.011793 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/6 [0/2132] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n! 
0/0 [163/16] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: PLANNER STATISTICS\nDETAIL: ! system usage stats:\n! 0.326018 elapsed 0.013452 user 0.009604 system sec\n! [0.024821 user 0.021400 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/531 [0/2663] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n! 
51/71 [214/87] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000047 elapsed 0.000026 user 0.000019 system sec\n! [0.024961 user 0.021461 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/13 [0/2709] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n! 
0/0 [214/87] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: duration: 357.192 ms\n\n\n2. Second run\n\n\nLOG: PARSER STATISTICS\nDETAIL: ! system usage stats:\n! 0.000169 elapsed 0.000161 user 0.000018 system sec\n! [0.025308 user 0.021656 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/4 [0/2716] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n! 
0/0 [215/87] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN\nSDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON 
ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: PARSE ANALYSIS STATISTICS\nDETAIL: ! system usage stats:\n! 0.002665 elapsed 0.001974 user 0.000196 system sec\n! [0.027325 user 0.021866 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/17 [0/2734] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n! 0/56 [215/144] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: REWRITER STATISTICS\nDETAIL: ! system usage stats:\n! 0.000068 elapsed 0.000068 user 0.000000 system sec\n! [0.027425 user 0.021876 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n! 
0/0 [215/144] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: PLANNER STATISTICS\nDETAIL: ! system usage stats:\n! 0.001025 elapsed 0.000917 user 0.000105 system sec\n! [0.028363 user 0.021986 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n! 
0/1 [215/145] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000016 elapsed 0.000016 user 0.000000 system sec\n! [0.028449 user 0.021993 sys total]\n! 0/0 [0/1] filesystem blocks in/out\n! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n! 
0/0 [215/145] voluntary/involuntary context switches\nSTATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\nAaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\nApprovalStatusDefinition ON\nChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN\nCategoryDefinition ON\nChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\nChange_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\nChange_StageDefinition ON\nChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN\nChange_StatusDefinition ON\nChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN\nAaaUser ChangeManager ON\nChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\nChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT\nJOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\nLEFT JOIN ChangeResolution ON\nChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate\nON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN\nChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\nLEFT JOIN Change_ClosureCode ON\nChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition\nON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN\nChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN\nImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT\nJOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\nPriorityDefinition ON\nChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN\nQueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN\nRiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN\nStageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN\nSubCategoryDefinition ON\nChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN\nUrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID\nLEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;\nLOG: duration: 4.277 ms\n\n\n-- end --\n\na. If someone could interpret what exactly the PLANNER STATISTICS section\nmeans (to identify the exact bottleneck) it would be great!\n\nb. Sometimes, first execution of a query takes nearly 2 seconds of planning\ntime. This seems to be too high even for the first run of the query. Will\nsome configuration change help speed up the planning time? Also, is there\nany way to pre warm the caches so that the meta data that is required for\nthe query planning is available in cache before hand?\n\nThanks and regards,\nNanda\n\n
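A minimal sketch of two follow-ups, assuming PostgreSQL 9.4 or later and the contrib module pg_prewarm (both are assumptions, not verified on this instance): prefixing the query above with EXPLAIN (ANALYZE) prints separate Planning time and Execution time lines, which shows directly how much of a run is spent in the planner, and after a server restart the catalog relations that planning reads for column metadata and statistics can be loaded back into shared buffers before the first real query arrives:\n\nCREATE EXTENSION pg_prewarm;\n-- warm the catalogs consulted during parse analysis and planning\nSELECT pg_prewarm('pg_class');\nSELECT pg_prewarm('pg_attribute');\nSELECT pg_prewarm('pg_statistic');\n\nRunning the query (or just EXPLAIN-ing it) once from a warm-up connection after startup should have a similar effect, since the slow step appears to be the first cold read of this metadata.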
", "msg_date": "Fri, 12 Jan 2018 13:33:32 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast" }, { "msg_contents": "2018-01-12 9:03 GMT+01:00 Nandakumar M <[email protected]>:\n\n> Hello Jeff,\n>\n> Thanks for the insights.\n>\n> >Don't keep closing and reopening connections.\n>\n> Even if I close a connection and open a new one and execute the same\n> query, the planning time is considerably less than the first time. Only\n> when I restart the Postgres server then I face high planning time again.\n>\n> >The query plan itself is not cached, but all the metadata about the\n> (large number) of tables used in the query is cached.  Apparently\n> reading/parsing that data is the slow step, not coming up with the actual\n> plan.\n>\n> I enabled logging for parser, planner etc in postgresql.conf and re run\n> the queries. 
Following is the logs - I am not sure exactly how this should\n> be read, but the major difference in elapsed time seems to be in PLANNER\n> STATISTICS section.\n>\n> -- start --\n>\n> 1. First run\n>\n> LOG: PARSER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000482 elapsed 0.000356 user 0.000127 system sec\n> ! [0.004921 user 0.004824 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/102 [0/1076] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n> ! 0/0 [8/11] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN\n> SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN 
ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: PARSE ANALYSIS STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.030012 elapsed 0.006251 user 0.006894 system sec\n> ! [0.011270 user 0.011777 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/1036 [0/2126] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n> ! 154/5 [163/16] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> 
SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: REWRITER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000058 elapsed 0.000052 user 0.000006 system sec\n> ! [0.011350 user 0.011793 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/6 [0/2132] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n> ! 0/0 [163/16] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: PLANNER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.326018 elapsed 0.013452 user 0.009604 system sec\n> ! [0.024821 user 0.021400 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/531 [0/2663] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n> ! 
51/71 [214/87] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000047 elapsed 0.000026 user 0.000019 system sec\n> ! [0.024961 user 0.021461 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/13 [0/2709] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent\n> ! 
0/0 [214/87] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: duration: 357.192 ms\n>\n>\n> 2. Second run\n>\n>\n> LOG: PARSER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000169 elapsed 0.000161 user 0.000018 system sec\n> ! [0.025308 user 0.021656 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/4 [0/2716] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n> ! 
0/0 [215/87] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN\n> SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN 
ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: PARSE ANALYSIS STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.002665 elapsed 0.001974 user 0.000196 system sec\n> ! [0.027325 user 0.021866 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/17 [0/2734] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n> ! 0/56 [215/144] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: REWRITER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000068 elapsed 0.000068 user 0.000000 system sec\n> ! [0.027425 user 0.021876 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n> ! 
0/0 [215/144] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: PLANNER STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.001025 elapsed 0.000917 user 0.000105 system sec\n> ! [0.028363 user 0.021986 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n> ! 
0/1 [215/145] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000016 elapsed 0.000016 user 0.000000 system sec\n> ! [0.028449 user 0.021993 sys total]\n> ! 0/0 [0/1] filesystem blocks in/out\n> ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent\n> ! 
0/0 [215/145] voluntary/involuntary context switches\n> STATEMENT: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization\n> AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN\n> ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=\n> ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON\n> ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN\n> Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN\n> Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID\n> LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=\n> Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON\n> ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser\n> ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID\n> LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID\n> LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID\n> LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID\n> LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID\n> LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID\n> LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=\n> ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON\n> ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON\n> ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition\n> ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN\n> PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID\n> LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID\n> LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID\n> LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID\n> LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=\n> SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON\n> ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON\n> ChangeDetails.INITIATORID=SDUser.USERID;\n> LOG: duration: 4.277 ms\n>\n>\n> -- end --\n>\n> a. If someone could interpret what exactly the PLANNER STATISTICS section\n> means (to identify the exact bottleneck) it would be great!\n>\n> b. Sometimes, first execution of a query takes nearly 2 seconds of\n> planning time. This seems to be too high even for the first run of the\n> query. Will some configuration change help speed up the planning time?\n> Also, is there any way to pre warm the caches so that the meta data that is\n> required for the query planning is available in cache before hand?\n>\n\nmaybe some your indexes and some system tables are bloated. Try you run\nVACUUM FULL ANALYZE\n\nRegards\n\nPavel\n\n\n> Thanks and regards,\n> Nanda\n>\n\n2018-01-12 9:03 GMT+01:00 Nandakumar M <[email protected]>:Hello Jeff,Thanks for the insights.>Don't keep closing and reopening connections.Even if I close a connection and open a new one and execute the same query, the planning time is considerably less than the first time. Only when I restart the Postgres server then I face high planning time again.>The query plan itself is not cached, but all the metadata about the (large number) of tables used in the query is cached.  Apparently reading/parsing that data is the slow step, not coming up with the actual plan.I enabled logging for parser, planner etc in postgresql.conf and re run the queries. 
Following is the logs - I am not sure exactly how this should be read, but the major difference in elapsed time seems to be in PLANNER STATISTICS section.-- start --1. First runLOG:  PARSER STATISTICSDETAIL:  ! system usage stats: ! 0.000482 elapsed 0.000356 user 0.000127 system sec ! [0.004921 user 0.004824 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/102 [0/1076] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent ! 0/0 [8/11] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON 
ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  PARSE ANALYSIS STATISTICSDETAIL:  ! system usage stats: ! 0.030012 elapsed 0.006251 user 0.006894 system sec ! [0.011270 user 0.011777 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/1036 [0/2126] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent ! 154/5 [163/16] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  REWRITER STATISTICSDETAIL:  ! system usage stats: ! 0.000058 elapsed 0.000052 user 0.000006 system sec ! [0.011350 user 0.011793 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 
0/6 [0/2132] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent ! 0/0 [163/16] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  PLANNER STATISTICSDETAIL:  ! system usage stats: ! 0.326018 elapsed 0.013452 user 0.009604 system sec ! [0.024821 user 0.021400 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/531 [0/2663] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent ! 
51/71 [214/87] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  EXECUTOR STATISTICSDETAIL:  ! system usage stats: ! 0.000047 elapsed 0.000026 user 0.000019 system sec ! [0.024961 user 0.021461 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/13 [0/2709] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [3/5] messages rcvd/sent ! 
0/0 [214/87] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  duration: 357.192 ms2. Second runLOG:  PARSER STATISTICSDETAIL:  ! system usage stats: ! 0.000169 elapsed 0.000161 user 0.000018 system sec ! [0.025308 user 0.021656 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/4 [0/2716] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent ! 
0/0 [215/87] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  statement: SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN 
PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  PARSE ANALYSIS STATISTICSDETAIL:  ! system usage stats: ! 0.002665 elapsed 0.001974 user 0.000196 system sec ! [0.027325 user 0.021866 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/17 [0/2734] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent ! 0/56 [215/144] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  REWRITER STATISTICSDETAIL:  ! system usage stats: ! 0.000068 elapsed 0.000068 user 0.000000 system sec ! [0.027425 user 0.021876 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent ! 
0/0 [215/144] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  PLANNER STATISTICSDETAIL:  ! system usage stats: ! 0.001025 elapsed 0.000917 user 0.000105 system sec ! [0.028363 user 0.021986 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent ! 
0/1 [215/145] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  EXECUTOR STATISTICSDETAIL:  ! system usage stats: ! 0.000016 elapsed 0.000016 user 0.000000 system sec ! [0.028449 user 0.021993 sys total] ! 0/0 [0/1] filesystem blocks in/out ! 0/0 [0/2734] page faults/reclaims, 0 [0] swaps ! 0 [0] signals rcvd, 0/0 [4/21] messages rcvd/sent ! 
0/0 [215/145] voluntary/involuntary context switchesSTATEMENT:  SELECT COUNT(*) FROM ChangeDetails LEFT JOIN SDOrganization AaaOrg ON ChangeDetails.SITEID=AaaOrg.ORG_ID LEFT JOIN ApprovalStatusDefinition ON ChangeDetails.APPR_STATUSID=ApprovalStatusDefinition.STATUSID LEFT JOIN CategoryDefinition ON ChangeDetails.CATEGORYID=CategoryDefinition.CATEGORYID LEFT JOIN Change_Fields ON ChangeDetails.CHANGEID=Change_Fields.CHANGEID LEFT JOIN Change_StageDefinition ON ChangeDetails.WFSTAGEID=Change_StageDefinition.WFSTAGEID LEFT JOIN Change_StatusDefinition ON ChangeDetails.WFSTATUSID=Change_StatusDefinition.WFSTATUSID LEFT JOIN AaaUser ChangeManager ON ChangeDetails.CHANGEMANAGERID=ChangeManager.USER_ID LEFT JOIN AaaUser ChangeOriginator ON ChangeDetails.INITIATORID=ChangeOriginator.USER_ID LEFT JOIN AaaUser ChangeOwner ON ChangeDetails.TECHNICIANID=ChangeOwner.USER_ID LEFT JOIN ChangeResolution ON ChangeDetails.CHANGEID=ChangeResolution.CHANGEID LEFT JOIN ChangeTemplate ON ChangeDetails.TEMPLATEID=ChangeTemplate.TEMPLATEID LEFT JOIN ChangeToClosureCode ON ChangeDetails.CHANGEID=ChangeToClosureCode.CHANGEID LEFT JOIN Change_ClosureCode ON ChangeToClosureCode.ID=Change_ClosureCode.ID LEFT JOIN ChangeTypeDefinition ON ChangeDetails.CHANGETYPEID=ChangeTypeDefinition.CHANGETYPEID LEFT JOIN ChangeWF_Definition ON ChangeDetails.WFID=ChangeWF_Definition.ID LEFT JOIN ImpactDefinition ON ChangeDetails.IMPACTID=ImpactDefinition.IMPACTID LEFT JOIN ItemDefinition ON ChangeDetails.ITEMID=ItemDefinition.ITEMID LEFT JOIN PriorityDefinition ON ChangeDetails.PRIORITYID=PriorityDefinition.PRIORITYID LEFT JOIN QueueDefinition ON ChangeDetails.GROUPID=QueueDefinition.QUEUEID LEFT JOIN RiskDefinition ON ChangeDetails.RISKID=RiskDefinition.RISKID LEFT JOIN StageDefinition ON ChangeDetails.STAGEID=StageDefinition.STAGEID LEFT JOIN SubCategoryDefinition ON ChangeDetails.SUBCATEGORYID=SubCategoryDefinition.SUBCATEGORYID LEFT JOIN UrgencyDefinition ON ChangeDetails.URGENCYID=UrgencyDefinition.URGENCYID LEFT JOIN SDUser ON ChangeDetails.INITIATORID=SDUser.USERID;LOG:  duration: 4.277 ms-- end --a. If someone could interpret what exactly the PLANNER STATISTICS section means (to identify the exact bottleneck) it would be great!b. Sometimes, first execution of a query takes nearly 2 seconds of planning time. This seems to be too high even for the first run of the query. Will some configuration change help speed up the planning time? Also, is there any way to pre warm the caches so that the meta data that is required for the query planning is available in cache before hand?maybe some your indexes and some system tables are bloated. Try you run VACUUM FULL ANALYZERegardsPavelThanks and regards,Nanda", "msg_date": "Fri, 12 Jan 2018 11:04:42 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast" }, { "msg_contents": "Missed to have mailing list in to address.. forwarding now.\n\n---------- Forwarded message ----------\nFrom: \"Nandakumar M\" <[email protected]>\nDate: 15 Jan 2018 12:16\nSubject: Re: Query is slow when run for first time; subsequent execution is\nfast\nTo: \"Pavel Stehule\" <[email protected]>\nCc:\n\nHi,\n\nOn Fri, Jan 12, 2018 at 3:34 PM, Pavel Stehule <[email protected]>\nwrote:\n\n>\n> >> maybe some your indexes and some system tables are bloated. Try you run\n> VACUUM FULL ANALYZE\n>\n\nTried this suggestion. 
Planning time gets reduced slightly but it is still\nway higher on the first run compared to subsequent runs of the same query.\n\nRegards,\nNanda\n\nMissed to have mailing list in to address.. forwarding now.---------- Forwarded message ----------From: \"Nandakumar M\" <[email protected]>Date: 15 Jan 2018 12:16Subject: Re: Query is slow when run for first time; subsequent execution is fastTo: \"Pavel Stehule\" <[email protected]>Cc: Hi,On Fri, Jan 12, 2018 at 3:34 PM, Pavel Stehule <[email protected]> wrote:>> maybe some your indexes and some system tables are bloated. Try you run VACUUM FULL ANALYZE Tried this suggestion. Planning time gets reduced slightly but it is still way higher on the first run compared to subsequent runs of the same query.Regards,Nanda", "msg_date": "Tue, 16 Jan 2018 17:16:03 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Re: Query is slow when run for first time; subsequent execution\n is fast" }, { "msg_contents": "Nandakumar M schrieb am 12.01.2018 um 09:03:\n> Even if I close a connection and open a new one and execute the same\n> query, the planning time is considerably less than the first time.\n> Only when I restart the Postgres server then I face high planning\n> time again.\n\nYes, because the data is cached by Postgres (\"shared_buffers\") and the filesystem.\n\n\n\n", "msg_date": "Tue, 16 Jan 2018 13:20:39 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when run for first time; subsequent execution is\n fast" }, { "msg_contents": "On Fri, Jan 12, 2018 at 12:03 AM, Nandakumar M <[email protected]> wrote:\n\n> Hello Jeff,\n>\n> Thanks for the insights.\n>\n> >Don't keep closing and reopening connections.\n>\n> Even if I close a connection and open a new one and execute the same\n> query, the planning time is considerably less than the first time. Only\n> when I restart the Postgres server then I face high planning time again.\n>\n\nOh. I've not seen that before. But then again I don't often restart my\nserver and then immediately run very large queries with a stringent time\ndeadline.\n\nYou can try pg_prewarm, on pg_statistic table and its index. But I'd\nprobably just put an entry in my db startup script to run this query\nimmediately after startng the server, and let the query warm the cache\nitself.\n\nWhy do you restart your database often enough for this to be an issue?\n\nCheers,\n\nJeff\n\nOn Fri, Jan 12, 2018 at 12:03 AM, Nandakumar M <[email protected]> wrote:Hello Jeff,Thanks for the insights.>Don't keep closing and reopening connections.Even if I close a connection and open a new one and execute the same query, the planning time is considerably less than the first time. Only when I restart the Postgres server then I face high planning time again.Oh.  I've not seen that before.  But then again I don't often restart my server and then immediately run very large queries with a stringent time deadline. You can try pg_prewarm, on pg_statistic table and its index.  
But I'd probably just put an entry in my db startup script to run this query immediately after startng the server, and let the query warm the cache itself.Why do you restart your database often enough for this to be an issue?Cheers,Jeff", "msg_date": "Tue, 16 Jan 2018 21:18:25 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast" }, { "msg_contents": "Hello,\n\n \n\nFWIW, I do have the same issue.\n\nUnfortunately our application is running on a standard laptop/desktop computers, not dedicated servers.\n\nRestarting the computer leads to a restart of the database server, which slow down all queries for several minutes.\n\n \n\nAre you on Windows or Linux? I’m on Windows and wondering if the issue is the same on Linux?\n\n \n\nBR,\n\nGuillaume\n\n \n\n \n\nDe : Jeff Janes [mailto:[email protected]] \nEnvoyé : mercredi 17 janvier 2018 06:18\nÀ : Nandakumar M\nCc : pgsql-performa.\nObjet : Re: Query is slow when run for first time; subsequent execution is fast\n\n \n\nOn Fri, Jan 12, 2018 at 12:03 AM, Nandakumar M <[email protected] <mailto:[email protected]> > wrote:\n\nHello Jeff,\n\n \n\nThanks for the insights.\n\n \n\n>Don't keep closing and reopening connections.\n\n \n\nEven if I close a connection and open a new one and execute the same query, the planning time is considerably less than the first time. Only when I restart the Postgres server then I face high planning time again.\n\n \n\nOh. I've not seen that before. But then again I don't often restart my server and then immediately run very large queries with a stringent time deadline.\n\n \n\nYou can try pg_prewarm, on pg_statistic table and its index. But I'd probably just put an entry in my db startup script to run this query immediately after startng the server, and let the query warm the cache itself.\n\n \n\nWhy do you restart your database often enough for this to be an issue?\n\n \n\nCheers,\n\n \n\nJeff\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.", "msg_date": "Wed, 17 Jan 2018 07:25:10 +0000", "msg_from": "\"POUSSEL, Guillaume\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Query is slow when run for first time; subsequent execution is\n fast" }, { "msg_contents": "Hi,\n\nOn 17 Jan 2018 12:55, \"POUSSEL, Guillaume\" <[email protected]>\nwrote:\n\nAre you on Windows or Linux? I’m on Windows and wondering if the issue is\nthe same on Linux?\n\n\nI have experienced this on Mac and Linux machines.\n\nYou can try pg_prewarm, on pg_statistic table and its index. But I'd\nprobably just put an entry in my db startup script to run this query\nimmediately after startng the server, and let the query warm the cache\nitself.\n\nI will try this suggestion and get back on the thread. Is pg_statistic the\nonly table to be pre cached? Pls let me know if any other table/index needs\nto be pre warmed.\n\nBtw, I don't running a \"select * from pg_statistic\" will fill the shared\nbuffer. Only 256 kb of data will be cached during sequential scans. 
I will\ntry pg_prewarm\n\nWhy do you restart your database often\n\nPostgres is bundled with our application and deployed by our client.\nStarting / stopping the server is not under my control.\n\nRegards,\nNanda\n\nHi,On 17 Jan 2018 12:55, \"POUSSEL, Guillaume\" <[email protected]> wrote:Are you on Windows or Linux? I’m on Windows and wondering if the issue is the same on Linux?I have experienced this on Mac and Linux machines.You can try pg_prewarm, on pg_statistic table and its index.  But I'd probably just put an entry in my db startup script to run this query immediately after startng the server, and let the query warm the cache itself.I will try this suggestion and get back on the thread. Is pg_statistic the only table to be pre cached? Pls let me know if any other table/index needs to be pre warmed.Btw, I don't running a \"select * from pg_statistic\" will fill the shared buffer. Only 256 kb of data will be cached during sequential scans. I will try pg_prewarmWhy do you restart your database oftenPostgres is bundled with our application and deployed by our client. Starting / stopping the server is not under my control.Regards,Nanda", "msg_date": "Wed, 17 Jan 2018 15:39:36 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Query is slow when run for first time;\n subsequent execution is fast" }, { "msg_contents": "On Tue, Jan 16, 2018 at 09:18:25PM -0800, Jeff Janes wrote:\n> Oh. I've not seen that before. But then again I don't often restart my\n> server and then immediately run very large queries with a stringent time\n> deadline.\n> \n> You can try pg_prewarm, on pg_statistic table and its index. But I'd\n> probably just put an entry in my db startup script to run this query\n> immediately after startng the server, and let the query warm the cache\n> itself.\n> \n> Why do you restart your database often enough for this to be an issue?\n\nAnother thing that you could use here is pg_buffercache which offers a\nway to look at the Postgres shared buffer contents in real-time:\nhttps://www.postgresql.org/docs/current/static/pgbuffercache.html\n\nAs Jeff says, pg_prewarm is a good tool for such cases to avoid any kind\nof warmup period when a server starts..\n--\nMichael", "msg_date": "Thu, 18 Jan 2018 10:55:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when run for first time; subsequent execution is\n fast" }, { "msg_contents": "Hi,\n\nI tried pg_prewarm as suggested by Jeff Janes and it works - thanks a lot\nJeff. Now the query planning is fast on the first execution.\n\nHere is the list of tables that needed to be pre warmed (or you could just\npre warm all the 'pg_%' tables. 
:-) ).\n\nselect pg_prewarm('pg_statistic');\nselect pg_prewarm('pg_trigger_tgrelid_tgname_index');\nselect pg_prewarm('pg_trigger');\nselect pg_prewarm('pg_statistic_relid_att_inh_index');\nselect pg_prewarm('pg_index_indrelid_index');\nselect pg_prewarm('pg_index_indexrelid_index');\nselect pg_prewarm('pg_index');\nselect pg_prewarm('pg_constraint_conrelid_index');\nselect pg_prewarm('pg_constraint');\nselect pg_prewarm('pg_class_relname_nsp_index');\nselect pg_prewarm('pg_class_oid_index');\nselect pg_prewarm('pg_attribute_relid_attnum_index');\nselect pg_prewarm('pg_attribute');\nselect pg_prewarm('pg_attrdef_adrelid_adnum_index');\nselect pg_prewarm('pg_attrdef');\nselect pg_prewarm('pg_amproc_fam_proc_index');\nselect pg_prewarm('pg_namespace_oid_index');\n\nRegards,\nNanda\n\nOn 18 Jan 2018 07:25, \"Michael Paquier\" <[email protected]> wrote:\n\nOn Tue, Jan 16, 2018 at 09:18:25PM -0800, Jeff Janes wrote:\n> Oh. I've not seen that before. But then again I don't often restart my\n> server and then immediately run very large queries with a stringent time\n> deadline.\n>\n> You can try pg_prewarm, on pg_statistic table and its index. But I'd\n> probably just put an entry in my db startup script to run this query\n> immediately after startng the server, and let the query warm the cache\n> itself.\n>\n> Why do you restart your database often enough for this to be an issue?\n\nAnother thing that you could use here is pg_buffercache which offers a\nway to look at the Postgres shared buffer contents in real-time:\nhttps://www.postgresql.org/docs/current/static/pgbuffercache.html\n\nAs Jeff says, pg_prewarm is a good tool for such cases to avoid any kind\nof warmup period when a server starts..\n--\nMichael\n\nHi,I tried pg_prewarm as suggested by Jeff Janes and it works - thanks a lot Jeff. Now the query planning is fast on the first execution.Here is the list of tables that needed to be pre warmed (or you could just pre warm all the 'pg_%' tables. :-) ).select pg_prewarm('pg_statistic');select pg_prewarm('pg_trigger_tgrelid_tgname_index');select pg_prewarm('pg_trigger');select pg_prewarm('pg_statistic_relid_att_inh_index');select pg_prewarm('pg_index_indrelid_index');select pg_prewarm('pg_index_indexrelid_index');select pg_prewarm('pg_index');select pg_prewarm('pg_constraint_conrelid_index');select pg_prewarm('pg_constraint');select pg_prewarm('pg_class_relname_nsp_index');select pg_prewarm('pg_class_oid_index');select pg_prewarm('pg_attribute_relid_attnum_index');select pg_prewarm('pg_attribute');select pg_prewarm('pg_attrdef_adrelid_adnum_index');select pg_prewarm('pg_attrdef');select pg_prewarm('pg_amproc_fam_proc_index');select pg_prewarm('pg_namespace_oid_index');Regards,NandaOn 18 Jan 2018 07:25, \"Michael Paquier\" <[email protected]> wrote:On Tue, Jan 16, 2018 at 09:18:25PM -0800, Jeff Janes wrote:\n> Oh.  I've not seen that before.  But then again I don't often restart my\n> server and then immediately run very large queries with a stringent time\n> deadline.\n>\n> You can try pg_prewarm, on pg_statistic table and its index.  
But I'd\n> probably just put an entry in my db startup script to run this query\n> immediately after startng the server, and let the query warm the cache\n> itself.\n>\n> Why do you restart your database often enough for this to be an issue?\n\nAnother thing that you could use here is pg_buffercache which offers a\nway to look at the Postgres shared buffer contents in real-time:\nhttps://www.postgresql.org/docs/current/static/pgbuffercache.html\n\nAs Jeff says, pg_prewarm is a good tool for such cases to avoid any kind\nof warmup period when a server starts..\n--\nMichael", "msg_date": "Fri, 26 Jan 2018 13:13:23 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slow when run for first time;\n subsequent execution is fast" } ]
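A compact alternative to the per-relation list above (a sketch only, assuming the pg_prewarm extension is available in this database; it ships in contrib) is to let one query warm every table and index in pg_catalog, so the list does not have to be maintained by hand:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Warm every system-catalog table and index in one pass;
-- pg_prewarm() returns the number of blocks it loaded for each relation.
SELECT c.oid::regclass AS relation,
       pg_prewarm(c.oid::regclass) AS blocks_loaded
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog'
  AND c.relkind IN ('r', 'i');

Run from the same post-startup script discussed in the thread, this covers the 'pg_%' catalog relations, including any that the explicit list misses.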
[ { "msg_contents": "Hello,\n\n \n\nI’m running PostgreSQL 9.3 on Windows 7 and I’m having a performance\nissue at startup. I have installed PostgreSQL as a service through Windows\ninstaller.\n\nThe database size is 3 Go, with 120 tables.\n\n \n\nEvery time I try to run queries right after Windows startup, it takes a\nhuge amount of time.\n\nIf I restart the PostgreSQL Windows service, queries are way faster.\n\n \n\nI have activated debug log and here is what I get before Windows restart:\n\nduration: 2.000 ms parse\n\nduration: 3.000 ms bind\n\nduration: 0.000 ms execute\n\nAnd after Windows restart:\n\nduration: 364.000 ms parse\n\nduration: 415.000 ms bind\n\nduration: 0.000 ms execute\n\n\nFor information, the test query is:\n\nSELECT t.typlen FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n WHERE\nt.typnamespace=n.oid AND t.typname='name' AND n.nspname='pg_catalog'\n\nIt’s not related to the query itself since other queries give the same\nresult (from 10x to 100x longer).\n\n \n\nHere are my settings (all log and locale-related settings omitted on\npurpose):\n\n\nbytea_output\n\nescape\n\nsession\n\n\ncheckpoint_segments\n\n45\n\nconfiguration file\n\n\nclient_encoding\n\nUNICODE\n\nsession\n\n\nclient_min_messages\n\nnotice\n\nsession\n\n\nDateStyle\n\nISO, DMY\n\nsession\n\n\ndebug_pretty_print\n\non\n\nconfiguration file\n\n\ndebug_print_plan\n\non\n\nconfiguration file\n\n\ndefault_text_search_config\n\npg_catalog.french\n\nconfiguration file\n\n\nlisten_addresses\n\n*\n\nconfiguration file\n\n\nlogging_collector\n\non\n\nconfiguration file\n\n\nmax_connections\n\n100\n\nconfiguration file\n\n\nmax_stack_depth\n\n2MB\n\nenvironment variable\n\n\nport\n\n5432\n\nconfiguration file\n\n\nshared_buffers\n\n128MB\n\nconfiguration file\n\n\nTimeZone\n\nGMT\n\nuser\n\n \n\nI run queries through JDBC driver (9.3-1100-jdbc4.jar). I know that the\nissue is not related to the PC, since it give the same result on a bunch of\ndifferent computers.\n\n \n\nI have two questions:\n\n* What is the difference between restarting PostgreSQL service and\nrestarting the computer? Is PostgreSQL relying on some kind of OS-level\ncache outside Windows service?\n\n* How can I dig down deeper and see what’s causing PostgreSQL\nslowdown?\n\n \n\nThanks in advance for your help,\n\nBR,\n\n \n\nGuillaume POUSSEL | ♠Sogeti High Tech\n\n <mailto:[email protected]> [email protected]\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.", "msg_date": "Thu, 11 Jan 2018 08:19:39 +0000", "msg_from": "\"POUSSEL, Guillaume\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow queries after Windows startup" }, { "msg_contents": "Have you verified that this is isn't caused by cold filesystem caches?\r\n\r\n\r\nOn 11.01.2018 09:19, POUSSEL, Guillaume wrote:\r\n> Hello,\r\n> \r\n> \r\n> \r\n> I’m running PostgreSQL 9.3 on Windows 7 and I’m having a performance\r\n> issue at startup. 
I have installed PostgreSQL as a service through Windows\r\n> installer.\r\n> \r\n> The database size is 3 Go, with 120 tables.\r\n> \r\n> \r\n> \r\n> Every time I try to run queries right after Windows startup, it takes a\r\n> huge amount of time.\r\n> \r\n> If I restart the PostgreSQL Windows service, queries are way faster.\r\n> \r\n> \r\n> \r\n> I have activated debug log and here is what I get before Windows restart:\r\n> \r\n> duration: 2.000 ms parse\r\n> \r\n> duration: 3.000 ms bind\r\n> \r\n> duration: 0.000 ms execute\r\n> \r\n> And after Windows restart:\r\n> \r\n> duration: 364.000 ms parse\r\n> \r\n> duration: 415.000 ms bind\r\n> \r\n> duration: 0.000 ms execute\r\n> \r\n> \r\n> For information, the test query is:\r\n> \r\n> SELECT t.typlen FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n WHERE\r\n> t.typnamespace=n.oid AND t.typname='name' AND n.nspname='pg_catalog'\r\n> \r\n> It’s not related to the query itself since other queries give the same\r\n> result (from 10x to 100x longer).\r\n> \r\n> \r\n> \r\n> Here are my settings (all log and locale-related settings omitted on\r\n> purpose):\r\n> \r\n> \r\n> bytea_output\r\n> \r\n> escape\r\n> \r\n> session\r\n> \r\n> \r\n> checkpoint_segments\r\n> \r\n> 45\r\n> \r\n> configuration file\r\n> \r\n> \r\n> client_encoding\r\n> \r\n> UNICODE\r\n> \r\n> session\r\n> \r\n> \r\n> client_min_messages\r\n> \r\n> notice\r\n> \r\n> session\r\n> \r\n> \r\n> DateStyle\r\n> \r\n> ISO, DMY\r\n> \r\n> session\r\n> \r\n> \r\n> debug_pretty_print\r\n> \r\n> on\r\n> \r\n> configuration file\r\n> \r\n> \r\n> debug_print_plan\r\n> \r\n> on\r\n> \r\n> configuration file\r\n> \r\n> \r\n> default_text_search_config\r\n> \r\n> pg_catalog.french\r\n> \r\n> configuration file\r\n> \r\n> \r\n> listen_addresses\r\n> \r\n> *\r\n> \r\n> configuration file\r\n> \r\n> \r\n> logging_collector\r\n> \r\n> on\r\n> \r\n> configuration file\r\n> \r\n> \r\n> max_connections\r\n> \r\n> 100\r\n> \r\n> configuration file\r\n> \r\n> \r\n> max_stack_depth\r\n> \r\n> 2MB\r\n> \r\n> environment variable\r\n> \r\n> \r\n> port\r\n> \r\n> 5432\r\n> \r\n> configuration file\r\n> \r\n> \r\n> shared_buffers\r\n> \r\n> 128MB\r\n> \r\n> configuration file\r\n> \r\n> \r\n> TimeZone\r\n> \r\n> GMT\r\n> \r\n> user\r\n> \r\n> \r\n> \r\n> I run queries through JDBC driver (9.3-1100-jdbc4.jar). I know that the\r\n> issue is not related to the PC, since it give the same result on a bunch of\r\n> different computers.\r\n> \r\n> \r\n> \r\n> I have two questions:\r\n> \r\n> * What is the difference between restarting PostgreSQL service and\r\n> restarting the computer? Is PostgreSQL relying on some kind of OS-level\r\n> cache outside Windows service?\r\n> \r\n> * How can I dig down deeper and see what’s causing PostgreSQL\r\n> slowdown?\r\n> \r\n> \r\n> \r\n> Thanks in advance for your help,\r\n> \r\n> BR,\r\n> \r\n> \r\n> \r\n> Guillaume POUSSEL | ♠Sogeti High Tech\r\n> \r\n> <mailto:[email protected]> [email protected]\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. 
If you receive this message in error, please notify the sender immediately and delete all copies of this message.\r\n> ", "msg_date": "Thu, 11 Jan 2018 09:01:01 +0000", "msg_from": "Robert Zenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow queries after Windows startup" }, { "msg_contents": "No, I have not checked it.\nHow can I monitor it on Windows? Do you know a tool that can help me?\n\nThanks!\n\n-----Message d'origine-----\nDe : Robert Zenz [mailto:[email protected]] \nEnvoyé : jeudi 11 janvier 2018 10:01\nÀ : [email protected]\nObjet : Re: Slow queries after Windows startup\n\nHave you verified that this is isn't caused by cold filesystem caches?\n\n\nOn 11.01.2018 09:19, POUSSEL, Guillaume wrote:\n> Hello,\n> \n> \n> \n> I’m running PostgreSQL 9.3 on Windows 7 and I’m having a performance \n> issue at startup. I have installed PostgreSQL as a service through \n> Windows installer.\n> \n> The database size is 3 Go, with 120 tables.\n> \n> \n> \n> Every time I try to run queries right after Windows startup, it takes \n> a huge amount of time.\n> \n> If I restart the PostgreSQL Windows service, queries are way faster.\n> \n> \n> \n> I have activated debug log and here is what I get before Windows restart:\n> \n> duration: 2.000 ms parse\n> \n> duration: 3.000 ms bind\n> \n> duration: 0.000 ms execute\n> \n> And after Windows restart:\n> \n> duration: 364.000 ms parse\n> \n> duration: 415.000 ms bind\n> \n> duration: 0.000 ms execute\n> \n> \n> For information, the test query is:\n> \n> SELECT t.typlen FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n \n> WHERE t.typnamespace=n.oid AND t.typname='name' AND n.nspname='pg_catalog'\n> \n> It’s not related to the query itself since other queries give the same \n> result (from 10x to 100x longer).\n> \n> \n> \n> Here are my settings (all log and locale-related settings omitted on\n> purpose):\n> \n> \n> bytea_output\n> \n> escape\n> \n> session\n> \n> \n> checkpoint_segments\n> \n> 45\n> \n> configuration file\n> \n> \n> client_encoding\n> \n> UNICODE\n> \n> session\n> \n> \n> client_min_messages\n> \n> notice\n> \n> session\n> \n> \n> DateStyle\n> \n> ISO, DMY\n> \n> session\n> \n> \n> debug_pretty_print\n> \n> on\n> \n> configuration file\n> \n> \n> debug_print_plan\n> \n> on\n> \n> configuration file\n> \n> \n> default_text_search_config\n> \n> pg_catalog.french\n> \n> configuration file\n> \n> \n> listen_addresses\n> \n> *\n> \n> configuration file\n> \n> \n> logging_collector\n> \n> on\n> \n> configuration file\n> \n> \n> max_connections\n> \n> 100\n> \n> configuration file\n> \n> \n> max_stack_depth\n> \n> 2MB\n> \n> environment variable\n> \n> \n> port\n> \n> 5432\n> \n> configuration file\n> \n> \n> shared_buffers\n> \n> 128MB\n> \n> configuration file\n> \n> \n> TimeZone\n> \n> GMT\n> \n> user\n> \n> \n> \n> I run queries through JDBC driver (9.3-1100-jdbc4.jar). I know that \n> the issue is not related to the PC, since it give the same result on a \n> bunch of different computers.\n> \n> \n> \n> I have two questions:\n> \n> * What is the difference between restarting PostgreSQL service and\n> restarting the computer? 
Is PostgreSQL relying on some kind of \n> OS-level cache outside Windows service?\n> \n> * How can I dig down deeper and see what’s causing PostgreSQL\n> slowdown?\n> \n> \n> \n> Thanks in advance for your help,\n> \n> BR,\n> \n> \n> \n> Guillaume POUSSEL | ♠Sogeti High Tech\n> \n> <mailto:[email protected]> [email protected]\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.\n>\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.", "msg_date": "Thu, 11 Jan 2018 09:06:37 +0000", "msg_from": "\"POUSSEL, Guillaume\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Slow queries after Windows startup" }, { "msg_contents": "I have no idea to be honest, I haven't done any Windows administration in a long\r\ntime.\r\n\r\nThe best I could find is this:\r\n\r\n * https://docs.microsoft.com/en-us/sysinternals/downloads/rammap\r\n * https://technet.microsoft.com/en-us/library/cc938589.aspx\r\n\r\n\r\nOn 11.01.2018 10:06, POUSSEL, Guillaume wrote:\r\n> No, I have not checked it.\r\n> How can I monitor it on Windows? Do you know a tool that can help me?\r\n> \r\n> Thanks!\r\n> \r\n> -----Message d'origine-----\r\n> De : Robert Zenz [mailto:[email protected]] \r\n> Envoyé : jeudi 11 janvier 2018 10:01\r\n> À : [email protected]\r\n> Objet : Re: Slow queries after Windows startup\r\n> \r\n> Have you verified that this is isn't caused by cold filesystem caches?\r\n> \r\n> \r\n> On 11.01.2018 09:19, POUSSEL, Guillaume wrote:\r\n>> Hello,\r\n>>\r\n>> \r\n>>\r\n>> I’m running PostgreSQL 9.3 on Windows 7 and I’m having a performance \r\n>> issue at startup. 
I have installed PostgreSQL as a service through \r\n>> Windows installer.\r\n>>\r\n>> The database size is 3 Go, with 120 tables.\r\n>>\r\n>> \r\n>>\r\n>> Every time I try to run queries right after Windows startup, it takes \r\n>> a huge amount of time.\r\n>>\r\n>> If I restart the PostgreSQL Windows service, queries are way faster.\r\n>>\r\n>> \r\n>>\r\n>> I have activated debug log and here is what I get before Windows restart:\r\n>>\r\n>> duration: 2.000 ms parse\r\n>>\r\n>> duration: 3.000 ms bind\r\n>>\r\n>> duration: 0.000 ms execute\r\n>>\r\n>> And after Windows restart:\r\n>>\r\n>> duration: 364.000 ms parse\r\n>>\r\n>> duration: 415.000 ms bind\r\n>>\r\n>> duration: 0.000 ms execute\r\n>>\r\n>>\r\n>> For information, the test query is:\r\n>>\r\n>> SELECT t.typlen FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n \r\n>> WHERE t.typnamespace=n.oid AND t.typname='name' AND n.nspname='pg_catalog'\r\n>>\r\n>> It’s not related to the query itself since other queries give the same \r\n>> result (from 10x to 100x longer).\r\n>>\r\n>> \r\n>>\r\n>> Here are my settings (all log and locale-related settings omitted on\r\n>> purpose):\r\n>>\r\n>>\r\n>> bytea_output\r\n>>\r\n>> escape\r\n>>\r\n>> session\r\n>>\r\n>>\r\n>> checkpoint_segments\r\n>>\r\n>> 45\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> client_encoding\r\n>>\r\n>> UNICODE\r\n>>\r\n>> session\r\n>>\r\n>>\r\n>> client_min_messages\r\n>>\r\n>> notice\r\n>>\r\n>> session\r\n>>\r\n>>\r\n>> DateStyle\r\n>>\r\n>> ISO, DMY\r\n>>\r\n>> session\r\n>>\r\n>>\r\n>> debug_pretty_print\r\n>>\r\n>> on\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> debug_print_plan\r\n>>\r\n>> on\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> default_text_search_config\r\n>>\r\n>> pg_catalog.french\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> listen_addresses\r\n>>\r\n>> *\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> logging_collector\r\n>>\r\n>> on\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> max_connections\r\n>>\r\n>> 100\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> max_stack_depth\r\n>>\r\n>> 2MB\r\n>>\r\n>> environment variable\r\n>>\r\n>>\r\n>> port\r\n>>\r\n>> 5432\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> shared_buffers\r\n>>\r\n>> 128MB\r\n>>\r\n>> configuration file\r\n>>\r\n>>\r\n>> TimeZone\r\n>>\r\n>> GMT\r\n>>\r\n>> user\r\n>>\r\n>> \r\n>>\r\n>> I run queries through JDBC driver (9.3-1100-jdbc4.jar). I know that \r\n>> the issue is not related to the PC, since it give the same result on a \r\n>> bunch of different computers.\r\n>>\r\n>> \r\n>>\r\n>> I have two questions:\r\n>>\r\n>> * What is the difference between restarting PostgreSQL service and\r\n>> restarting the computer? Is PostgreSQL relying on some kind of \r\n>> OS-level cache outside Windows service?\r\n>>\r\n>> * How can I dig down deeper and see what’s causing PostgreSQL\r\n>> slowdown?\r\n>>\r\n>> \r\n>>\r\n>> Thanks in advance for your help,\r\n>>\r\n>> BR,\r\n>>\r\n>> \r\n>>\r\n>> Guillaume POUSSEL | ♠Sogeti High Tech\r\n>>\r\n>> <mailto:[email protected]> [email protected]\r\n>>\r\n>> \r\n>>\r\n>> \r\n>>\r\n>> \r\n>>\r\n>> \r\n>>\r\n>> \r\n>>\r\n>>\r\n>>\r\n>>\r\n>> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. 
If you receive this message in error, please notify the sender immediately and delete all copies of this message.\r\n>>\r\n>>\r\n>>\r\n>> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.", "msg_date": "Thu, 11 Jan 2018 09:54:32 +0000", "msg_from": "Robert Zenz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow queries after Windows startup" }, { "msg_contents": "Thanks for pointing out those links. I am trying to figure out how they can help me.\nOne more thing, I can't understand the postgresql.log duration vs. a \"EXPLAIN ANALYZE\" in the query:\nPostgresql.log gives:\nduration: 628.000 ms parse (...)\nduration: 0.000 ms bind (...)\nduration: 378.000 ms execute (...)\n\nEXPLAIN gives:\n(...)\nI/O Timings: read=0.019\n(...)\nTotal runtime: 0.167 ms\n\nWhere are all these milliseconds (378ms vs. 0.167ms)?\nWhat can be slowing down the query parsing?\n\n\n-----Message d'origine-----\nDe : Robert Zenz [mailto:[email protected]] \nEnvoyé : jeudi 11 janvier 2018 10:55\nÀ : [email protected]\nObjet : Re: Slow queries after Windows startup\n\nI have no idea to be honest, I haven't done any Windows administration in a long time.\n\nThe best I could find is this:\n\n * https://docs.microsoft.com/en-us/sysinternals/downloads/rammap\n * https://technet.microsoft.com/en-us/library/cc938589.aspx\n\n\nOn 11.01.2018 10:06, POUSSEL, Guillaume wrote:\n> No, I have not checked it.\n> How can I monitor it on Windows? Do you know a tool that can help me?\n> \n> Thanks!\n> \n> -----Message d'origine-----\n> De : Robert Zenz [mailto:[email protected]]\n> Envoyé : jeudi 11 janvier 2018 10:01\n> À : [email protected]\n> Objet : Re: Slow queries after Windows startup\n> \n> Have you verified that this is isn't caused by cold filesystem caches?\n> \n> \n> On 11.01.2018 09:19, POUSSEL, Guillaume wrote:\n>> Hello,\n>>\n>> \n>>\n>> I’m running PostgreSQL 9.3 on Windows 7 and I’m having a performance \n>> issue at startup. 
I have installed PostgreSQL as a service through \n>> Windows installer.\n>>\n>> The database size is 3 Go, with 120 tables.\n>>\n>> \n>>\n>> Every time I try to run queries right after Windows startup, it takes \n>> a huge amount of time.\n>>\n>> If I restart the PostgreSQL Windows service, queries are way faster.\n>>\n>> \n>>\n>> I have activated debug log and here is what I get before Windows restart:\n>>\n>> duration: 2.000 ms parse\n>>\n>> duration: 3.000 ms bind\n>>\n>> duration: 0.000 ms execute\n>>\n>> And after Windows restart:\n>>\n>> duration: 364.000 ms parse\n>>\n>> duration: 415.000 ms bind\n>>\n>> duration: 0.000 ms execute\n>>\n>>\n>> For information, the test query is:\n>>\n>> SELECT t.typlen FROM pg_catalog.pg_type t, pg_catalog.pg_namespace n \n>> WHERE t.typnamespace=n.oid AND t.typname='name' AND n.nspname='pg_catalog'\n>>\n>> It’s not related to the query itself since other queries give the \n>> same result (from 10x to 100x longer).\n>>\n>> \n>>\n>> Here are my settings (all log and locale-related settings omitted on\n>> purpose):\n>>\n>>\n>> bytea_output\n>>\n>> escape\n>>\n>> session\n>>\n>>\n>> checkpoint_segments\n>>\n>> 45\n>>\n>> configuration file\n>>\n>>\n>> client_encoding\n>>\n>> UNICODE\n>>\n>> session\n>>\n>>\n>> client_min_messages\n>>\n>> notice\n>>\n>> session\n>>\n>>\n>> DateStyle\n>>\n>> ISO, DMY\n>>\n>> session\n>>\n>>\n>> debug_pretty_print\n>>\n>> on\n>>\n>> configuration file\n>>\n>>\n>> debug_print_plan\n>>\n>> on\n>>\n>> configuration file\n>>\n>>\n>> default_text_search_config\n>>\n>> pg_catalog.french\n>>\n>> configuration file\n>>\n>>\n>> listen_addresses\n>>\n>> *\n>>\n>> configuration file\n>>\n>>\n>> logging_collector\n>>\n>> on\n>>\n>> configuration file\n>>\n>>\n>> max_connections\n>>\n>> 100\n>>\n>> configuration file\n>>\n>>\n>> max_stack_depth\n>>\n>> 2MB\n>>\n>> environment variable\n>>\n>>\n>> port\n>>\n>> 5432\n>>\n>> configuration file\n>>\n>>\n>> shared_buffers\n>>\n>> 128MB\n>>\n>> configuration file\n>>\n>>\n>> TimeZone\n>>\n>> GMT\n>>\n>> user\n>>\n>> \n>>\n>> I run queries through JDBC driver (9.3-1100-jdbc4.jar). I know that \n>> the issue is not related to the PC, since it give the same result on \n>> a bunch of different computers.\n>>\n>> \n>>\n>> I have two questions:\n>>\n>> * What is the difference between restarting PostgreSQL service and\n>> restarting the computer? Is PostgreSQL relying on some kind of \n>> OS-level cache outside Windows service?\n>>\n>> * How can I dig down deeper and see what’s causing PostgreSQL\n>> slowdown?\n>>\n>> \n>>\n>> Thanks in advance for your help,\n>>\n>> BR,\n>>\n>> \n>>\n>> Guillaume POUSSEL | ♠Sogeti High Tech\n>>\n>> <mailto:[email protected]> [email protected]\n>>\n>> \n>>\n>> \n>>\n>> \n>>\n>> \n>>\n>> \n>>\n>>\n>>\n>>\n>> This message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.\n>>\n>>\n>>\n>> This message contains information that may be privileged or \n>> confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. 
If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.\nThis message contains information that may be privileged or confidential and is the property of the Capgemini Group. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorized to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message.", "msg_date": "Thu, 11 Jan 2018 13:36:25 +0000", "msg_from": "\"POUSSEL, Guillaume\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Slow queries after Windows startup" }, { "msg_contents": "You should check this blog:\nhttp://blog.coelho.net/database/2013/08/14/postgresql-warmup.html\nTo warm-up your DB after reboot.\nLet me know \nRegards\nEric\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Thu, 11 Jan 2018 14:08:43 -0700 (MST)", "msg_from": "Éric Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Slow queries after Windows startup" } ]
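Robert's cold-filesystem-cache question can also be checked from inside the server. The query below is a sketch, assuming the contrib extension pg_buffercache is installed and the default 8 kB block size; it shows which relations currently occupy PostgreSQL's shared buffers. If the catalog relations are already buffered right after boot and parse/bind are still slow, the time is being lost below PostgreSQL, in the Windows file cache and on disk, which would be consistent with a restart of only the PostgreSQL service (clearing shared_buffers but not the OS cache) leaving queries fast:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Relations with the most 8 kB pages currently held in shared_buffers
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;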
[ { "msg_contents": "Hi there,\n\nThis is likely me not understanding something, but I have a query that\nI would expect to be fast but PG insists on using a sequential scan.\nI've attached a minimized test case but I'll walk through the steps as\nwell.\n\nI'm running PostgreSQL 10.1 using the standard ArchLinux packages, but\nI've been able to reproduce this issue on our production systems\nrunning 9.5 as well.\n\nI have the following 2 tables in a standard users/addresses\nconfiguration with an extra index on addresses to make lookups on the\nreferring side faster:\n\n CREATE TABLE users (\n id integer PRIMARY KEY\n );\n\n CREATE TABLE addresses (\n id integer PRIMARY KEY,\n user_id integer REFERENCES users(id)\n );\n\n CREATE INDEX ix_addresses_user_id ON addresses (user_id);\n\nAlso, I turn off sequential scanning to force the database to consider\nany other plan first:\n\n SET enable_seqscan TO OFF;\n\nThen, I would expect the following query to have a query plan without\nany sequential scans:\n\n EXPLAIN (ANALYZE, BUFFERS)\n SELECT addresses.id\n FROM addresses\n WHERE (\n addresses.id = 1 OR\n EXISTS (\n SELECT 1 FROM users\n WHERE (\n users.id = addresses.user_id AND\n users.id = 1\n )\n )\n );\n\n -[ RECORD 1\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Seq Scan on addresses\n(cost=10000000000.00..10000018508.10 rows=1130 width=4) (actual\ntime=0.001..0.001 rows=0 loops=1)\n -[ RECORD 2\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Filter: ((id = 1) OR (alternatives: SubPlan 1 or\nhashed SubPlan 2))\n -[ RECORD 3\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | SubPlan 1\n -[ RECORD 4\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Result (cost=0.15..8.17 rows=1 width=0)\n(never executed)\n -[ RECORD 5\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | One-Time Filter: (addresses.user_id = 1)\n -[ RECORD 6\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Index Only Scan using users_pkey on\nusers (cost=0.15..8.17 rows=1 width=0) (never executed)\n -[ RECORD 7\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Cond: (id = 1)\n -[ RECORD 8\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Heap Fetches: 0\n -[ RECORD 9\n]------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | SubPlan 2\n -[ RECORD 10\n]-----------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Index Only Scan using users_pkey on users\nusers_1 (cost=0.15..8.17 rows=1 width=4) (never executed)\n -[ RECORD 11\n]-----------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Cond: (id = 1)\n -[ RECORD 
12\n]-----------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Heap Fetches: 0\n -[ RECORD 13\n]-----------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Planning time: 0.082 ms\n -[ RECORD 14\n]-----------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Execution time: 0.032 ms\n\nGiven the `Seq Scan on addresses` above, the database clearly\ndisagrees. What am I missing here?\n\nStrangely, breaking down the query to its components does as I expect.\nThis is the primary key lookup:\n\n EXPLAIN (ANALYZE, BUFFERS)\n SELECT addresses.id\n FROM addresses\n WHERE addresses.id = 1;\n\n -[ RECORD 1\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Only Scan using addresses_pkey on addresses\n(cost=0.15..8.17 rows=1 width=4) (actual time=0.008..0.008 rows=0\nloops=1)\n -[ RECORD 2\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Cond: (id = 1)\n -[ RECORD 3\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Heap Fetches: 0\n -[ RECORD 4\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Buffers: shared hit=1\n -[ RECORD 5\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Planning time: 0.206 ms\n -[ RECORD 6\n]-----------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Execution time: 0.031 ms\n\nAnd this is the semi-join:\n\n EXPLAIN (ANALYZE, BUFFERS)\n SELECT addresses.id\n FROM addresses\n WHERE EXISTS (\n SELECT 1 FROM users\n WHERE (\n users.id = addresses.user_id AND\n users.id = 1\n )\n );\n\n -[ RECORD 1\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Nested Loop (cost=4.40..23.19 rows=11 width=4)\n(actual time=0.007..0.007 rows=0 loops=1)\n -[ RECORD 2\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Buffers: shared hit=1\n -[ RECORD 3\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Index Only Scan using users_pkey on users\n(cost=0.15..8.17 rows=1 width=4) (actual time=0.007..0.007 rows=0\nloops=1)\n -[ RECORD 4\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Cond: (id = 1)\n -[ RECORD 5\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Heap Fetches: 0\n -[ RECORD 6\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Buffers: shared hit=1\n -[ RECORD 
7\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Bitmap Heap Scan on addresses\n(cost=4.24..14.91 rows=11 width=8) (never executed)\n -[ RECORD 8\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Recheck Cond: (user_id = 1)\n -[ RECORD 9\n]---------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | -> Bitmap Index Scan on ix_addresses_user_id\n (cost=0.00..4.24 rows=11 width=0) (never executed)\n -[ RECORD 10\n]--------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Index Cond: (user_id = 1)\n -[ RECORD 11\n]--------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Planning time: 0.145 ms\n -[ RECORD 12\n]--------------------------------------------------------------------------------------------------------------------------\n QUERY PLAN | Execution time: 0.038 ms\n\nI could break the `OR` into 2 separate queries with a `UNION` but that\nseems like a rather strange contortion that I would expect the\ndatabase to handle for me.", "msg_date": "Thu, 11 Jan 2018 11:30:30 -0500", "msg_from": "Ronuk Raval <[email protected]>", "msg_from_op": true, "msg_subject": "Disjunctions and sequential scans" } ]
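For reference, the UNION form mentioned at the end of the message would look roughly like the sketch below (same two-table schema as above). Each branch carries one side of the OR, so the planner can drive the first from addresses_pkey and the second from users_pkey plus ix_addresses_user_id, at the cost of a de-duplication step that is cheap here because id is the primary key; whether it actually wins depends on the data:

SELECT addresses.id
FROM addresses
WHERE addresses.id = 1
UNION
SELECT addresses.id
FROM addresses
WHERE EXISTS (
    SELECT 1
    FROM users
    WHERE users.id = addresses.user_id
      AND users.id = 1
);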
[ { "msg_contents": "Dear Expert,\n\nWhile connecting PostgreSQL 9.3 with PGAdmin client I am getting the below error.\n\n[cid:[email protected]]\n\nHowever I am able to connect the database using psql thourgh Putty.\n\nEntry in pg_hba.conf\n\n# IPv4 local connections:\nhost all all 127.0.0.1/32 md5\nhost all all 0.0.0.0/0 md5\n\nentry in Postgresql.conf\n\nlisten_addresses = '*'\nport = 5432\n\nport is already open and I am able to telnet.\n\nEven after creating a new instance and fresh installation of PostgreSQL, I am getting the same error.\n\nOS-CentOS 7\nS/w-PostgreSQL9.3\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Fri, 12 Jan 2018 04:55:41 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "PGadmin error while connecting with database." }, { "msg_contents": "Do you any errors in the server log or in the log of pgadmin (show us..)?\nWhat are the settings that you configured for your pgadmin client and for\nthe connection ?\n\n2018-01-12 6:55 GMT+02:00 Dinesh Chandra 12108 <[email protected]>:\n\n> Dear Expert,\n>\n>\n>\n> While connecting PostgreSQL 9.3 with PGAdmin client I am getting the below\n> error.\n>\n>\n>\n>\n>\n> However I am able to connect the database using psql thourgh Putty.\n>\n>\n>\n> *Entry in pg_hba.conf*\n>\n>\n>\n> # IPv4 local connections:\n>\n> host all all 127.0.0.1/32 md5\n>\n> host all all 0.0.0.0/0 md5\n>\n>\n>\n> *entry in Postgresql.conf*\n>\n>\n>\n> listen_addresses = '*'\n>\n> port = 5432\n>\n>\n>\n> port is already open and I am able to telnet.\n>\n>\n>\n> Even after creating a new instance and fresh installation of PostgreSQL, I\n> am getting the same error.\n>\n>\n>\n> OS-CentOS 7\n>\n> S/w-PostgreSQL9.3\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 | Ext 1078 |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India\n> <https://maps.google.com/?q=NSEZ,+Phase-II+,Noida-Dadri+Road,+Noida+-+201+305,India&entry=gmail&source=g>\n> .\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. 
All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>", "msg_date": "Fri, 12 Jan 2018 10:44:31 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PGadmin error while connecting with database." }, { "msg_contents": "Hi Mariel,\n\n\nThanks for your response. the problem has been resolved now .\n\n I checked all the listening port status and port 5432 was only opened for local not for all.\n\nroot> netstat -nlt\n\n\ntcp 0 127.0.0.1:5432 0.0.0.* LISTEN\n\n\nI changed it as the below and problem got resolved.\n\n\ntcp 0 0.0.0.0:5432 0.0.0.* LISTEN\n\n\nRegards,\nDinesh Chandra\nCyient Ltd.\n\n\n________________________________\nFrom: Mariel Cherkassky <[email protected]>\nSent: Friday, January 12, 2018 2:14:31 PM\nTo: Dinesh Chandra 12108\nCc: [email protected]; [email protected]\nSubject: [EXTERNAL]Re: PGadmin error while connecting with database.\n\nDo you any errors in the server log or in the log of pgadmin (show us..)? What are the settings that you configured for your pgadmin client and for the connection ?\n\n2018-01-12 6:55 GMT+02:00 Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>:\nDear Expert,\n\nWhile connecting PostgreSQL 9.3 with PGAdmin client I am getting the below error.\n\n[cid:[email protected]]\n\nHowever I am able to connect the database using psql thourgh Putty.\n\nEntry in pg_hba.conf\n\n# IPv4 local connections:\nhost all all 127.0.0.1/32<http://127.0.0.1/32> md5\nhost all all 0.0.0.0/0<http://0.0.0.0/0> md5\n\nentry in Postgresql.conf\n\nlisten_addresses = '*'\nport = 5432\n\nport is already open and I am able to telnet.\n\nEven after creating a new instance and fresh installation of PostgreSQL, I am getting the same error.\n\nOS-CentOS 7\nS/w-PostgreSQL9.3\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India<https://maps.google.com/?q=NSEZ,+Phase-II+,Noida-Dadri+Road,+Noida+-+201+305,India&entry=gmail&source=g>.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Sun, 14 Jan 2018 04:10:48 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: PGadmin error while connecting with database." } ]
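The netstat output in the resolution above is the telltale sign: the running postmaster was bound only to 127.0.0.1:5432, so local psql sessions worked while remote pgAdmin connections were refused. A couple of hypothetical psql checks (not from the original thread) make the same thing visible from inside the server; note that listen_addresses and port are only picked up on a full restart, a reload is not enough:

-- Which configuration file did the running postmaster actually load?
SHOW config_file;

-- What the running server is bound to; changing these requires a restart.
SHOW listen_addresses;
SHOW port;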
[ { "msg_contents": "Dear all\n\nSomeone help me analyze the two execution plans below (Explain ANALYZE\nused), is the query 9 of TPC-H benchmark [1].\nI'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n15 Krpm AND SSD Sansung EVO 500GB.\nMy DBMS parameters presents in postgresql.conf is default, but in SSD I\nhave changed random_page_cost = 1.0.\n\nI do not understand, because running on an HDD a query used half the time.\nI explain better, in HDD spends on average 12 minutes the query execution\nand on SSD spent 26 minutes.\nI think maybe the execution plan is using more write operations, and so the\nHDD SAS 15Krpm has been faster.\nAnyway, I always thought that an SSD would be equal or faster, but in the\ncase and four more cases we have here, it lost a lot for the HDDs.\n\nAny help in understanding, is welcome\n\nBest Regards\nNeto\n\n----------------- Query execution Time on SSD ---------------\nexecution 1: 00:23:29\nexecution 2: 00:28:38\nexecution 3: 00:27:32\nexecution 4: 00:27:54\nexecution 5: 00:27:35\nexecution 6: 00:26:19\nAverage: 26min 54 seconds\n\n------------Query execution Time on HDD\n-------------------------------------------------------------------------------\nexecution 1: 00:12:44\nexecution 2: 00:12:30\nexecution 3: 00:12:47\nexecution 4: 00:13:02\nexecution 5: 00:13:00\nexecution 6: 00:12:47\nAverage: 12 minutes 48 seconds\n\n---------------------------------- EXECUTION PLAN SSD\nStorage--------------------------------------------------------\nFinalize GroupAggregate (cost=15.694.362.41..15842178.65 rows=60150\nwidth=66) (actual time=1670577.649..1674717.444 rows=175 loops=1) Group\nKey: nation.n_name, (date_part(_year_::text,\n(orders.o_orderdate)::timestamp without time zone)) -> Gather Merge\n(cost=15694362.41..15839923.02 rows=120300 width=66) (actual\ntime=1670552.446..1674716.748 rows=525 loops=1) Workers Planned:\n2 Workers Launched: 2 -> Partial GroupAggregate\n(cost=15693362.39..15825037.39 rows=60150 width=66) (actual\ntime=1640482.164..1644619.574 rows=175 loops=3) Group Key:\nnation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp\nwithout time zone)) -> Sort (cost=15693362.39..15709690.19\nrows=6531119 width=57) (actual time=1640467.384..1641511.970 rows=4344197\nloops=3) Sort Key: nation.n_name,\n(date_part(_year_::text, (orders.o_orderdate)::timestamp without time\nzone)) DESC Sort Method: external merge Disk:\n319512kB -> Hash Join (cost=4708869.23..14666423.78\nrows=6531119 width=57) (actual time=1366753.586..1634128.122 rows=4344197\nloops=3) Hash Cond: (lineitem.l_suppkey =\nsupplier.s_suppkey) -> Hash Join\n(cost=4683027.67..14400582.74 rows=6531119 width=43) (actual\ntime=1328019.213..1623919.675 rows=4344197\nloops=3) Hash Cond: (lineitem.l_orderkey =\norders.o_orderkey) -> Hash Join\n(cost=1993678.29..11279593.98 rows=6531119 width=47) (actual\ntime=245906.330..1316201.213 rows=4344197\nloops=3) Hash Cond:\n((lineitem.l_suppkey = partsupp.ps_suppkey) AND (lineitem.l_partkey =\npartsupp.ps_partkey)) -> Hash Join\n(cost=273200.59..9157211.71 rows=6531119 width=45) (actual\ntime=5103.563..1007657.993 rows=4344197\nloops=3) Hash Cond:\n(lineitem.l_partkey =\npart.p_partkey) -> Parallel Seq\nScan on lineitem (cost=0.00..5861332.93 rows=100005093 width=41) (actual\ntime=3.494..842667.110 rows=80004097\nloops=3) -> Hash\n(cost=263919.95..263919.95 rows=565651 width=4) (actual\ntime=4973.807..4973.807 rows=434469\nloops=3) Buckets: 131072\nBatches: 8 Memory Usage:\n2933kB -> Seq Scan on\npart 
(cost=0.00..263919.95 rows=565651 width=4) (actual\ntime=11.810..4837.287 rows=434469\nloops=3) Filter:\n((p_name)::text ~~\n_%orchid%_::text)\nRows Removed by Filter: 7565531 ->\nHash (cost=1052983.08..1052983.08 rows=31999708 width=22) (actual\ntime=240711.936..240711.936 rows=32000000\nloops=3) Buckets: 65536\nBatches: 512 Memory Usage:\n3941kB -> Seq Scan on partsupp\n(cost=0.00..1052983.08 rows=31999708 width=22) (actual\ntime=0.033..228828.149 rows=32000000\nloops=3) -> Hash\n(cost=1704962.28..1704962.28 rows=60000728 width=8) (actual\ntime=253669.242..253669.242 rows=60000000\nloops=3) Buckets: 131072 Batches:\n1024 Memory Usage: 3316kB -> Seq\nScan on orders (cost=0.00..1704962.28 rows=60000728 width=8) (actual\ntime=0.038..237545.226 rows=60000000 loops=3) ->\nHash (cost=18106.56..18106.56 rows=400000 width=30) (actual\ntime=277.283..277.283 rows=400000 loops=3)\nBuckets: 65536 Batches: 8 Memory Usage:\n3549kB -> Hash Join (cost=1.56..18106.56\nrows=400000 width=30) (actual time=45.155..205.372 rows=400000\nloops=3) Hash Cond:\n(supplier.s_nationkey =\nnation.n_nationkey) -> Seq Scan on\nsupplier (cost=0.00..13197.00 rows=400000 width=12) (actual\ntime=45.094..129.333 rows=400000\nloops=3) -> Hash (cost=1.25..1.25\nrows=25 width=30) (actual time=0.038..0.038 rows=25\nloops=3) Buckets: 1024 Batches:\n1 Memory Usage: 10kB -> Seq\nScan on nation (cost=0.00..1.25 rows=25 width=30) (actual\ntime=0.026..0.029 rows=25 loops=3)Planning time: 2.251 msExecution time:\n1674790.954 ms\n\n\n--------------------------------------------------Execution plan on HDD\nStorage -------------------------------------------------\nFinalize GroupAggregate (cost=14.865.093.59..14942715.87 rows=60150\nwidth=66) (actual time=763039.932..767231.344 rows=175 loops=1) Group Key:\nnation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp\nwithout time zone)) -> Gather Merge (cost=14865093.59..14940460.24\nrows=120300 width=66) (actual time=763014.187..767230.826 rows=525\nloops=1) Workers Planned: 2 Workers Launched: 2 ->\nPartial GroupAggregate (cost=14864093.57..14925574.61 rows=60150 width=66)\n(actual time=758405.567..762576.512 rows=175 loops=3) Group\nKey: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp\nwithout time zone)) -> Sort (cost=14864093.57..14871647.12\nrows=3021421 width=57) (actual time=758348.786..759400.608 rows=4344197\nloops=3) Sort Key: nation.n_name,\n(date_part(_year_::text, (orders.o_orderdate)::timestamp without time\nzone)) DESC Sort Method: external merge Disk:\n324568kB -> Hash Join (cost=4703389.12..14311687.00\nrows=3021421 width=57) (actual time=474033.697..736861.120 rows=4344197\nloops=3) Hash Cond: (lineitem.l_suppkey =\nsupplier.s_suppkey) -> Hash Join\n(cost=4677547.56..14173154.89 rows=3030463 width=43) (actual\ntime=420246.635..728731.259 rows=4344197 loops=3)\nHash Cond: (lineitem.l_orderkey = orders.o_orderkey)\n-> Hash Join (cost=1988224.59..11157928.33 rows=3030463 width=47) (actual\ntime=92246.411..545600.522 rows=4344197 loops=3)\nHash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND\n(lineitem.l_partkey = partsupp.ps_partkey))\n-> Hash Join (cost=267897.64..9150646.81 rows=3030463 width=45) (actual\ntime=9247.722..368140.568 rows=4344197 loops=3)\nHash Cond: (lineitem.l_partkey = part.p_partkey)\n-> Parallel Seq Scan on lineitem (cost=0.00..5861333.40 rows=100005140\nwidth=41) (actual time=41.805..224438.909 rows=80004097\nloops=3) -> Hash\n(cost=263920.35..263920.35 rows=242423 width=4) (actual\ntime=9181.407..9181.407 
rows=434469 loops=3)\nBuckets: 131072 (originally 131072) Batches: 8 (originally 4) Memory\nUsage: 3073kB -> Seq Scan\non part (cost=0.00..263920.35 rows=242423 width=4) (actual\ntime=5.608..9027.871 rows=434469 loops=3)\n Filter: ((p_name)::text ~~\n_%orchid%_::text)\nRows Removed by Filter: 7565531 ->\nHash (cost=1052934.38..1052934.38 rows=31994838 width=22) (actual\ntime=82524.045..82524.045 rows=32000000 loops=3)\nBuckets: 65536 Batches: 512 Memory Usage: 3941kB\n-> Seq Scan on partsupp (cost=0.00..1052934.38 rows=31994838 width=22)\n(actual time=0.037..37865.003 rows=32000000 loops=3)\n-> Hash (cost=1704952.32..1704952.32 rows=59999732 width=8) (actual\ntime=98182.919..98182.919 rows=60000000 loops=3)\nBuckets: 131072 Batches: 1024 Memory Usage: 3316kB\n-> Seq Scan on orders (cost=0.00..1704952.32 rows=59999732 width=8)\n(actual time=0.042..43977.490 rows=60000000 loops=3)\n-> Hash (cost=18106.56..18106.56 rows=400000 width=30) (actual\ntime=555.225..555.225 rows=400000 loops=3)\nBuckets: 65536 Batches: 8 Memory Usage: 3549kB\n-> Hash Join (cost=1.56..18106.56 rows=400000 width=30) (actual\ntime=1.748..484.203 rows=400000 loops=3)\nHash Cond: (supplier.s_nationkey = nation.n_nationkey)\n-> Seq Scan on supplier (cost=0.00..13197.00 rows=400000 width=12)\n(actual time=1.718..408.463 rows=400000 loops=3)\n-> Hash (cost=1.25..1.25 rows=25 width=30) (actual time=0.019..0.019\nrows=25 loops=3) Buckets: 1024\nBatches: 1 Memory Usage: 10kB\n-> Seq Scan on nation (cost=0.00..1.25 rows=25 width=30) (actual\ntime=0.007..0.010 rows=25 loops=3)Planning time: 12.145 msExecution time:\n767503.736 ms\n\n\n-- Query SQL ------------------\n\nselect\n nation,\n o_year,\n sum(amount) as sum_profit\nfrom\n (\n select\n n_name as nation,\n extract(year from o_orderdate) as o_year,\n l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity\nas amount\n from\n part,\n supplier,\n lineitem,\n partsupp,\n orders,\n nation\n where\n s_suppkey = l_suppkey\n and ps_suppkey = l_suppkey\n and ps_partkey = l_partkey\n and p_partkey = l_partkey\n and o_orderkey = l_orderkey\n and s_nationkey = n_nationkey\n and p_name like '%orchid%'\n ) as profit\ngroup by\n nation,\n o_year\norder by\n nation,\n o_year desc\n\nDear allSomeone help me analyze the two execution plans below (Explain ANALYZE used), is the  query 9 of TPC-H benchmark [1].I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB 15 Krpm AND SSD Sansung EVO 500GB.My DBMS parameters presents in postgresql.conf is default, but in SSD I have changed random_page_cost = 1.0.I do not understand, because running on an HDD a query used half the time. 
I explain better, in HDD spends on average 12 minutes the query execution and on SSD spent 26 minutes.I think maybe the execution plan is using more write operations, and so the HDD SAS 15Krpm has been faster.Anyway, I always thought that an SSD would be equal or faster, but in the case and four more cases we have here, it lost a lot for the HDDs.Any help in understanding, is welcomeBest Regards Neto ----------------- Query execution Time on SSD ---------------execution 1: 00:23:29execution 2: 00:28:38execution 3: 00:27:32execution 4: 00:27:54execution 5: 00:27:35execution 6: 00:26:19Average: 26min 54 seconds------------Query execution Time on HDD -------------------------------------------------------------------------------execution 1: 00:12:44execution 2: 00:12:30execution 3: 00:12:47execution 4: 00:13:02execution 5: 00:13:00execution 6: 00:12:47Average: 12 minutes 48 seconds---------------------------------- EXECUTION PLAN SSD Storage--------------------------------------------------------Finalize GroupAggregate  (cost=15.694.362.41..15842178.65 rows=60150 width=66) (actual time=1670577.649..1674717.444 rows=175 loops=1)  Group Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone))  ->  Gather Merge  (cost=15694362.41..15839923.02 rows=120300 width=66) (actual time=1670552.446..1674716.748 rows=525 loops=1)        Workers Planned: 2        Workers Launched: 2        ->  Partial GroupAggregate  (cost=15693362.39..15825037.39 rows=60150 width=66) (actual time=1640482.164..1644619.574 rows=175 loops=3)              Group Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone))              ->  Sort  (cost=15693362.39..15709690.19 rows=6531119 width=57) (actual time=1640467.384..1641511.970 rows=4344197 loops=3)                    Sort Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone)) DESC                    Sort Method: external merge  Disk: 319512kB                    ->  Hash Join  (cost=4708869.23..14666423.78 rows=6531119 width=57) (actual time=1366753.586..1634128.122 rows=4344197 loops=3)                          Hash Cond: (lineitem.l_suppkey = supplier.s_suppkey)                          ->  Hash Join  (cost=4683027.67..14400582.74 rows=6531119 width=43) (actual time=1328019.213..1623919.675 rows=4344197 loops=3)                                Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)                                ->  Hash Join  (cost=1993678.29..11279593.98 rows=6531119 width=47) (actual time=245906.330..1316201.213 rows=4344197 loops=3)                                      Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND (lineitem.l_partkey = partsupp.ps_partkey))                                      ->  Hash Join  (cost=273200.59..9157211.71 rows=6531119 width=45) (actual time=5103.563..1007657.993 rows=4344197 loops=3)                                            Hash Cond: (lineitem.l_partkey = part.p_partkey)                                            ->  Parallel Seq Scan on lineitem  (cost=0.00..5861332.93 rows=100005093 width=41) (actual time=3.494..842667.110 rows=80004097 loops=3)                                            ->  Hash  (cost=263919.95..263919.95 rows=565651 width=4) (actual time=4973.807..4973.807 rows=434469 loops=3)                                                  Buckets: 131072  Batches: 8  Memory Usage: 2933kB                                                  ->  Seq Scan on part  (cost=0.00..263919.95 
rows=565651 width=4) (actual time=11.810..4837.287 rows=434469 loops=3)
                                                        Filter: ((p_name)::text ~~ '%orchid%'::text)
                                                        Rows Removed by Filter: 7565531
                                      ->  Hash  (cost=1052983.08..1052983.08 rows=31999708 width=22) (actual time=240711.936..240711.936 rows=32000000 loops=3)
                                            Buckets: 65536  Batches: 512  Memory Usage: 3941kB
                                            ->  Seq Scan on partsupp  (cost=0.00..1052983.08 rows=31999708 width=22) (actual time=0.033..228828.149 rows=32000000 loops=3)
                                ->  Hash  (cost=1704962.28..1704962.28 rows=60000728 width=8) (actual time=253669.242..253669.242 rows=60000000 loops=3)
                                      Buckets: 131072  Batches: 1024  Memory Usage: 3316kB
                                      ->  Seq Scan on orders  (cost=0.00..1704962.28 rows=60000728 width=8) (actual time=0.038..237545.226 rows=60000000 loops=3)
                          ->  Hash  (cost=18106.56..18106.56 rows=400000 width=30) (actual time=277.283..277.283 rows=400000 loops=3)
                                Buckets: 65536  Batches: 8  Memory Usage: 3549kB
                                ->  Hash Join  (cost=1.56..18106.56 rows=400000 width=30) (actual time=45.155..205.372 rows=400000 loops=3)
                                      Hash Cond: (supplier.s_nationkey = nation.n_nationkey)
                                      ->  Seq Scan on supplier  (cost=0.00..13197.00 rows=400000 width=12) (actual time=45.094..129.333 rows=400000 loops=3)
                                      ->  Hash  (cost=1.25..1.25 rows=25 width=30) (actual time=0.038..0.038 rows=25 loops=3)
                                            Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                            ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=30) (actual time=0.026..0.029 rows=25 loops=3)
Planning time: 2.251 ms
Execution time: 1674790.954 ms
--------------------------------------------------
Execution plan on HDD Storage
-------------------------------------------------
Finalize GroupAggregate  (cost=14865093.59..14942715.87 rows=60150 width=66) (actual time=763039.932..767231.344 rows=175 loops=1)
  Group Key: nation.n_name, (date_part('year'::text, (orders.o_orderdate)::timestamp without time zone))
  ->  Gather Merge  (cost=14865093.59..14940460.24 rows=120300 width=66) (actual time=763014.187..767230.826 rows=525 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial GroupAggregate  (cost=14864093.57..14925574.61 rows=60150 width=66) (actual time=758405.567..762576.512 rows=175 loops=3)
              Group Key: nation.n_name, (date_part('year'::text, (orders.o_orderdate)::timestamp without time zone))
              ->  Sort  (cost=14864093.57..14871647.12 rows=3021421 width=57) (actual time=758348.786..759400.608 rows=4344197 loops=3)
                    Sort Key: nation.n_name, (date_part('year'::text, (orders.o_orderdate)::timestamp without time zone)) DESC
                    Sort Method: external merge  Disk: 324568kB
                    ->  Hash Join  (cost=4703389.12..14311687.00 rows=3021421 width=57) (actual time=474033.697..736861.120 rows=4344197 loops=3)
                          Hash Cond: (lineitem.l_suppkey = supplier.s_suppkey)
                          ->  Hash Join  (cost=4677547.56..14173154.89 rows=3030463 width=43) (actual time=420246.635..728731.259 rows=4344197 loops=3)
                                Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)
                                ->  Hash Join  (cost=1988224.59..11157928.33 rows=3030463 width=47) (actual time=92246.411..545600.522 rows=4344197 loops=3)
                                      Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey) AND (lineitem.l_partkey = partsupp.ps_partkey))
                                      ->  Hash Join  (cost=267897.64..9150646.81 rows=3030463 width=45) (actual time=9247.722..368140.568 rows=4344197 loops=3)
                                            Hash Cond: (lineitem.l_partkey = part.p_partkey)
                                            ->  Parallel Seq Scan on lineitem  (cost=0.00..5861333.40 rows=100005140 width=41) (actual time=41.805..224438.909 rows=80004097 loops=3)
                                            ->  Hash  (cost=263920.35..263920.35 rows=242423 width=4) (actual time=9181.407..9181.407 rows=434469 loops=3)
                                                  Buckets: 131072 (originally 131072)  Batches: 8 (originally 4)  Memory Usage: 3073kB
                                                  ->  Seq Scan on part  (cost=0.00..263920.35 rows=242423 width=4) (actual time=5.608..9027.871 rows=434469 loops=3)
                                                        Filter: ((p_name)::text ~~ '%orchid%'::text)
                                                        Rows Removed by Filter: 7565531
                                      ->  Hash  (cost=1052934.38..1052934.38 rows=31994838 width=22) (actual time=82524.045..82524.045 rows=32000000 loops=3)
                                            Buckets: 65536  Batches: 512  Memory Usage: 3941kB
                                            ->  Seq Scan on partsupp  (cost=0.00..1052934.38 rows=31994838 width=22) (actual time=0.037..37865.003 rows=32000000 loops=3)
                                ->  Hash  (cost=1704952.32..1704952.32 rows=59999732 width=8) (actual time=98182.919..98182.919 rows=60000000 loops=3)
                                      Buckets: 131072  Batches: 1024  Memory Usage: 3316kB
                                      ->  Seq Scan on orders  (cost=0.00..1704952.32 rows=59999732 width=8) (actual time=0.042..43977.490 rows=60000000 loops=3)
                          ->  Hash  (cost=18106.56..18106.56 rows=400000 width=30) (actual time=555.225..555.225 rows=400000 loops=3)
                                Buckets: 65536  Batches: 8  Memory Usage: 3549kB
                                ->  Hash Join  (cost=1.56..18106.56 rows=400000 width=30) (actual time=1.748..484.203 rows=400000 loops=3)
                                      Hash Cond: (supplier.s_nationkey = nation.n_nationkey)
                                      ->  Seq Scan on supplier  (cost=0.00..13197.00 rows=400000 width=12) (actual time=1.718..408.463 rows=400000 loops=3)
                                      ->  Hash  (cost=1.25..1.25 rows=25 width=30) (actual time=0.019..0.019 rows=25 loops=3)
                                            Buckets: 1024  Batches: 1  Memory Usage: 10kB
                                            ->  Seq Scan on nation  (cost=0.00..1.25 rows=25 width=30) (actual time=0.007..0.010 rows=25 loops=3)
Planning time: 12.145 ms
Execution time: 767503.736 ms
-- Query SQL ------------------
select
    nation,
    o_year,
    sum(amount) as sum_profit
from
    (
        select
            n_name as nation,
            extract(year from o_orderdate) as o_year,
            l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount
        from
            part,
            supplier,
            lineitem,
            partsupp,
            orders,
            nation
        where
            s_suppkey = l_suppkey
            and ps_suppkey = l_suppkey
            and ps_partkey = l_partkey
            and p_partkey = l_partkey
            and o_orderkey = l_orderkey
            and s_nationkey = n_nationkey
            and p_name like '%orchid%'
    ) as profit
group by
    nation,
    o_year
order by
    nation,
    o_year desc", "msg_date": "Sun, 14 Jan 2018 12:44:00 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "HDD vs SSD without explanation" }, { "msg_contents": "On Sun, Jan 14, 2018 at 12:44:00PM -0800, Neto pr wrote:\n> Dear all\n> \n> Someone help me analyze the two execution plans below (Explain ANALYZE\n> used), is the query 9 of TPC-H benchmark [1].\n>\n> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n> 15 Krpm AND SSD Sansung EVO 500GB.\n>\n> I think maybe the execution plan is using more write operations, and so the\n> HDD SAS 15Krpm has been faster.\n\nThe query plan is all garbled by mail , could you resend? Or post a link from\nhttps://explain.depesz.com/\n\nTo see if the query is causing many writes (due to dirty pages, sorts, etc),\nrun with explain(analyze,buffers)\n\nBut from what I could tell, your problems are here:\n\n-> Parallel Seq Scan on lineitem (cost=0.00..5861332.93 rows=100005093 width=41) (actual TIME=3.494..842667.110 rows=80004097 loops=3)\nvs\n-> Parallel Seq Scan on lineitem (cost=0.00..5861333.40 rows=100005140 width=41) (actual TIME=41.805..224438.909 rows=80004097 loops=3)\n\n-> Seq Scan on partsupp (cost=0.00..1052983.08 rows=31999708 width=22) (actual TIME=0.033..228828.149 rows=32000000 loops=3)\nvs\n-> Seq Scan on partsupp (cost=0.00..1052934.38 rows=31994838 width=22) (actual TIME=0.037..37865.003 rows=32000000 loops=3)\n\nCan you reproduce the speed difference using dd ?\ntime sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n\nOr: bonnie++ -f -n0\n\nWhat OS/kernel are you using? LVM? filesystem? I/O scheduler? partitions?\nreadahead? blockdev --getra\n\nIf you're running under linux, maybe you can just send the output of:\nfor a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\nor: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n\nJustin\n\n", "msg_date": "Sun, 14 Jan 2018 15:40:48 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" },
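A plan posted to explain.depesz.com needs its line breaks preserved. One minimal way to do that, sketched here with placeholder database and file names (not taken from the thread), is to have psql write the plan straight to a file instead of copying it out of another tool:

psql -X -d tpch -f q9_explain.sql -o q9_plan.txt
# q9_explain.sql would hold: EXPLAIN (ANALYZE, BUFFERS) <the Q9 query>;
# q9_plan.txt then keeps one plan node per line and can be pasted into explain.depesz.com
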
{ "msg_contents": "Thanks for the reply.\nI'll try to upload the execution plan with EXPLAIN (ANALYZE, BUFFERS) to the\nwebsite: https://explain.depesz.com/\n\nI'm running an experiment for scientific research, and this is what I\nfind strange: the HDD performance far outweighs the performance of an SSD.\n\nDo you think that if I run a VACUUM FULL the performance with the\nSSD will be better than with a 15Krpm SAS HDD?\n\nBest Regards\nNeto\n\n2018-01-14 19:40 GMT-02:00 Justin Pryzby <[email protected]>:\n> On Sun, Jan 14, 2018 at 12:44:00PM -0800, Neto pr wrote:\n>> Dear all\n>>\n>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> used), is the query 9 of TPC-H benchmark [1].\n>>\n>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>> 15 Krpm AND SSD Sansung EVO 500GB.\n>>\n>> I think maybe the execution plan is using more write operations, and so the\n>> HDD SAS 15Krpm has been faster.\n>\n> The query plan is all garbled by mail , could you resend? Or post a link from\n> https://explain.depesz.com/\n>\n> To see if the query is causing many writes (due to dirty pages, sorts, etc),\n> run with explain(analyze,buffers)\n>\n> But from what I could tell, your problems are here:\n>\n> -> Parallel Seq Scan on lineitem (cost=0.00..5861332.93 rows=100005093 width=41) (actual TIME=3.494..842667.110 rows=80004097 loops=3)\n> vs\n> -> Parallel Seq Scan on lineitem (cost=0.00..5861333.40 rows=100005140 width=41) (actual TIME=41.805..224438.909 rows=80004097 loops=3)\n>\n> -> Seq Scan on partsupp (cost=0.00..1052983.08 rows=31999708 width=22) (actual TIME=0.033..228828.149 rows=32000000 loops=3)\n> vs\n> -> Seq Scan on partsupp (cost=0.00..1052934.38 rows=31994838 width=22) (actual TIME=0.037..37865.003 rows=32000000 loops=3)\n>\n> Can you reproduce the speed difference using dd ?\n> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>\n> Or: bonnie++ -f -n0\n>\n> What OS/kernel are you using? LVM? filesystem? I/O scheduler? 
partitions?\n> readahead? blockdev --getra\n>\n> If you're running under linux, maybe you can just send the output of:\n> for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n> or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>\n> Justin\n\n", "msg_date": "Sun, 14 Jan 2018 21:59:05 -0200", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-14 13:40 GMT-08:00 Justin Pryzby <[email protected]>:\n> On Sun, Jan 14, 2018 at 12:44:00PM -0800, Neto pr wrote:\n>> Dear all\n>>\n>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> used), is the query 9 of TPC-H benchmark [1].\n>>\n>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>> 15 Krpm AND SSD Sansung EVO 500GB.\n>>\n>> I think maybe the execution plan is using more write operations, and so the\n>> HDD SAS 15Krpm has been faster.\n>\n> The query plan is all garbled by mail , could you resend? Or post a link from\n> https://explain.depesz.com/\n>\n> To see if the query is causing many writes (due to dirty pages, sorts, etc),\n> run with explain(analyze,buffers)\n>\n> But from what I could tell, your problems are here:\n>\n> -> Parallel Seq Scan on lineitem (cost=0.00..5861332.93 rows=100005093 width=41) (actual TIME=3.494..842667.110 rows=80004097 loops=3)\n> vs\n> -> Parallel Seq Scan on lineitem (cost=0.00..5861333.40 rows=100005140 width=41) (actual TIME=41.805..224438.909 rows=80004097 loops=3)\n>\n> -> Seq Scan on partsupp (cost=0.00..1052983.08 rows=31999708 width=22) (actual TIME=0.033..228828.149 rows=32000000 loops=3)\n> vs\n> -> Seq Scan on partsupp (cost=0.00..1052934.38 rows=31994838 width=22) (actual TIME=0.037..37865.003 rows=32000000 loops=3)\n>\n> Can you reproduce the speed difference using dd ?\n> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>\n> Or: bonnie++ -f -n0\n>\n> What OS/kernel are you using? LVM? filesystem? I/O scheduler? partitions?\n> readahead? 
blockdev --getra\n\nOS = Debian 8 64bits - 3.16.0-4\n\nSee below the Disk FileSystem --------------------------------\nroot@hp2ml110deb:/# fdisk -l\nDisk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical/physical): 512 bytes / 512 bytes\nI/O size (minimum/optimal): 512 bytes / 512 bytes\nDisklabel type: gpt\nDisk identifier: 26F5EB21-30DB-44E4-B9E2-E8105846B6C4\n\nDevice Start End Sectors Size Type\n/dev/sda1 2048 1050623 1048576 512M EFI System\n/dev/sda2 1050624 1937274879 1936224256 923.3G Linux filesystem\n/dev/sda3 1937274880 1953523711 16248832 7.8G Linux swap\n\nDisk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors\nUnits: sectors of 1 * 512 = 512 bytes\nSector size (logical/physical): 512 bytes / 512 bytes\nI/O size (minimum/optimal): 512 bytes / 512 bytes\n----------------------------------------------------------------------------\n\nThe DBMS and tablespace of users is installed in /dev/sdb SSD.\n\n> If you're running under linux, maybe you can just send the output of:\n> for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n> or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>\n> Justin\n\n", "msg_date": "Sun, 14 Jan 2018 18:25:40 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-14 15:59 GMT-08:00 Neto pr <[email protected]>:\n> Thanks for the reply.\n> I'll try upload the execution plan with Explain (analyse, buffer) for\n> website: https://explain.depesz.com/\n>\n\nBelow is a new execution plan, with Analyze, BUFFERS. This time,\nwithout changing anything in the configuration of the DBMS, I just\nrebooted the DBMS, the time of 16 minutes was obtained, against the 26\nminutes of another execution. 
But it still has not managed to exceed\nthe execution time in HDD SAS 15Krpm.\nI was not able to upload to the site, because I'm saving the execution\nplan in the database, and when I retrieve it, it loses the line\nbreaks, and the dxxxx site does not allow uploading.\n\n\n------------------- Execution Plan with Buffers executed on SSD\nStores.---------------------------------------------------------------------------------------------\n\nFinalize GroupAggregate (cost=15822228.33..15980046.69 rows=60150\nwidth=66) (actual time=969248.287..973686.679 rows=175 loops=1) Group\nKey: nation.n_name, (date_part(_year_::text,\n(orders.o_orderdate)::timestamp without time zone)) Buffers: shared\nhit=1327602 read=2305013, temp read=1183857 written=1180940 ->\nGather Merge (cost=15822228.33..15977791.06 rows=120300 width=66)\n(actual time=969222.164..973685.582 rows=525 loops=1) Workers\nPlanned: 2 Workers Launched: 2 Buffers: shared\nhit=1327602 read=2305013, temp read=1183857 written=1180940 ->\nPartial GroupAggregate (cost=15821228.31..15962905.44 rows=60150\nwidth=66) (actual time=941985.137..946403.344 rows=175 loops=3)\n Group Key: nation.n_name, (date_part(_year_::text,\n(orders.o_orderdate)::timestamp without time zone))\nBuffers: shared hit=3773802 read=7120852, temp read=3550293\nwritten=3541542 -> Sort (cost=15821228.31..15838806.37\nrows=7031225 width=57) (actual time=941954.595..943119.850\nrows=4344197 loops=3) Sort Key: nation.n_name,\n(date_part(_year_::text, (orders.o_orderdate)::timestamp without time\nzone)) DESC Sort Method: external merge Disk:\n320784kB Buffers: shared hit=3773802 read=7120852,\ntemp read=3550293 written=3541542 -> Hash Join\n(cost=4708859.28..14719466.13 rows=7031225 width=57) (actual\ntime=619996.638..933725.615 rows=4344197 loops=3)\n Hash Cond: (lineitem.l_suppkey = supplier.s_suppkey)\n Buffers: shared hit=3773732 read=7120852, temp read=3220697\nwritten=3211409 -> Hash Join\n(cost=4683017.71..14434606.65 rows=7071075 width=43) (actual\ntime=579893.395..926348.061 rows=4344197 loops=3)\n Hash Cond: (lineitem.l_orderkey = orders.o_orderkey)\n Buffers: shared hit=3758207 read=7108695, temp\nread=3114271 written=3105025 -> Hash\nJoin (cost=1993687.71..11297331.33 rows=7071075 width=47) (actual\ntime=79741.803..805259.856 rows=4344197 loops=3)\n Hash Cond: ((lineitem.l_suppkey = partsupp.ps_suppkey)\nAND (lineitem.l_partkey = partsupp.ps_partkey))\n Buffers: shared hit=1754251 read=5797780, temp\nread=2369849 written=2366741 ->\nHash Join (cost=273201.71..9157213.44 rows=7071075 width=45) (actual\ntime=5363.078..672302.517 rows=4344197 loops=3)\n Hash Cond: (lineitem.l_partkey = part.p_partkey)\n Buffers: shared hit=325918\nread=5027133, temp read=1742658 written=1742616\n -> Parallel Seq Scan on lineitem\n(cost=0.00..5861333.20 rows=100005120 width=41) (actual\ntime=0.129..536226.436 rows=80004097 loops=3)\n Buffers: shared hit=2 read=4861280\n -> Hash (cost=263921.00..263921.00\nrows=565657 width=4) (actual time=5362.100..5362.100 rows=434469\nloops=3) Buckets:\n131072 Batches: 8 Memory Usage: 2933kB\n Buffers: shared hit=325910 read=165853, temp\nwritten=3327 -> Seq\nScan on part (cost=0.00..263921.00 rows=565657 width=4) (actual\ntime=0.025..5279.959 rows=434469 loops=3)\n Filter: ((p_name)::text ~~ _%orchid%_::text)\n Rows Removed by\nFilter: 7565531\nBuffers: shared hit=325910 read=165853\n -> Hash (cost=1052986.00..1052986.00 rows=32000000 width=22)\n(actual time=74231.061..74231.061 rows=32000000 loops=3)\n Buckets: 65536 Batches: 512 Memory\nUsage: 
3941kB Buffers:\nshared hit=1428311 read=770647, temp written=513846\n -> Seq Scan on partsupp\n(cost=0.00..1052986.00 rows=32000000 width=22) (actual\ntime=0.037..66316.652 rows=32000000 loops=3)\n Buffers: shared hit=1428311 read=770647\n -> Hash (cost=1704955.00..1704955.00\nrows=60000000 width=8) (actual time=46310.630..46310.630 rows=60000000\nloops=3) Buckets: 131072\nBatches: 1024 Memory Usage: 3316kB\n Buffers: shared hit=2003950 read=1310915, temp written=613128\n -> Seq Scan on orders\n(cost=0.00..1704955.00 rows=60000000 width=8) (actual\ntime=0.033..34352.493 rows=60000000 loops=3)\n Buffers: shared hit=2003950 read=1310915\n -> Hash (cost=18106.56..18106.56 rows=400000 width=30)\n(actual time=226.360..226.360 rows=400000 loops=3)\n Buckets: 65536 Batches: 8 Memory Usage: 3549kB\n Buffers: shared hit=15437 read=12157, temp\nwritten=6396 -> Hash Join\n(cost=1.56..18106.56 rows=400000 width=30) (actual time=0.037..145.779\nrows=400000 loops=3) Hash Cond:\n(supplier.s_nationkey = nation.n_nationkey)\n Buffers: shared hit=15437 read=12157\n -> Seq Scan on supplier (cost=0.00..13197.00\nrows=400000 width=12) (actual time=0.014..63.768 rows=400000 loops=3)\n Buffers: shared hit=15434\nread=12157 -> Hash\n(cost=1.25..1.25 rows=25 width=30) (actual time=0.015..0.015 rows=25\nloops=3) Buckets: 1024\nBatches: 1 Memory Usage: 10kB\n Buffers: shared hit=3 ->\n Seq Scan on nation (cost=0.00..1.25 rows=25 width=30) (actual\ntime=0.006..0.008 rows=25 loops=3)\n Buffers: shared hit=3Planning time: 16.668 msExecution\ntime: 973799.430 ms\n\n------------------------------------------------------------------------------------------------------------------------------------------------\n\n> I'm make an experiment for a scientific research and this is what I\n> find strange, explaining better, strange HDD performance far outweigh\n> the performance of an SSD.\n>\n> Do you think that if you run a VACUMM FULL the performance with the\n> SSD will be better than a 15Krpm SAS HDD?\n>\n> Best Regards\n> Neto\n> <div id=\"DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2\"><br /> <table\n> style=\"border-top: 1px solid #D3D4DE;\">\n> <tr>\n> <td style=\"width: 55px; padding-top: 18px;\"><a\n> href=\"https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail\"\n> target=\"_blank\"><img\n> src=\"https://ipmcdn.avast.com/images/icons/icon-envelope-tick-round-orange-animated-no-repeat-v1.gif\"\n> alt=\"\" width=\"46\" height=\"29\" style=\"width: 46px; height: 29px;\"\n> /></a></td>\n> <td style=\"width: 470px; padding-top: 17px; color: #41424e;\n> font-size: 13px; font-family: Arial, Helvetica, sans-serif;\n> line-height: 18px;\">Livre de vírus. <a\n> href=\"https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail\"\n> target=\"_blank\" style=\"color: #4453ea;\">www.avast.com</a>. 
</td>\n> </tr>\n> </table>\n> <a href=\"#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2\" width=\"1\" height=\"1\"></a></div>\n>\n> 2018-01-14 19:40 GMT-02:00 Justin Pryzby <[email protected]>:\n>> On Sun, Jan 14, 2018 at 12:44:00PM -0800, Neto pr wrote:\n>>> Dear all\n>>>\n>>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>>> used), is the query 9 of TPC-H benchmark [1].\n>>>\n>>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>>> 15 Krpm AND SSD Sansung EVO 500GB.\n>>>\n>>> I think maybe the execution plan is using more write operations, and so the\n>>> HDD SAS 15Krpm has been faster.\n>>\n>> The query plan is all garbled by mail , could you resend? Or post a link from\n>> https://explain.depesz.com/\n>>\n>> To see if the query is causing many writes (due to dirty pages, sorts, etc),\n>> run with explain(analyze,buffers)\n>>\n>> But from what I could tell, your problems are here:\n>>\n>> -> Parallel Seq Scan on lineitem (cost=0.00..5861332.93 rows=100005093 width=41) (actual TIME=3.494..842667.110 rows=80004097 loops=3)\n>> vs\n>> -> Parallel Seq Scan on lineitem (cost=0.00..5861333.40 rows=100005140 width=41) (actual TIME=41.805..224438.909 rows=80004097 loops=3)\n>>\n>> -> Seq Scan on partsupp (cost=0.00..1052983.08 rows=31999708 width=22) (actual TIME=0.033..228828.149 rows=32000000 loops=3)\n>> vs\n>> -> Seq Scan on partsupp (cost=0.00..1052934.38 rows=31994838 width=22) (actual TIME=0.037..37865.003 rows=32000000 loops=3)\n>>\n>> Can you reproduce the speed difference using dd ?\n>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>\n>> Or: bonnie++ -f -n0\n>>\n>> What OS/kernel are you using? LVM? filesystem? I/O scheduler? partitions?\n>> readahead? blockdev --getra\n>>\n>> If you're running under linux, maybe you can just send the output of:\n>> for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n>> or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>>\n>> Justin\n\n", "msg_date": "Sun, 14 Jan 2018 18:36:02 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n> > The query plan is all garbled by mail , could you resend? Or post a link from\n> > https://explain.depesz.com/\n\nOn Sun, Jan 14, 2018 at 06:36:02PM -0800, Neto pr wrote:\n> I was not able to upload to the site, because I'm saving the execution\n> plan in the database, and when I retrieve it, it loses the line breaks,\n\nThat's why it's an issue for me, too..\n\n> > What OS/kernel are you using? LVM? filesystem? I/O scheduler? partitions?\n> \n> See below the Disk FileSystem --------------------------------\n> root@hp2ml110deb:/# fdisk -l\n> Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\n> \n> Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors\n> Units: sectors of 1 * 512 = 512 bytes\n> Sector size (logical/physical): 512 bytes / 512 bytes\n> I/O size (minimum/optimal): 512 bytes / 512 bytes\n> ----------------------------------------------------------------------------\nWhat about sdb partitions/FS?\n\nOn Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n> The DBMS and tablespace of users is installed in /dev/sdb SSD.\n\nIs that also a temp_tablespace ? 
Or are your hashes spilling to HDD instead ?\n\nGroup Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone))\nBuffers: shared hit=3773802 read=7120852, temp read=3550293 written=3541542\n\nAre your SSD being used for anything else ?\n\nWhat about these?\n\n> > readahead? blockdev --getra\n\n> > If you're running under linux, maybe you can just send the output of:\n> > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n> > or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n\n> > Can you reproduce the speed difference using dd ?\n> > time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n> >\n> > Or: bonnie++ -f -n0\n\nJustin\n\n", "msg_date": "Sun, 14 Jan 2018 21:09:41 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-14 19:09 GMT-08:00 Justin Pryzby <[email protected]>:\n> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n>> > The query plan is all garbled by mail , could you resend? Or post a link from\n>> > https://explain.depesz.com/\n>\n> On Sun, Jan 14, 2018 at 06:36:02PM -0800, Neto pr wrote:\n>> I was not able to upload to the site, because I'm saving the execution\n>> plan in the database, and when I retrieve it, it loses the line breaks,\n>\n> That's why it's an issue for me, too..\n>\n>> > What OS/kernel are you using? LVM? filesystem? I/O scheduler? partitions?\n>>\n>> See below the Disk FileSystem --------------------------------\n>> root@hp2ml110deb:/# fdisk -l\n>> Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\n>>\n>> Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors\n>> Units: sectors of 1 * 512 = 512 bytes\n>> Sector size (logical/physical): 512 bytes / 512 bytes\n>> I/O size (minimum/optimal): 512 bytes / 512 bytes\n>> ----------------------------------------------------------------------------\n> What about sdb partitions/FS?\n\nI used EXT4 filesystem in Debian SO.\n\n>\n> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n>> The DBMS and tablespace of users is installed in /dev/sdb SSD.\n>\n> Is that also a temp_tablespace ? Or are your hashes spilling to HDD instead ?\n>\n\nHow can I find out where my temp_tablesapce is?\nWith the command \\db+ (see below) does not show the location. But the\nDBMS I asked to install inside the SSD, but how can I find out the\nexact location of the temp_tablespace ?\n\n----------------------------------------------------------------------------\ntpch40gnorssd=# \\db+\n List of tablespaces\n Name | Owner | Location | Access\nprivileges | Options | Size | Description\n------------+----------+--------------------------------+-------------------+---------+--------+-------------\n pg_default | postgres | |\n | | 21 MB |\n pg_global | postgres | |\n | | 573 kB |\n tblpgssd | postgres | /media/ssd500gb/dados/pg101ssd |\n | | 206 GB |\n(3 rows)\n------------------------------------------------------------------------------\n\n> Group Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone))\n> Buffers: shared hit=3773802 read=7120852, temp read=3550293 written=3541542\n>\n> Are your SSD being used for anything else ?\n>\n> What about these?\n>\n>> > readahead? 
blockdev --getra\n>\n\nAbout knowing if the SSD is being used by another process, I will\nstill execute the command and send the result.\n\nBut I can say that the SSD is only used by the DBMS.\nExplaining better, My server has an HDD and an SSD. The Debian OS is\ninstalled on the HDD and I installed the DBMS inside the SSD and the\ndata tablespace also inside the SSD .\nThe server is dedicated to the DBMS and when I execute the queries,\nnothing else is executed. I still can not understand how an HDD is\nfaster than an SSD.\nI ran queries again on the SSD and the results were not good see:\n\nexecution 1- 00:16:13\nexecution 2- 00:25:30\nexecution 3- 00:28:09\nexecution 4- 00:24:33\nexecution 5- 00:24:38\n\nRegards\nNeto\n\n\n\n\n>> > If you're running under linux, maybe you can just send the output of:\n>> > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n>> > or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>\n>> > Can you reproduce the speed difference using dd ?\n>> > time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>> >\n>> > Or: bonnie++ -f -n0\n>\n> Justin\n\n", "msg_date": "Mon, 15 Jan 2018 03:04:09 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 3:04 GMT-08:00 Neto pr <[email protected]>:\n> 2018-01-14 19:09 GMT-08:00 Justin Pryzby <[email protected]>:\n>> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n>>> > The query plan is all garbled by mail , could you resend? Or post a\nlink from\n>>> > https://explain.depesz.com/\n>>\n>> On Sun, Jan 14, 2018 at 06:36:02PM -0800, Neto pr wrote:\n>>> I was not able to upload to the site, because I'm saving the execution\n>>> plan in the database, and when I retrieve it, it loses the line breaks,\n>>\n>> That's why it's an issue for me, too..\n>>\n>>> > What OS/kernel are you using? LVM? filesystem? I/O scheduler?\n partitions?\n>>>\n>>> See below the Disk FileSystem --------------------------------\n>>> root@hp2ml110deb:/# fdisk -l\n>>> Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors\n>>>\n>>> Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors\n>>> Units: sectors of 1 * 512 = 512 bytes\n>>> Sector size (logical/physical): 512 bytes / 512 bytes\n>>> I/O size (minimum/optimal): 512 bytes / 512 bytes\n>>>\n----------------------------------------------------------------------------\n>> What about sdb partitions/FS?\n>\n> I used EXT4 filesystem in Debian SO.\n>\n>>\n>> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:\n>>> The DBMS and tablespace of users is installed in /dev/sdb SSD.\n>>\n>> Is that also a temp_tablespace ? Or are your hashes spilling to HDD\ninstead ?\n>>\n>\n> How can I find out where my temp_tablesapce is?\n> With the command \\db+ (see below) does not show the location. 
But the\n> DBMS I asked to install inside the SSD, but how can I find out the\n> exact location of the temp_tablespace ?\n>\n>\n----------------------------------------------------------------------------\n> tpch40gnorssd=# \\db+\n> List of tablespaces\n> Name | Owner | Location | Access\n> privileges | Options | Size | Description\n>\n------------+----------+--------------------------------+-------------------+---------+--------+-------------\n> pg_default | postgres | |\n> | | 21 MB |\n> pg_global | postgres | |\n> | | 573 kB |\n> tblpgssd | postgres | /media/ssd500gb/dados/pg101ssd |\n> | | 206 GB |\n> (3 rows)\n>\n------------------------------------------------------------------------------\n>\n\nI checked that the temporary tablespace pg_default is on the SSD, because\nwhen running show temp_tablespaces in psql returns empty, and by the\ndocumentation,\nhttps://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-TEMP-TABLESPACES\nwill be in the default directory, where I installed the DBMS in:\n/media/ssd500gb/opt/pgv101norssd/data.\n\nThe servers where I executed the query with HDD SAS is not the same one\nwhere I executed the query with SSD, but they are identical Server (HP\nProliant ML110), it has the same model and configuration, only the disks\nthat are not the same, see:\n\nServer 1\n- HDD SAS 15 Krpm - 320 GB (Location where O.S. Debian and Postgresql are\ninstalled)\n\nServer 2\n- Samsung Evo SSD 500 GB (Location where Postgresql is Installed)\n- HDD Sata 7500 Krpm - 1TB (Location where O.S Debian is installed)\n\n\n>> Group Key: nation.n_name, (date_part(_year_::text,\n(orders.o_orderdate)::timestamp without time zone))\n>> Buffers: shared hit=3773802 read=7120852, temp read=3550293\nwritten=3541542\n>>\n>> Are your SSD being used for anything else ?\n>>\n>> What about these?\n>>\n>>> > readahead? blockdev --getra\n>>\n>\n> About knowing if the SSD is being used by another process, I will\n> still execute the command and send the result.\n>\n> But I can say that the SSD is only used by the DBMS.\n> Explaining better, My server has an HDD and an SSD. The Debian OS is\n> installed on the HDD and I installed the DBMS inside the SSD and the\n> data tablespace also inside the SSD .\n> The server is dedicated to the DBMS and when I execute the queries,\n> nothing else is executed. I still can not understand how an HDD is\n> faster than an SSD.\n> I ran queries again on the SSD and the results were not good see:\n>\n> execution 1- 00:16:13\n> execution 2- 00:25:30\n> execution 3- 00:28:09\n> execution 4- 00:24:33\n> execution 5- 00:24:38\n>\n> Regards\n> Neto\n>\n>\n>\n>\n>>> > If you're running under linux, maybe you can just send the output of:\n>>> > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n>>> > or: tail\n/sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>>\n>>> > Can you reproduce the speed difference using dd ?\n>>> > time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\nskip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>> >\n>>> > Or: bonnie++ -f -n0\n>>\n>> Justin\n\n2018-01-15 3:04 GMT-08:00 Neto pr <[email protected]>:> 2018-01-14 19:09 GMT-08:00 Justin Pryzby <[email protected]>:>> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:>>> > The query plan is all garbled by mail , could you resend?  
Or post a link from>>> > https://explain.depesz.com/>>>> On Sun, Jan 14, 2018 at 06:36:02PM -0800, Neto pr wrote:>>> I was not able to upload to the site, because I'm saving the execution>>> plan in the database, and when I retrieve it, it loses the line breaks,>>>> That's why it's an issue for me, too..>>>>> > What OS/kernel are you using?  LVM?  filesystem?  I/O scheduler?  partitions?>>>>>> See below the Disk FileSystem -------------------------------->>> root@hp2ml110deb:/# fdisk -l>>> Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors>>>>>> Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors>>> Units: sectors of 1 * 512 = 512 bytes>>> Sector size (logical/physical): 512 bytes / 512 bytes>>> I/O size (minimum/optimal): 512 bytes / 512 bytes>>> ---------------------------------------------------------------------------->> What about sdb partitions/FS?>> I used EXT4 filesystem in Debian SO.>>>>> On Sun, Jan 14, 2018 at 06:25:40PM -0800, Neto pr wrote:>>> The DBMS and tablespace of users is installed in /dev/sdb  SSD.>>>> Is that also a temp_tablespace ?  Or are your hashes spilling to HDD instead ?>>>> How can I find out where my temp_tablesapce is?> With the command \\db+ (see below) does not show the location. But the> DBMS I asked to install inside the SSD, but how can I find out the> exact location of the temp_tablespace ?>> ----------------------------------------------------------------------------> tpch40gnorssd=# \\db+>                                              List of tablespaces>     Name    |  Owner   |            Location            | Access> privileges | Options |  Size  | Description> ------------+----------+--------------------------------+-------------------+---------+--------+------------->  pg_default | postgres |                                |>      |         | 21 MB  |>  pg_global  | postgres |                                |>      |         | 573 kB |>  tblpgssd   | postgres | /media/ssd500gb/dados/pg101ssd |>      |         | 206 GB |> (3 rows)> ------------------------------------------------------------------------------>I checked that the temporary tablespace pg_default is on the SSD, because when running show temp_tablespaces in psql returns empty, and by the documentation,https://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-TEMP-TABLESPACES will be in the default directory, where I installed the DBMS in: /media/ssd500gb/opt/pgv101norssd/data.The servers where I executed the query with HDD SAS is not the same one where I executed the query with SSD, but they are identical Server (HP Proliant ML110), it has the same model and configuration, only the disks that are not the same, see:Server 1- HDD SAS 15 Krpm - 320 GB (Location where O.S. Debian and Postgresql are installed)Server 2- Samsung Evo SSD 500 GB (Location where Postgresql is Installed)- HDD Sata 7500 Krpm - 1TB (Location where O.S Debian is installed)>> Group Key: nation.n_name, (date_part(_year_::text, (orders.o_orderdate)::timestamp without time zone))>> Buffers: shared hit=3773802 read=7120852, temp read=3550293 written=3541542>>>> Are your SSD being used for anything else ?>>>> What about these?>>>>> > readahead?  blockdev --getra>>>> About knowing if the SSD is being used by another process, I will> still execute the command and send the result.>> But I can say that the SSD is only used by the DBMS.> Explaining better, My server has an HDD and an SSD. 
The Debian OS is> installed on the HDD and I installed the DBMS inside the SSD and the> data tablespace also inside the SSD .> The server is dedicated to the DBMS and when I execute the queries,> nothing else is executed. I still can not understand how an HDD is> faster than an SSD.> I ran queries again on the SSD and the results were not good see:>> execution 1- 00:16:13> execution 2- 00:25:30> execution 3- 00:28:09> execution 4- 00:24:33> execution 5- 00:24:38>> Regards> Neto>>>>>>> > If you're running under linux, maybe you can just send the output of:>>> > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done>>> > or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}>>>>> > Can you reproduce the speed difference using dd ?>>> > time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size>>> >>>> > Or: bonnie++ -f -n0>>>> Justin", "msg_date": "Mon, 15 Jan 2018 04:35:39 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "\nHello Neto\n\nAm 14.01.2018 um 21:44 schrieb Neto pr:\n> Dear all\n>\n> Someone help me analyze the two execution plans below (Explain ANALYZE \n> used), is the  query 9 of TPC-H benchmark [1].\n> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS \n> 320GB 15 Krpm AND SSD Sansung EVO 500GB.\n> My DBMS parameters presents in postgresql.conf is default, but in SSD \n> I have changed random_page_cost = 1.0.\n>\nyou are comparing a SAS Drive against a SATA SSD. Their interfaces serve \na completely different bandwidth.\nWhile a SAS-3 device does 12 Gbit/s  SATA-3 device  is only able to \ntransfer 6 Gbit/s  (a current SAS-4 reaches 22.5 Gbit/s)\nDo a short research on SAS vs SATA and then use a SAS SSD for comparison :)\n\nregards\nGeorg\n\n", "msg_date": "Mon, 15 Jan 2018 19:32:27 +0100", "msg_from": "\"Georg H.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "we've had the same experience here - with older SATA 2 (3Gbps) - in spite\nof SSD having no spin latency, the bus speed itself was half of the SAS-2\n(6Gbps) we were using at the time which negated SSD perf in this area. HDD\nwas about the same perf as SSD for us.\n\nBiran\n\nOn Mon, Jan 15, 2018 at 1:32 PM, Georg H. <[email protected]> wrote:\n\n>\n> Hello Neto\n>\n> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>\n>> Dear all\n>>\n>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> used), is the query 9 of TPC-H benchmark [1].\n>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>> 15 Krpm AND SSD Sansung EVO 500GB.\n>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n>> have changed random_page_cost = 1.0.\n>>\n>> you are comparing a SAS Drive against a SATA SSD. Their interfaces serve\n> a completely different bandwidth.\n> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to\n> transfer 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n> Do a short research on SAS vs SATA and then use a SAS SSD for comparison :)\n>\n> regards\n> Georg\n>\n>\n\nwe've had the same experience here - with older SATA 2 (3Gbps) - in spite of SSD having no spin latency, the bus speed itself was half of the SAS-2 (6Gbps) we were using at the time which negated SSD perf in this area. HDD was about the same perf as SSD for us. 
BiranOn Mon, Jan 15, 2018 at 1:32 PM, Georg H. <[email protected]> wrote:\nHello Neto\n\nAm 14.01.2018 um 21:44 schrieb Neto pr:\n\nDear all\n\nSomeone help me analyze the two execution plans below (Explain ANALYZE used), is the  query 9 of TPC-H benchmark [1].\nI'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB 15 Krpm AND SSD Sansung EVO 500GB.\nMy DBMS parameters presents in postgresql.conf is default, but in SSD I have changed random_page_cost = 1.0.\n\n\nyou are comparing a SAS Drive against a SATA SSD. Their interfaces serve a completely different bandwidth.\nWhile a SAS-3 device does 12 Gbit/s  SATA-3 device  is only able to transfer 6 Gbit/s  (a current SAS-4 reaches 22.5 Gbit/s)\nDo a short research on SAS vs SATA and then use a SAS SSD for comparison :)\n\nregards\nGeorg", "msg_date": "Mon, 15 Jan 2018 14:46:56 -0500", "msg_from": "Brian Busch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\n\n>\n> Hello Neto\n>\n> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>\n>> Dear all\n>>\n>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> used), is the query 9 of TPC-H benchmark [1].\n>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>> 15 Krpm AND SSD Sansung EVO 500GB.\n>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n>> have changed random_page_cost = 1.0.\n>>\n>> you are comparing a SAS Drive against a SATA SSD. Their interfaces serve\n> a completely different bandwidth.\n> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to\n> transfer 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n> Do a short research on SAS vs SATA and then use a SAS SSD for comparison :)\n>\n\nThe query being all read operations both drives should perform somewhat\nsimilarly. Therefore, either the SAS drive has some special sauce to it\n(a.k.a very fast built-in cache) or there is something else going on these\nsystems. Otherwise he shouldn't be stressing the 6 Gbit/s interface limit\nwith a single drive, be that the SATA or the SAS drive.\n\nNeto, you have been suggested to provide a number of command outputs to\nknow more about your system. Testing the raw read throughput of both your\ndrives should be first on your list.\n\nCheers.\n\n2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\nHello Neto\n\nAm 14.01.2018 um 21:44 schrieb Neto pr:\n\nDear all\n\nSomeone help me analyze the two execution plans below (Explain ANALYZE used), is the  query 9 of TPC-H benchmark [1].\nI'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB 15 Krpm AND SSD Sansung EVO 500GB.\nMy DBMS parameters presents in postgresql.conf is default, but in SSD I have changed random_page_cost = 1.0.\n\n\nyou are comparing a SAS Drive against a SATA SSD. Their interfaces serve a completely different bandwidth.\nWhile a SAS-3 device does 12 Gbit/s  SATA-3 device  is only able to transfer 6 Gbit/s  (a current SAS-4 reaches 22.5 Gbit/s)\nDo a short research on SAS vs SATA and then use a SAS SSD for comparison :)The query being all read operations both drives should perform somewhat similarly. Therefore, either the SAS drive has some special sauce to it (a.k.a very fast built-in cache) or there is something else going on these systems. 
Otherwise he shouldn't be stressing the 6 Gbit/s interface limit with a single drive, be that the SATA or the SAS drive.Neto, you have been suggested to provide a number of command outputs to know more about your system. Testing the raw read throughput of both your drives should be first on your list.Cheers.", "msg_date": "Mon, 15 Jan 2018 16:55:25 -0300", "msg_from": "Fernando Hevia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "Hi Georg,\nYour answer I believe has revealed the real problem.\nI looked at the specification of my SATA SSD, and from my SAS HDD, I\nsaw that the SAS has 12 Gb/s versus 6 Gb/s from the SSD\n\nSSD: Samsung 500 GB SATA III 6Gb/s - Model: 850 Evo\nhttp://www.samsung.com/semiconductor/minisite/ssd/product/consumer/850evo/\n\nHDD: HPE 300GB 12G SAS Part-Number: 737261-B21\nhttps://h20195.www2.hpe.com/v2/GetPDF.aspx%2Fc04111744.pdf\n\nI saw that the SAS band is double, and because of that reason the\ndifference in performance occurred.\n\nAnother question, if I compare the disk below HDD SAS that has a\ntransfer rate of 6Gb/s equal to the SSD SATA 6Gb/s, do you think the\nSSD would be more agile in this case?\nHDD: HP 450GB 6G SAS 15K rpm LFF (3.5-inch) Part-Number: 652615-B21\n\nbest Regards\nNeto\n\n2018-01-15 16:32 GMT-02:00 Georg H. <[email protected]>:\n>\n> Hello Neto\n>\n> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>>\n>> Dear all\n>>\n>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> used), is the query 9 of TPC-H benchmark [1].\n>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>> 15 Krpm AND SSD Sansung EVO 500GB.\n>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n>> have changed random_page_cost = 1.0.\n>>\n> you are comparing a SAS Drive against a SATA SSD. Their interfaces serve a\n> completely different bandwidth.\n> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to transfer\n> 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n> Do a short research on SAS vs SATA and then use a SAS SSD for comparison :)\n>\n> regards\n> Georg\n>\n\n", "msg_date": "Mon, 15 Jan 2018 21:10:17 -0200", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 17:55 GMT-02:00 Fernando Hevia <[email protected]>:\n>\n>\n> 2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\n>>\n>>\n>> Hello Neto\n>>\n>> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>>>\n>>> Dear all\n>>>\n>>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>>> used), is the query 9 of TPC-H benchmark [1].\n>>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>>> 15 Krpm AND SSD Sansung EVO 500GB.\n>>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n>>> have changed random_page_cost = 1.0.\n>>>\n>> you are comparing a SAS Drive against a SATA SSD. Their interfaces serve a\n>> completely different bandwidth.\n>> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to\n>> transfer 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n>> Do a short research on SAS vs SATA and then use a SAS SSD for comparison\n>> :)\n>\n>\n> The query being all read operations both drives should perform somewhat\n> similarly. Therefore, either the SAS drive has some special sauce to it\n> (a.k.a very fast built-in cache) or there is something else going on these\n> systems. 
Otherwise he shouldn't be stressing the 6 Gbit/s interface limit\n> with a single drive, be that the SATA or the SAS drive.\n>\n> Neto, you have been suggested to provide a number of command outputs to know\n> more about your system. Testing the raw read throughput of both your drives\n> should be first on your list.\n>\n\n\nGuys, sorry for the Top Post, I forgot ....\n\nFernando, I think the difference of 6 Gb/s to 12 Gb/s from SAS is what\ncaused the difference in query execution time.\nBecause looking at the execution plans and the cost estimate, I did\nnot see many differences, in methods of access among other things.\nRegarding the query, none of them use indexes, since I did a first\ntest without indexes.\nDo you think that if I compare the disk below HDD SAS that has a\ntransfer rate of 6Gb/s equal to the SSD SATA 6Gb/s, do you think the\nSSD would be more agile in this case?\n\nHDD: HP 450GB 6G SAS 15K rpm LFF (3.5-inch) Part-Number: 652615-B21\n\nNeto\n\n> Cheers.\n>\n>\n>\n>\n\n", "msg_date": "Mon, 15 Jan 2018 21:25:14 -0200", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 20:25 GMT-03:00 Neto pr <[email protected]>:\n\n> 2018-01-15 17:55 GMT-02:00 Fernando Hevia <[email protected]>:\n> >\n> >\n> > 2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\n> >>\n> >>\n> >> Hello Neto\n> >>\n> >> Am 14.01.2018 um 21:44 schrieb Neto pr:\n> >>>\n> >>> Dear all\n> >>>\n> >>> Someone help me analyze the two execution plans below (Explain ANALYZE\n> >>> used), is the query 9 of TPC-H benchmark [1].\n> >>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS\n> 320GB\n> >>> 15 Krpm AND SSD Sansung EVO 500GB.\n> >>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n> >>> have changed random_page_cost = 1.0.\n> >>>\n> >> you are comparing a SAS Drive against a SATA SSD. Their interfaces\n> serve a\n> >> completely different bandwidth.\n> >> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to\n> >> transfer 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n> >> Do a short research on SAS vs SATA and then use a SAS SSD for comparison\n> >> :)\n> >\n> >\n> > The query being all read operations both drives should perform somewhat\n> > similarly. Therefore, either the SAS drive has some special sauce to it\n> > (a.k.a very fast built-in cache) or there is something else going on\n> these\n> > systems. Otherwise he shouldn't be stressing the 6 Gbit/s interface limit\n> > with a single drive, be that the SATA or the SAS drive.\n> >\n> > Neto, you have been suggested to provide a number of command outputs to\n> know\n> > more about your system. 
Testing the raw read throughput of both your\n> drives\n> > should be first on your list.\n> >\n>\n>\n> Guys, sorry for the Top Post, I forgot ....\n>\n> Fernando, I think the difference of 6 Gb/s to 12 Gb/s from SAS is what\n> caused the difference in query execution time.\n> Because looking at the execution plans and the cost estimate, I did\n> not see many differences, in methods of access among other things.\n> Regarding the query, none of them use indexes, since I did a first\n> test without indexes.\n> Do you think that if I compare the disk below HDD SAS that has a\n> transfer rate of 6Gb/s equal to the SSD SATA 6Gb/s, do you think the\n> SSD would be more agile in this case?\n>\n> HDD: HP 450GB 6G SAS 15K rpm LFF (3.5-inch) Part-Number: 652615-B21\n>\n> Neto\n>\n\nThe 6 Gb/s interface is capable of a maximum throughput of around 600 Mb/s.\nNone of your drives can achieve that so I don't think you are limited to\nthe interface speed. The 12 Gb/s interface speed advantage kicks in when\nthere are several drives installed and it won't make a diference in a\nsingle drive or even a two drive system.\n\nBut don't take my word for it. Test your drives throughput with the command\nJustin suggested so you know exactly what each drive is capable of:\n\nCan you reproduce the speed difference using dd ?\n> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n\n\nWhile common sense says SSD drive should outperform the mechanical one,\nyour test scenario (large volume sequential reads) evens out the field a\nlot. Still I would have expected somewhat similar results in the outcome,\nso yes, it is weird that the SAS drive doubles the SSD performance. That is\nwhy I think there must be something else going on during your tests on the\nSSD server. It can also be that the SSD isn't working properly or you are\nrunning an suboptimal OS+server+controller configuration for the drive.\n\n2018-01-15 20:25 GMT-03:00 Neto pr <[email protected]>:2018-01-15 17:55 GMT-02:00 Fernando Hevia <[email protected]>:\n>\n>\n> 2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\n>>\n>>\n>> Hello Neto\n>>\n>> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>>>\n>>> Dear all\n>>>\n>>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>>> used), is the  query 9 of TPC-H benchmark [1].\n>>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS 320GB\n>>> 15 Krpm AND SSD Sansung EVO 500GB.\n>>> My DBMS parameters presents in postgresql.conf is default, but in SSD I\n>>> have changed random_page_cost = 1.0.\n>>>\n>> you are comparing a SAS Drive against a SATA SSD. Their interfaces serve a\n>> completely different bandwidth.\n>> While a SAS-3 device does 12 Gbit/s  SATA-3 device  is only able to\n>> transfer 6 Gbit/s  (a current SAS-4 reaches 22.5 Gbit/s)\n>> Do a short research on SAS vs SATA and then use a SAS SSD for comparison\n>> :)\n>\n>\n> The query being all read operations both drives should perform somewhat\n> similarly. Therefore, either the SAS drive has some special sauce to it\n> (a.k.a very fast built-in cache) or there is something else going on these\n> systems. Otherwise he shouldn't be stressing the 6 Gbit/s interface limit\n> with a single drive, be that the SATA or the SAS drive.\n>\n> Neto, you have been suggested to provide a number of command outputs to know\n> more about your system. 
Testing the raw read throughput of both your drives\n> should be first on your list.\n>\n\n\nGuys, sorry for the Top Post, I forgot ....\n\nFernando, I think the difference of 6 Gb/s to 12 Gb/s from SAS is what\ncaused the difference in query execution time.\nBecause looking at the execution plans and the cost estimate, I did\nnot see many differences, in methods of access among other things.\nRegarding the query, none of them use indexes, since I did a first\ntest without indexes.\nDo you think that if I compare the disk below HDD SAS that has a\ntransfer rate of 6Gb/s equal to the SSD SATA 6Gb/s, do you think the\nSSD would be more agile in this case?\n\nHDD: HP 450GB 6G SAS 15K rpm LFF (3.5-inch) Part-Number: 652615-B21\n\nNetoThe 6 Gb/s interface is capable of a maximum throughput of around 600 Mb/s. None of your drives can achieve that so I don't think you are limited to the interface speed. The 12 Gb/s interface speed advantage kicks in when there are several drives installed and it won't make a diference in a single drive or even a two drive system.But don't take my word for it. Test your drives throughput with the command Justin suggested so you know exactly what each drive is capable of:Can you reproduce the speed difference using dd ?time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_sizeWhile common sense says SSD drive should outperform the mechanical one, your test scenario (large volume sequential reads) evens out the field a lot. Still I would have expected somewhat similar results in the outcome, so yes, it is weird that the SAS drive doubles the SSD performance. That is why I think there must be something else going on during your tests on the SSD server. It can also be that the SSD isn't working properly or you are running an suboptimal OS+server+controller configuration for the drive.", "msg_date": "Mon, 15 Jan 2018 21:18:30 -0300", "msg_from": "Fernando Hevia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 16:18 GMT-08:00 Fernando Hevia <[email protected]>:\n>\n>\n> 2018-01-15 20:25 GMT-03:00 Neto pr <[email protected]>:\n>>\n>> 2018-01-15 17:55 GMT-02:00 Fernando Hevia <[email protected]>:\n>> >\n>> >\n>> > 2018-01-15 15:32 GMT-03:00 Georg H. <[email protected]>:\n>> >>\n>> >>\n>> >> Hello Neto\n>> >>\n>> >> Am 14.01.2018 um 21:44 schrieb Neto pr:\n>> >>>\n>> >>> Dear all\n>> >>>\n>> >>> Someone help me analyze the two execution plans below (Explain ANALYZE\n>> >>> used), is the query 9 of TPC-H benchmark [1].\n>> >>> I'm using a server HP Intel Xeon 2.8GHz/4-core - Memory 8GB HDD SAS\n>> >>> 320GB\n>> >>> 15 Krpm AND SSD Sansung EVO 500GB.\n>> >>> My DBMS parameters presents in postgresql.conf is default, but in SSD\n>> >>> I\n>> >>> have changed random_page_cost = 1.0.\n>> >>>\n>> >> you are comparing a SAS Drive against a SATA SSD. Their interfaces\n>> >> serve a\n>> >> completely different bandwidth.\n>> >> While a SAS-3 device does 12 Gbit/s SATA-3 device is only able to\n>> >> transfer 6 Gbit/s (a current SAS-4 reaches 22.5 Gbit/s)\n>> >> Do a short research on SAS vs SATA and then use a SAS SSD for\n>> >> comparison\n>> >> :)\n>> >\n>> >\n>> > The query being all read operations both drives should perform somewhat\n>> > similarly. Therefore, either the SAS drive has some special sauce to it\n>> > (a.k.a very fast built-in cache) or there is something else going on\n>> > these\n>> > systems. 
Otherwise he shouldn't be stressing the 6 Gbit/s interface\n>> > limit\n>> > with a single drive, be that the SATA or the SAS drive.\n>> >\n>> > Neto, you have been suggested to provide a number of command outputs to\n>> > know\n>> > more about your system. Testing the raw read throughput of both your\n>> > drives\n>> > should be first on your list.\n>> >\n>>\n>>\n>> Guys, sorry for the Top Post, I forgot ....\n>>\n>> Fernando, I think the difference of 6 Gb/s to 12 Gb/s from SAS is what\n>> caused the difference in query execution time.\n>> Because looking at the execution plans and the cost estimate, I did\n>> not see many differences, in methods of access among other things.\n>> Regarding the query, none of them use indexes, since I did a first\n>> test without indexes.\n>> Do you think that if I compare the disk below HDD SAS that has a\n>> transfer rate of 6Gb/s equal to the SSD SATA 6Gb/s, do you think the\n>> SSD would be more agile in this case?\n>>\n>> HDD: HP 450GB 6G SAS 15K rpm LFF (3.5-inch) Part-Number: 652615-B21\n>>\n>> Neto\n>\n>\n> The 6 Gb/s interface is capable of a maximum throughput of around 600 Mb/s.\n> None of your drives can achieve that so I don't think you are limited to the\n> interface speed. The 12 Gb/s interface speed advantage kicks in when there\n> are several drives installed and it won't make a diference in a single drive\n> or even a two drive system.\n>\n> But don't take my word for it. Test your drives throughput with the command\n> Justin suggested so you know exactly what each drive is capable of:\n>\n>> Can you reproduce the speed difference using dd ?\n>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>\n>\n> While common sense says SSD drive should outperform the mechanical one, your\n> test scenario (large volume sequential reads) evens out the field a lot.\n> Still I would have expected somewhat similar results in the outcome, so yes,\n> it is weird that the SAS drive doubles the SSD performance. That is why I\n> think there must be something else going on during your tests on the SSD\n> server. It can also be that the SSD isn't working properly or you are\n> running an suboptimal OS+server+controller configuration for the drive.\n\nOk.\n\nCan you help me to analyze the output of the command: dd if=/dev/sdX\nof=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\noptimal_io_size\nI put a heavy query running in the DBMS and ran the time sudo command\n... three times for each environment (SAS HDD and SATA SSD), see\nbelow that the SSD had 412,325 and 120 MB/s\nThe HDD SAS had 183,176 and 183 MB/s ... strange that in the end the\nSAS HDD can execute the query faster ... 
does it have something else\nto analyze in the output below?\n\n-------============ SAS HDD 320 Gb 12 Gb/s ==========--------------\nroot@deb:/etc# time sudo dd if=/dev/sda2 of=/dev/null bs=1M count=32K\nskip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 188.01 s, 183 MB/s\n\nreal 3m8.473s\nuser 0m0.076s\nsys 0m23.628s\nroot@deb:/etc# time sudo dd if=/dev/sda2 of=/dev/null bs=1M count=32K\nskip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 195.582 s, 176 MB/s\n\nreal 3m16.304s\nuser 0m0.056s\nsys 0m19.632s\nroot@deb:/etc# time sudo dd if=/dev/sda2 of=/dev/null bs=1M count=32K\nskip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 187.822 s, 183 MB/s\n\nreal 3m8.457s\nuser 0m0.032s\nsys 0m20.668s\nroot@deb:/etc#\n\n-------============ SATA SSD 500 Gb 6 Gb/s =========----------------\nroot@hp2ml110deb:/etc/postgresql/10# time sudo dd if=/dev/sdb\nof=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\noptimal_io_size\n\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 83.4281 s, 412 MB/s\n\nreal 1m23.693s\nuser 0m0.056s\nsys 0m19.300s\n\nroot@hp2ml110deb:/etc/postgresql/10# time sudo dd if=/dev/sdb\nof=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\noptimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 105.88 s, 325 MB/s\n\nreal 1m46.301s\nuser 0m0.020s\nsys 0m14.676s\n\nroot@hp2ml110deb:/etc/postgresql/10# time sudo dd if=/dev/sdb\nof=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\noptimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 285.959 s, 120 MB/s\n\nreal 4m46.283s\nuser 0m0.036s\nsys 0m15.444s\n\n------------------------------------- END -----------------------------\n\n\n>\n\n", "msg_date": "Mon, 15 Jan 2018 17:19:59 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "On Mon, Jan 15, 2018 at 05:19:59PM -0800, Neto pr wrote:\n> >> Can you reproduce the speed difference using dd ?\n> >> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n> >> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n> >\n> > Still I would have expected somewhat similar results in the outcome, so yes,\n> > it is weird that the SAS drive doubles the SSD performance. That is why I\n> > think there must be something else going on during your tests on the SSD\n> > server. It can also be that the SSD isn't working properly or you are\n> > running an suboptimal OS+server+controller configuration for the drive.\n> \n> Ok.\n> \n> Can you help me to analyze the output of the command: dd if=/dev/sdX\n> of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\n> optimal_io_size\n\nYou should run the \"dd\" without the DB or anything else using the drive. That\ngets peformance of the drive, without the DB.\n\nYou should probably rerun the \"dd\" command using /dev/sdb1 if there's an\npartition table on top (??).\n\nI'm still wondering about these:\n\nOn Sun, Jan 14, 2018 at 09:09:41PM -0600, Justin Pryzby wrote:\n> What about sdb partitions/FS?\n\n> > > readahead? 
blockdev --getra\n> \n> > > If you're running under linux, maybe you can just send the output of:\n> > > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n> > > or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n\nJustin\n\n", "msg_date": "Mon, 15 Jan 2018 19:58:25 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 17:58 GMT-08:00 Justin Pryzby <[email protected]>:\n> On Mon, Jan 15, 2018 at 05:19:59PM -0800, Neto pr wrote:\n>> >> Can you reproduce the speed difference using dd ?\n>> >> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>> >> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>> >\n>> > Still I would have expected somewhat similar results in the outcome, so yes,\n>> > it is weird that the SAS drive doubles the SSD performance. That is why I\n>> > think there must be something else going on during your tests on the SSD\n>> > server. It can also be that the SSD isn't working properly or you are\n>> > running an suboptimal OS+server+controller configuration for the drive.\n>>\n>> Ok.\n>>\n>> Can you help me to analyze the output of the command: dd if=/dev/sdX\n>> of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32)) # set bs to\n>> optimal_io_size\n>\n> You should run the \"dd\" without the DB or anything else using the drive. That\n> gets peformance of the drive, without the DB.\n\nOh important observation,..\n\n>\n> You should probably rerun the \"dd\" command using /dev/sdb1 if there's an\n> partition table on top (??).\n>\n> I'm still wondering about these:\n\nSee Below:\n------------========= SSD SATA 500GB 6 Gb/s\n=======------------------------------\nroot@hp2ml110deb:/etc# time sudo dd if=/dev/sdb of=/dev/null bs=1M\ncount=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 71.0047 s, 484 MB/s\n\nreal 1m11.109s\nuser 0m0.008s\nsys 0m16.584s\nroot@hp2ml110deb:/etc# time sudo dd if=/dev/sdb of=/dev/null bs=1M\ncount=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 70.937 s, 484 MB/s\n\nreal 1m11.089s\nuser 0m0.012s\nsys 0m16.312s\nroot@hp2ml110deb:/etc#\n\n\n------------========= HDD SAS 300GB 12 Gb/s\n=======------------------------------\nroot@deb:/home/user1# time sudo dd if=/dev/sda2 of=/dev/null bs=1M\ncount=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 147.232 s, 233 MB/s\n\nreal 2m27.277s\nuser 0m0.036s\nsys 0m23.096s\nroot@deb:/home/user1#\nroot@deb:/home/user1# time sudo dd if=/dev/sda2 of=/dev/null bs=1M\ncount=32K skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n32768+0 records in\n32768+0 records out\n34359738368 bytes (34 GB) copied, 153.698 s, 224 MB/s\n\nreal 2m33.766s\nuser 0m0.032s\nsys 0m22.812s\nroot@deb:/home/user1#\n--------------------------------------------- END\n---------------------------------------------------\n\nI had not spoken, but my SAS HDD is connected to the HBA Controler,\nthrough a SATA adapter, because the cable kit I would have to use and\nit would be correct, was no available at the supplier, so it sent the\nSAS HDD with a SATA adapter. 
I found it strange that the speed of SAS\nwas below the SSD, and even then it can execute the query much faster.\n\n\n\n>\n> On Sun, Jan 14, 2018 at 09:09:41PM -0600, Justin Pryzby wrote:\n>> What about sdb partitions/FS?\n>\n>> > > readahead? blockdev --getra\n>>\n>> > > If you're running under linux, maybe you can just send the output of:\n>> > > for a in /sys/block/sdX/queue/*; do echo \"$a `cat $a`\"; done\n>> > > or: tail /sys/block/sdX/queue/{minimum_io_size,optimal_io_size,read_ahead_kb,scheduler,rotational,max_sectors_kb,logical_block_size,physical_block_size}\n>\n> Justin\n\n", "msg_date": "Mon, 15 Jan 2018 18:25:18 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "On 16/01/18 13:18, Fernando Hevia wrote:\n\n>\n>\n>\n> The 6 Gb/s interface is capable of a maximum throughput of around 600 \n> Mb/s. None of your drives can achieve that so I don't think you are \n> limited to the interface speed. The 12 Gb/s interface speed advantage \n> kicks in when there are several drives installed and it won't make a \n> diference in a single drive or even a two drive system.\n>\n> But don't take my word for it. Test your drives throughput with the \n> command Justin suggested so you know exactly what each drive is \n> capable of:\n>\n> Can you reproduce the speed difference using dd ?\n> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>\n>\n> While common sense says SSD drive should outperform the mechanical \n> one, your test scenario (large volume sequential reads) evens out the \n> field a lot. Still I would have expected somewhat similar results in \n> the outcome, so yes, it is weird that the SAS drive doubles the SSD \n> performance. That is why I think there must be something else going on \n> during your tests on the SSD server. It can also be that the SSD isn't \n> working properly or you are running an suboptimal OS+server+controller \n> configuration for the drive.\n>\n\nI would second the analysis above - unless you see your read MB/s \nslammed up against 580-600MB/s contunuously then the interface speed is \nnot the issue. We have some similar servers that we replaced 12x SAS \nwith 1x SATA 6 GBit/s (Intel DC S3710) SSD...and the latter way \noutperforms the original 12 SAS drives.\n\nI suspect the problem is the particular SSD you have - I have \nbenchmarked the 256GB EVO variant and was underwhelmed by the \nperformance. These (budget) triple cell nand SSD seem to have highly \nvariable read and write performance (the write is all about when the SLC \nnand cache gets full)...read I'm not so sure of - but it could be crappy \nchipset/firmware combination. In short I'd recommend *not* using that \nparticular SSD for a database workload. I'd recommend one of the Intel \nDatacenter DC range (FWIW I'm not affiliated with Intel in any way...but \ntheir DC stuff works well).\n\nregards\n\nMark\n\n", "msg_date": "Tue, 16 Jan 2018 17:04:17 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "2018-01-15 20:04 GMT-08:00 Mark Kirkwood <[email protected]>:\n> On 16/01/18 13:18, Fernando Hevia wrote:\n>\n>>\n>>\n>>\n>> The 6 Gb/s interface is capable of a maximum throughput of around 600\n>> Mb/s. None of your drives can achieve that so I don't think you are limited\n>> to the interface speed. 
The 12 Gb/s interface speed advantage kicks in when\n>> there are several drives installed and it won't make a diference in a single\n>> drive or even a two drive system.\n>>\n>> But don't take my word for it. Test your drives throughput with the\n>> command Justin suggested so you know exactly what each drive is capable of:\n>>\n>> Can you reproduce the speed difference using dd ?\n>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>\n>>\n>> While common sense says SSD drive should outperform the mechanical one,\n>> your test scenario (large volume sequential reads) evens out the field a\n>> lot. Still I would have expected somewhat similar results in the outcome, so\n>> yes, it is weird that the SAS drive doubles the SSD performance. That is why\n>> I think there must be something else going on during your tests on the SSD\n>> server. It can also be that the SSD isn't working properly or you are\n>> running an suboptimal OS+server+controller configuration for the drive.\n>>\n>\n> I would second the analysis above - unless you see your read MB/s slammed up\n> against 580-600MB/s contunuously then the interface speed is not the issue.\n> We have some similar servers that we replaced 12x SAS with 1x SATA 6 GBit/s\n> (Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS\n> drives.\n>\n> I suspect the problem is the particular SSD you have - I have benchmarked\n> the 256GB EVO variant and was underwhelmed by the performance. These\n> (budget) triple cell nand SSD seem to have highly variable read and write\n> performance (the write is all about when the SLC nand cache gets\n> full)...read I'm not so sure of - but it could be crappy chipset/firmware\n> combination. In short I'd recommend *not* using that particular SSD for a\n> database workload. I'd recommend one of the Intel Datacenter DC range (FWIW\n> I'm not affiliated with Intel in any way...but their DC stuff works well).\n>\n> regards\n>\n> Mark\n\nHi Mark\nIn other forums one person said me that on samsung evo should be\npartition aligned to 3072 not default 2048 , to start on erase block\nbounduary . And fs block should be 8kb. I am studing this too. Some\nDBAs have reported in other situations that the SSDs when they are\nfull, are very slow. Mine is 85% full, so maybe that is also\ninfluencing. I'm disappointed with this SSD from Samsung, because in\ntheory, the read speed of an SSD should be more than 300 times faster\nthan an HDD and this is not happening.\n\nregards\nNeto\n\n", "msg_date": "Tue, 16 Jan 2018 02:14:09 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "Le 16/01/2018 à 11:14, Neto pr a écrit :\n> 2018-01-15 20:04 GMT-08:00 Mark Kirkwood <[email protected]>:\n>> On 16/01/18 13:18, Fernando Hevia wrote:\n>>\n>>>\n>>>\n>>> The 6 Gb/s interface is capable of a maximum throughput of around 600\n>>> Mb/s. None of your drives can achieve that so I don't think you are limited\n>>> to the interface speed. The 12 Gb/s interface speed advantage kicks in when\n>>> there are several drives installed and it won't make a diference in a single\n>>> drive or even a two drive system.\n>>>\n>>> But don't take my word for it. 
Test your drives throughput with the\n>>> command Justin suggested so you know exactly what each drive is capable of:\n>>>\n>>> Can you reproduce the speed difference using dd ?\n>>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>>> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>>\n>>>\n>>> While common sense says SSD drive should outperform the mechanical one,\n>>> your test scenario (large volume sequential reads) evens out the field a\n>>> lot. Still I would have expected somewhat similar results in the outcome, so\n>>> yes, it is weird that the SAS drive doubles the SSD performance. That is why\n>>> I think there must be something else going on during your tests on the SSD\n>>> server. It can also be that the SSD isn't working properly or you are\n>>> running an suboptimal OS+server+controller configuration for the drive.\n>>>\n>> I would second the analysis above - unless you see your read MB/s slammed up\n>> against 580-600MB/s contunuously then the interface speed is not the issue.\n>> We have some similar servers that we replaced 12x SAS with 1x SATA 6 GBit/s\n>> (Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS\n>> drives.\n>>\n>> I suspect the problem is the particular SSD you have - I have benchmarked\n>> the 256GB EVO variant and was underwhelmed by the performance. These\n>> (budget) triple cell nand SSD seem to have highly variable read and write\n>> performance (the write is all about when the SLC nand cache gets\n>> full)...read I'm not so sure of - but it could be crappy chipset/firmware\n>> combination. In short I'd recommend *not* using that particular SSD for a\n>> database workload. I'd recommend one of the Intel Datacenter DC range (FWIW\n>> I'm not affiliated with Intel in any way...but their DC stuff works well).\n>>\n>> regards\n>>\n>> Mark\n> Hi Mark\n> In other forums one person said me that on samsung evo should be\n> partition aligned to 3072 not default 2048 , to start on erase block\n> bounduary . And fs block should be 8kb. I am studing this too. Some\n> DBAs have reported in other situations that the SSDs when they are\n> full, are very slow. Mine is 85% full, so maybe that is also\n> influencing. I'm disappointed with this SSD from Samsung, because in\n> theory, the read speed of an SSD should be more than 300 times faster\n> than an HDD and this is not happening.\n>\n> regards\n> Neto\n>\nHi Neto,\n\nUnfortunately, Samsung 850 Evo is not a particularly fast SSD - \nespecially it's not really consistent in term of performance ( see \nhttps://www.anandtech.com/show/8747/samsung-ssd-850-evo-review/5 and \nhttps://www.anandtech.com/bench/product/1913 ). 
This is not a product \nfor professional usage, and you should not expect great performance from \nit - as reported by these benchmark, you can have a 34ms latency in very \nintensive usage:\nATSB - The Destroyer (99th Percentile Write Latency)99th Percentile \nLatency in Microseconds - Lower is Better *34923\n\n*Even average write latency of the Samsung 850 Evo is 3,3 ms in \nintensive workload, while the HPE 300 GB 12G SAS is reported to have an \naverage of 2.9 ms, and won't suffer from write amplification\n\nAs long has you stick with a light usage, this SSD will probably be more \nthan capable, but if you want to host a database, you should really look \nat PRO drives\n\nKind regards\nNicolas\n**\n\n\n\n\n\n\n Le 16/01/2018 à 11:14, Neto pr a écrit :\n\n2018-01-15 20:04 GMT-08:00 Mark Kirkwood <[email protected]>:\n\n\nOn 16/01/18 13:18, Fernando Hevia wrote:\n\n\n\n\n\n\nThe 6 Gb/s interface is capable of a maximum throughput of around 600\nMb/s. None of your drives can achieve that so I don't think you are limited\nto the interface speed. The 12 Gb/s interface speed advantage kicks in when\nthere are several drives installed and it won't make a diference in a single\ndrive or even a two drive system.\n\nBut don't take my word for it. Test your drives throughput with the\ncommand Justin suggested so you know exactly what each drive is capable of:\n\n Can you reproduce the speed difference using dd ?\n time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n\n\nWhile common sense says SSD drive should outperform the mechanical one,\nyour test scenario (large volume sequential reads) evens out the field a\nlot. Still I would have expected somewhat similar results in the outcome, so\nyes, it is weird that the SAS drive doubles the SSD performance. That is why\nI think there must be something else going on during your tests on the SSD\nserver. It can also be that the SSD isn't working properly or you are\nrunning an suboptimal OS+server+controller configuration for the drive.\n\n\n\n\nI would second the analysis above - unless you see your read MB/s slammed up\nagainst 580-600MB/s contunuously then the interface speed is not the issue.\nWe have some similar servers that we replaced 12x SAS with 1x SATA 6 GBit/s\n(Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS\ndrives.\n\nI suspect the problem is the particular SSD you have - I have benchmarked\nthe 256GB EVO variant and was underwhelmed by the performance. These\n(budget) triple cell nand SSD seem to have highly variable read and write\nperformance (the write is all about when the SLC nand cache gets\nfull)...read I'm not so sure of - but it could be crappy chipset/firmware\ncombination. In short I'd recommend *not* using that particular SSD for a\ndatabase workload. I'd recommend one of the Intel Datacenter DC range (FWIW\nI'm not affiliated with Intel in any way...but their DC stuff works well).\n\nregards\n\nMark\n\n\n\nHi Mark\nIn other forums one person said me that on samsung evo should be\npartition aligned to 3072 not default 2048 , to start on erase block\nbounduary . And fs block should be 8kb. I am studing this too. Some\nDBAs have reported in other situations that the SSDs when they are\nfull, are very slow. Mine is 85% full, so maybe that is also\ninfluencing. 
I'm disappointed with this SSD from Samsung, because in\ntheory, the read speed of an SSD should be more than 300 times faster\nthan an HDD and this is not happening.\n\nregards\nNeto\n\n\n\n Hi Neto,\n\n Unfortunately, Samsung 850 Evo is not a particularly fast SSD -\n especially it's not really consistent in term of performance ( see\n https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review/5 and\n https://www.anandtech.com/bench/product/1913 ). This is not a\n product for professional usage, and you should not expect great\n performance from it - as reported by these benchmark, you can have a\n 34ms latency in very intensive usage:\nATSB - The Destroyer (99th Percentile Write Latency)\n 99th Percentile Latency in Microseconds - Lower is Better 34923\n \n\nEven average write latency of the Samsung 850 Evo is 3,3 ms\n in intensive workload, while the HPE 300 GB 12G SAS is reported to\n have an average of 2.9 ms, and won't suffer from write amplification\n\n As long has you stick with a light usage, this SSD will probably be\n more than capable, but if you want to host a database, you should\n really look at PRO drives\n\n Kind regards\n Nicolas", "msg_date": "Tue, 16 Jan 2018 16:08:01 +0100", "msg_from": "Nicolas Charles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "\n\nOn 16/01/18 23:14, Neto pr wrote:\n> 2018-01-15 20:04 GMT-08:00 Mark Kirkwood <[email protected]>:\n>> On 16/01/18 13:18, Fernando Hevia wrote:\n>>\n>>>\n>>>\n>>> The 6 Gb/s interface is capable of a maximum throughput of around 600\n>>> Mb/s. None of your drives can achieve that so I don't think you are limited\n>>> to the interface speed. The 12 Gb/s interface speed advantage kicks in when\n>>> there are several drives installed and it won't make a diference in a single\n>>> drive or even a two drive system.\n>>>\n>>> But don't take my word for it. Test your drives throughput with the\n>>> command Justin suggested so you know exactly what each drive is capable of:\n>>>\n>>> Can you reproduce the speed difference using dd ?\n>>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>>> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>>\n>>>\n>>> While common sense says SSD drive should outperform the mechanical one,\n>>> your test scenario (large volume sequential reads) evens out the field a\n>>> lot. Still I would have expected somewhat similar results in the outcome, so\n>>> yes, it is weird that the SAS drive doubles the SSD performance. That is why\n>>> I think there must be something else going on during your tests on the SSD\n>>> server. It can also be that the SSD isn't working properly or you are\n>>> running an suboptimal OS+server+controller configuration for the drive.\n>>>\n>> I would second the analysis above - unless you see your read MB/s slammed up\n>> against 580-600MB/s contunuously then the interface speed is not the issue.\n>> We have some similar servers that we replaced 12x SAS with 1x SATA 6 GBit/s\n>> (Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS\n>> drives.\n>>\n>> I suspect the problem is the particular SSD you have - I have benchmarked\n>> the 256GB EVO variant and was underwhelmed by the performance. These\n>> (budget) triple cell nand SSD seem to have highly variable read and write\n>> performance (the write is all about when the SLC nand cache gets\n>> full)...read I'm not so sure of - but it could be crappy chipset/firmware\n>> combination. 
In short I'd recommend *not* using that particular SSD for a\n>> database workload. I'd recommend one of the Intel Datacenter DC range (FWIW\n>> I'm not affiliated with Intel in any way...but their DC stuff works well).\n>>\n>> regards\n>>\n>> Mark\n> Hi Mark\n> In other forums one person said me that on samsung evo should be\n> partition aligned to 3072 not default 2048 , to start on erase block\n> bounduary . And fs block should be 8kb. I am studing this too. Some\n> DBAs have reported in other situations that the SSDs when they are\n> full, are very slow. Mine is 85% full, so maybe that is also\n> influencing. I'm disappointed with this SSD from Samsung, because in\n> theory, the read speed of an SSD should be more than 300 times faster\n> than an HDD and this is not happening.\n>\n>\n\nInteresting - I didn't try changing the alignment. However I could get \nthe rated write and read performance on simple benchmarks (provided it \nwas in a PCIe V3 slot)...so figured it was ok with the default aligning. \nHowever once more complex workloads were attempted (databases and \ndistributed object store) the performance was disappointing.\n\nIf the SSD is 85% full that will not help either (also look at the \nexpected lifetime of these EVO's - not that great for a server)!\n\nOne thing worth trying is messing about with the IO scheduler: if you \nare using noop, then try deadline (like I said crappy firmware)...\n\nRealistically, I'd recommend getting an enterprise/DC SSD (put the EVO \nin your workstation, it will be quite nice there)!\n\nCheers\nMark\n\n", "msg_date": "Wed, 17 Jan 2018 10:24:41 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HDD vs SSD without explanation" }, { "msg_contents": "Thanks all, but I still have not figured it out.\nThis is really strange because the tests were done on the same machine\n(I use HP ML110 Proliant 8gb RAM - Xeon 2.8 ghz processor (4\ncores), and POSTGRESQL 10.1.\n- Only the mentioned query running at the time of the test.\n- I repeated the query 7 times and did not change the results.\n- Before running each batch of 7 executions, I discarded the Operating\nSystem cache and restarted DBMS like this:\n(echo 3> / proc / sys / vm / drop_caches;\n\ndiscs:\n- 2 units of Samsung Evo SSD 500 GB (mounted on ZERO RAID)\n- 2 SATA 7500 Krpm HDD units - 1TB (mounted on ZERO RAID)\n\n- The Operating System and the Postgresql DBMS are installed on the SSD disk.\n\nBest Regards\n[ ]`s Neto\n\n\n2018-01-16 13:24 GMT-08:00 Mark Kirkwood <[email protected]>:\n>\n>\n> On 16/01/18 23:14, Neto pr wrote:\n>>\n>> 2018-01-15 20:04 GMT-08:00 Mark Kirkwood <[email protected]>:\n>>>\n>>> On 16/01/18 13:18, Fernando Hevia wrote:\n>>>\n>>>>\n>>>>\n>>>> The 6 Gb/s interface is capable of a maximum throughput of around 600\n>>>> Mb/s. None of your drives can achieve that so I don't think you are\n>>>> limited\n>>>> to the interface speed. The 12 Gb/s interface speed advantage kicks in\n>>>> when\n>>>> there are several drives installed and it won't make a diference in a\n>>>> single\n>>>> drive or even a two drive system.\n>>>>\n>>>> But don't take my word for it. 
Test your drives throughput with the\n>>>> command Justin suggested so you know exactly what each drive is capable\n>>>> of:\n>>>>\n>>>> Can you reproduce the speed difference using dd ?\n>>>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K\n>>>> skip=$((128*$RANDOM/32)) # set bs to optimal_io_size\n>>>>\n>>>>\n>>>> While common sense says SSD drive should outperform the mechanical one,\n>>>> your test scenario (large volume sequential reads) evens out the field a\n>>>> lot. Still I would have expected somewhat similar results in the\n>>>> outcome, so\n>>>> yes, it is weird that the SAS drive doubles the SSD performance. That is\n>>>> why\n>>>> I think there must be something else going on during your tests on the\n>>>> SSD\n>>>> server. It can also be that the SSD isn't working properly or you are\n>>>> running an suboptimal OS+server+controller configuration for the drive.\n>>>>\n>>> I would second the analysis above - unless you see your read MB/s slammed\n>>> up\n>>> against 580-600MB/s contunuously then the interface speed is not the\n>>> issue.\n>>> We have some similar servers that we replaced 12x SAS with 1x SATA 6\n>>> GBit/s\n>>> (Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS\n>>> drives.\n>>>\n>>> I suspect the problem is the particular SSD you have - I have benchmarked\n>>> the 256GB EVO variant and was underwhelmed by the performance. These\n>>> (budget) triple cell nand SSD seem to have highly variable read and write\n>>> performance (the write is all about when the SLC nand cache gets\n>>> full)...read I'm not so sure of - but it could be crappy chipset/firmware\n>>> combination. In short I'd recommend *not* using that particular SSD for a\n>>> database workload. I'd recommend one of the Intel Datacenter DC range\n>>> (FWIW\n>>> I'm not affiliated with Intel in any way...but their DC stuff works\n>>> well).\n>>>\n>>> regards\n>>>\n>>> Mark\n>>\n>> Hi Mark\n>> In other forums one person said me that on samsung evo should be\n>> partition aligned to 3072 not default 2048 , to start on erase block\n>> bounduary . And fs block should be 8kb. I am studing this too. Some\n>> DBAs have reported in other situations that the SSDs when they are\n>> full, are very slow. Mine is 85% full, so maybe that is also\n>> influencing. I'm disappointed with this SSD from Samsung, because in\n>> theory, the read speed of an SSD should be more than 300 times faster\n>> than an HDD and this is not happening.\n>>\n>>\n>\n> Interesting - I didn't try changing the alignment. However I could get the\n> rated write and read performance on simple benchmarks (provided it was in a\n> PCIe V3 slot)...so figured it was ok with the default aligning. However once\n> more complex workloads were attempted (databases and distributed object\n> store) the performance was disappointing.\n>\n> If the SSD is 85% full that will not help either (also look at the expected\n> lifetime of these EVO's - not that great for a server)!\n>\n> One thing worth trying is messing about with the IO scheduler: if you are\n> using noop, then try deadline (like I said crappy firmware)...\n>\n> Realistically, I'd recommend getting an enterprise/DC SSD (put the EVO in\n> your workstation, it will be quite nice there)!\n>\n> Cheers\n> Mark\n\n", "msg_date": "Tue, 17 Jul 2018 05:47:07 -0700", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HDD vs SSD without explanation" } ]
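
A minimal shell sketch pulling together the drive checks suggested in this thread: the readahead and queue parameters Justin asked for, a page-cache drop, and the same 32 GiB sequential dd read. The device name sdb is only a placeholder for the drive under test; run it as root with the DBMS idle, since the numbers above were taken while a heavy query was running and that skews the comparison.

#!/bin/bash
# Sketch only - DEV is a placeholder, point it at the drive under test.
DEV=sdb

# Readahead and queue parameters mentioned in the thread
blockdev --getra /dev/$DEV
for f in rotational scheduler read_ahead_kb max_sectors_kb \
         minimum_io_size optimal_io_size logical_block_size physical_block_size; do
    echo "$f: $(cat /sys/block/$DEV/queue/$f)"
done

# Drop the page cache so every run really reads from the device
sync
echo 3 > /proc/sys/vm/drop_caches

# 32 GiB sequential read from a random offset, 1 MiB blocks, as suggested above
time dd if=/dev/$DEV of=/dev/null bs=1M count=32K skip=$((128*$RANDOM/32))
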
[ { "msg_contents": "Hi,\n\nI have installed pgaudit, and configured as:\npgaudit.log = 'ddl,role'\npgaudit.log_level = 'log' (default)\n\nVersions: postgresql96 (9.6.6) , pgaudit96 (1.0.4), postgis 2.3.2, Rhel 7.4\n\nWhen I then install postgis extension in a database it writes a huge amount of logs which slow down the server a lot.\nNot only table creation and functions are logged, even all inserts in spatial_ref_sys are written to the audit-log.\n\nLOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n......\nINSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n....\n\nThis behaviour make pgaudit useless in our environment due to the overhead in log-file write.\nI have tried different combinations of pgaudit.log settings (role,-functions), (role), and also changed pgaudit.log_level to warning, but it was not better.\n\nDoes anybody have a useful pgaudit settings which not overflow the log files, even when installing postgis or other extensions?\n\nAlso noticed that setting a session log to none (set pgaudit.log='none';) overrides parameter from postgresql.conf, but does not get logged, and then you can do whatever you want without any audit.\nI supposed this changing of audit session log parameter should be logged to file?\n\n\nRegards,\nPeter\n\n\n\n\n\n\n\nHi,\n\nI have installed pgaudit, and configured as:\npgaudit.log = 'ddl,role'\npgaudit.log_level = 'log'  (default)\n\nVersions:  postgresql96 (9.6.6) , pgaudit96 (1.0.4), postgis 2.3.2,  Rhel 7.4\n\nWhen I then install  postgis extension in a database it writes a huge amount of logs which slow down the server a lot.\nNot only table creation and functions are logged,  even  all inserts in  spatial_ref_sys are written to the audit-log.\n\nLOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n......\nINSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n....\n\nThis behaviour make pgaudit useless in our environment due to the overhead in log-file write.\nI have tried different combinations of  pgaudit.log  settings (role,-functions), (role),  and also changed pgaudit.log_level to  warning, but it was not better.\n\nDoes anybody have a useful  pgaudit settings which not overflow the log files, even when installing postgis or other extensions?\n\nAlso noticed that setting a session log to none (set pgaudit.log='none';)  overrides parameter from postgresql.conf,  but does not get logged, and then you can do whatever you want without any audit.\nI supposed this changing of  audit session log parameter should be logged to file?\n\n\nRegards,\nPeter", "msg_date": "Thu, 18 Jan 2018 12:12:26 +0000", "msg_from": "Svensson Peter <[email protected]>", "msg_from_op": true, "msg_subject": "pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> When I then install  postgis extension in a database it writes a huge\n> amount of logs which slow down the server a lot.\n> Not only table creation and functions are logged,  even  all inserts in \n> spatial_ref_sys are written to the audit-log.\n> \n> LOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> ......\n> INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> ....\n> \n> This behaviour make pgaudit useless in our environment due to the\n> overhead in log-file write.\n\nHow often do you intend to install PostGIS? 
Disable pgaudit, install\nPostGIS, enable pgaudit?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development", "msg_date": "Thu, 18 Jan 2018 08:54:21 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "\nA test to create postgis extension made 4 rsyslog processes run for several minutes with high cpu util,\nand when you have only 8 cpu:s this take lot of resources. \nThe create command also have to wait until all the log are written so there are great impact.\nLog file got 16 GB big only for this.\n\nWe have several databases in the same server, some of them with postgis.\nThose databases are maintained bye different people, and tell them to disable pgaudit\nevery time they are doing something that can cause lot log will create a bad behaviour,\nespecially when we cannot see in the logs that they have disabled pgaudit.\n\nI think postgis extension is not the only extention that creates both tables, functions and insert data,\nbut if there are a way to configure pgaudit so you get rid of the inserts maybe its a way to handle it.\n\n/Peter\n________________________________________\nFrån: Joe Conway [[email protected]]\nSkickat: den 18 januari 2018 17:54\nTill: Svensson Peter; [email protected]\nÄmne: Re: pgaudit and create postgis extension logs a lot inserts\n\nOn 01/18/2018 04:12 AM, Svensson Peter wrote:\n> When I then install postgis extension in a database it writes a huge\n> amount of logs which slow down the server a lot.\n> Not only table creation and functions are logged, even all inserts in\n> spatial_ref_sys are written to the audit-log.\n>\n> LOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> ......\n> INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> ....\n>\n> This behaviour make pgaudit useless in our environment due to the\n> overhead in log-file write.\n\nHow often do you intend to install PostGIS? Disable pgaudit, install\nPostGIS, enable pgaudit?\n\nJoe\n\n--\nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Fri, 19 Jan 2018 11:03:42 +0000", "msg_from": "Svensson Peter <[email protected]>", "msg_from_op": true, "msg_subject": "SV: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]> wrote:\n\n> On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> > When I then install postgis extension in a database it writes a huge\n> > amount of logs which slow down the server a lot.\n> > Not only table creation and functions are logged, even all inserts in\n> > spatial_ref_sys are written to the audit-log.\n> >\n> > LOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> > ......\n> > INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> > ....\n> >\n> > This behaviour make pgaudit useless in our environment due to the\n> > overhead in log-file write.\n>\n> How often do you intend to install PostGIS? Disable pgaudit, install\n> PostGIS, enable pgaudit?\n>\n\nWould it make sense for pgaudit to, at least by option, not include DDL\nstatements that are generated as \"sub-parts\" of a CREATE EXTENSION? 
It\nshould still log the CREATE EXTENSION of course, but not necessarily all\nthe contents of it, since that's actually defined in the extension itself\nalready?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]> wrote:On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> When I then install  postgis extension in a database it writes a huge\n> amount of logs which slow down the server a lot.\n> Not only table creation and functions are logged,  even  all inserts in \n> spatial_ref_sys are written to the audit-log.\n>\n> LOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> ......\n> INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> ....\n>\n> This behaviour make pgaudit useless in our environment due to the\n> overhead in log-file write.\n\nHow often do you intend to install PostGIS? Disable pgaudit, install\nPostGIS, enable pgaudit?Would it make sense for pgaudit to, at least by option, not include DDL statements that are generated as \"sub-parts\" of a CREATE EXTENSION? It should still log the CREATE EXTENSION of course, but not necessarily all the contents of it, since that's actually defined in the extension itself already? --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 19 Jan 2018 13:05:43 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "Please remove me from this list. Thanks.\n\nKaren Stone| Technical Services| Eldorado |a Division of MphasiS \n5353 North 16th Street, Suite 400, Phoenix, Arizona 85016-3228 \nTel (928) 892 5735 | www.eldoinc.com | www.mphasis.com |[email protected] \n\n\n-----Original Message-----\nFrom: Svensson Peter [mailto:[email protected]] \nSent: Friday, January 19, 2018 4:04 AM\nTo: Joe Conway <[email protected]>; [email protected]\nSubject: SV: pgaudit and create postgis extension logs a lot inserts\n\n\nA test to create postgis extension made 4 rsyslog processes run for several minutes with high cpu util, and when you have only 8 cpu:s this take lot of resources. 
\nThe create command also have to wait until all the log are written so there are great impact.\nLog file got 16 GB big only for this.\n\nWe have several databases in the same server, some of them with postgis.\nThose databases are maintained bye different people, and tell them to disable pgaudit every time they are doing something that can cause lot log will create a bad behaviour, especially when we cannot see in the logs that they have disabled pgaudit.\n\nI think postgis extension is not the only extention that creates both tables, functions and insert data, but if there are a way to configure pgaudit so you get rid of the inserts maybe its a way to handle it.\n\n/Peter\n________________________________________\nFrån: Joe Conway [[email protected]]\nSkickat: den 18 januari 2018 17:54\nTill: Svensson Peter; [email protected]\nÄmne: Re: pgaudit and create postgis extension logs a lot inserts\n\nOn 01/18/2018 04:12 AM, Svensson Peter wrote:\n> When I then install postgis extension in a database it writes a huge \n> amount of logs which slow down the server a lot.\n> Not only table creation and functions are logged, even all inserts \n> in spatial_ref_sys are written to the audit-log.\n>\n> LOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> ......\n> INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> ....\n>\n> This behaviour make pgaudit useless in our environment due to the \n> overhead in log-file write.\n\nHow often do you intend to install PostGIS? Disable pgaudit, install PostGIS, enable pgaudit?\n\nJoe\n\n--\nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises Consulting, Training, & Open Source Development\n\n\n\n", "msg_date": "Fri, 19 Jan 2018 12:24:49 +0000", "msg_from": "Karen Stone <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "On 1/19/18 6:05 AM, Magnus Hagander wrote:\n> \n> \n> On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> > When I then install  postgis extension in a database it writes a huge\n> > amount of logs which slow down the server a lot.\n> > Not only table creation and functions are logged,  even  all inserts in \n> > spatial_ref_sys are written to the audit-log.\n> >\n> > LOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> > ......\n> > INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> > ....\n> >\n> > This behaviour make pgaudit useless in our environment due to the\n> > overhead in log-file write.\n> \n> How often do you intend to install PostGIS? Disable pgaudit, install\n> PostGIS, enable pgaudit?\n> \n> \n> Would it make sense for pgaudit to, at least by option, not include DDL\n> statements that are generated as \"sub-parts\" of a CREATE EXTENSION? It\n> should still log the CREATE EXTENSION of course, but not necessarily all\n> the contents of it, since that's actually defined in the extension\n> itself already? 
\nThat's doable, but I think it could be abused if it was always on and\ninstalling extensions is generally not a daily activity.\n\nIt seems in this case the best action is to disable pgaudit before\ninstalling postgis or install postgis first.\n\nRegards,\n-- \n-David\[email protected]\n\n", "msg_date": "Fri, 19 Jan 2018 08:41:42 -0500", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "Hi Peter,\n\nOn 1/18/18 7:12 AM, Svensson Peter wrote:\n> \n> Also noticed that setting a session log to none (set\n> pgaudit.log='none';)� overrides parameter from postgresql.conf,� but\n> does not get logged, and then you can do whatever you want without any\n> audit.\n> I supposed this changing of� audit session log parameter should be\n> logged to file?\n\npgaudit is not intended to audit the superuser and only a superuser can\nset pgaudit.log.\n\nHowever, you can limit superuser access with the setuser extension:\nhttps://github.com/pgaudit/set_user\n\nRegards,\n-- \n-David\[email protected]\n\n", "msg_date": "Fri, 19 Jan 2018 08:51:20 -0500", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "On Fri, Jan 19, 2018 at 3:41 PM, David Steele <[email protected]> wrote:\n\n> On 1/19/18 6:05 AM, Magnus Hagander wrote:\n> >\n> >\n> > On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> > > When I then install postgis extension in a database it writes a\n> huge\n> > > amount of logs which slow down the server a lot.\n> > > Not only table creation and functions are logged, even all\n> inserts in\n> > > spatial_ref_sys are written to the audit-log.\n> > >\n> > > LOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> > > ......\n> > > INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> > > ....\n> > >\n> > > This behaviour make pgaudit useless in our environment due to the\n> > > overhead in log-file write.\n> >\n> > How often do you intend to install PostGIS? Disable pgaudit, install\n> > PostGIS, enable pgaudit?\n> >\n> >\n> > Would it make sense for pgaudit to, at least by option, not include DDL\n> > statements that are generated as \"sub-parts\" of a CREATE EXTENSION? It\n> > should still log the CREATE EXTENSION of course, but not necessarily all\n> > the contents of it, since that's actually defined in the extension\n> > itself already?\n> That's doable, but I think it could be abused if it was always on and\n> installing extensions is generally not a daily activity.\n>\n\nProbably true, yeah. 
It can certainly be part of a daily activity in say CI\nenvironments etc, but those are not likely environments where pg_audit\nmakes that much sense in the first place.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Jan 19, 2018 at 3:41 PM, David Steele <[email protected]> wrote:On 1/19/18 6:05 AM, Magnus Hagander wrote:\n>\n>\n> On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>     On 01/18/2018 04:12 AM, Svensson Peter wrote:\n>     > When I then install  postgis extension in a database it writes a huge\n>     > amount of logs which slow down the server a lot.\n>     > Not only table creation and functions are logged,  even  all inserts in \n>     > spatial_ref_sys are written to the audit-log.\n>     >\n>     > LOG:  AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n>     > ......\n>     > INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n>     > ....\n>     >\n>     > This behaviour make pgaudit useless in our environment due to the\n>     > overhead in log-file write.\n>\n>     How often do you intend to install PostGIS? Disable pgaudit, install\n>     PostGIS, enable pgaudit?\n>\n>\n> Would it make sense for pgaudit to, at least by option, not include DDL\n> statements that are generated as \"sub-parts\" of a CREATE EXTENSION? It\n> should still log the CREATE EXTENSION of course, but not necessarily all\n> the contents of it, since that's actually defined in the extension\n> itself already?\nThat's doable, but I think it could be abused if it was always on and\ninstalling extensions is generally not a daily activity.Probably true, yeah. It can certainly be part of a daily activity in say CI environments etc, but those are not likely environments where pg_audit makes that much sense in the first place.--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Sat, 20 Jan 2018 15:05:32 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "OK, thanks a lot. \n\nRegards,\nPeter\n________________________________________\nFrån: David Steele [[email protected]]\nSkickat: den 19 januari 2018 14:41\nTill: Magnus Hagander; Joe Conway\nKopia: Svensson Peter; [email protected]\nÄmne: Re: pgaudit and create postgis extension logs a lot inserts\n\nOn 1/19/18 6:05 AM, Magnus Hagander wrote:\n>\n>\n> On Thu, Jan 18, 2018 at 6:54 PM, Joe Conway <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> On 01/18/2018 04:12 AM, Svensson Peter wrote:\n> > When I then install postgis extension in a database it writes a huge\n> > amount of logs which slow down the server a lot.\n> > Not only table creation and functions are logged, even all inserts in\n> > spatial_ref_sys are written to the audit-log.\n> >\n> > LOG: AUDIT: SESSION,1,1,DDL,CREATE FUNCTION,,,\"\n> > ......\n> > INSERT INTO \"\"spatial_ref_sys\"\" (\"\"srid\"\",\"\"auth_name\"\n> > ....\n> >\n> > This behaviour make pgaudit useless in our environment due to the\n> > overhead in log-file write.\n>\n> How often do you intend to install PostGIS? Disable pgaudit, install\n> PostGIS, enable pgaudit?\n>\n>\n> Would it make sense for pgaudit to, at least by option, not include DDL\n> statements that are generated as \"sub-parts\" of a CREATE EXTENSION? 
It\n> should still log the CREATE EXTENSION of course, but not necessarily all\n> the contents of it, since that's actually defined in the extension\n> itself already?\nThat's doable, but I think it could be abused if it was always on and\ninstalling extensions is generally not a daily activity.\n\nIt seems in this case the best action is to disable pgaudit before\ninstalling postgis or install postgis first.\n\nRegards,\n--\n-David\[email protected]\n\n", "msg_date": "Mon, 22 Jan 2018 08:20:55 +0000", "msg_from": "Svensson Peter <[email protected]>", "msg_from_op": true, "msg_subject": "SV: pgaudit and create postgis extension logs a lot inserts" }, { "msg_contents": "On Fri, Jan 19, 2018 at 11:03:42AM +0000, Svensson Peter wrote:\n> \n> A test to create postgis extension made 4 rsyslog processes run for several minutes with high cpu util,\n> and when you have only 8 cpu:s this take lot of resources. \n> The create command also have to wait until all the log are written so there are great impact.\n> Log file got 16 GB big only for this.\n\nUh, that seems odd. Is rsyslog fsync'ing each write? You should check\nthe docs on that. Here is an example report:\n\n\thttp://kb.monitorware.com/simple-question-what-does-the-dash-t10237.html\n\nI don't see the dash behavior mentioned in my Debian Jessie rsyslogd\nmanual page though.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n", "msg_date": "Tue, 30 Jan 2018 10:26:29 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SV: pgaudit and create postgis extension logs a lot inserts" } ]
[ { "msg_contents": "Hi Team,\n\nwe are seeing idle sessions consuming memory in our database, could you\nplease help me how much memory an idle session can use max and how can we\nfind how much work_mem consuming for single process.\n\nwe are getting out of memory error,for this i'm asking above questions.\n\n\nRegards,\n\nRambabu Vakada.\n\nHi Team,we are seeing idle sessions consuming memory in our database, could you please help me how much memory an idle session can use max and how can we find how much work_mem consuming for single process.we are getting out of memory error,for this i'm asking above questions.Regards,Rambabu Vakada.", "msg_date": "Thu, 18 Jan 2018 20:55:31 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "need help on memory allocation" }, { "msg_contents": "Rambabu V wrote:\n> we are seeing idle sessions consuming memory in our database, could you please help me\n> how much memory an idle session can use max and how can we find how much work_mem\n> consuming for single process.\n> \n> we are getting out of memory error,for this i'm asking above questions.\n\nAre you sure that you see the private memory of the process and not the\nshared memory common to all processes?\n\nAn \"idle\" connection should not hav a lot of private memory.\n\nIf you get OOM on the server, the log entry with the memory context dump\nmight be useful information.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 19 Jan 2018 11:07:35 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "Hi Laurenz,\n\nAny Update, this is continuously hitting our production database.\n\nRegards,\nRambabu Vakada,\nPostgreSQL DBA.\n\n\nOn Tue, Jan 23, 2018 at 6:12 PM, Rambabu V <[email protected]> wrote:\n\n> Hi Laurenz,\n>\n> OOM error not recording in server level, it is only recording in our\n> database logs.\n>\n> below is the error message:\n>\n> *cat PostgreSQL-2018-01-23_060000.csv|grep FATAL*\n> 2018-01-23 06:08:01.684 UTC,\"postgres\",\"rpx\",68034,\"[\n> local]\",5a66d141.109c2,2,\"authentication\",2018-01-23 06:08:01\n> UTC,174/89066,0,FATAL,28000,\"Peer authentication failed for user\n> \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local all all peer\n> map=supers\"\"\",,,,,,,,\"\"\n> 2018-01-23 06:25:52.286 UTC,\"postgres\",\"rpx\",22342,\"[\n> local]\",5a66d570.5746,2,\"authentication\",2018-01-23 06:25:52\n> UTC,173/107122,0,FATAL,28000,\"Peer authentication failed for user\n> \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local all all peer\n> map=supers\"\"\",,,,,,,,\"\"\n> 2018-01-23 06:37:10.916 UTC,\"portal_etl_app\",\"rpx\",31226,\"\n> 10.50.13.151:41052\",5a66d816.79fa,1,\"authentication\",2018-01-23 06:37:10\n> UTC,,0,FATAL,53200,\"out of memory\",\"Failed on request of size\n> 78336.\",,,,,,,,\"\"\n>\n> *below log from /var/log messages:*\n>\n> root@prp:~# cat /var/log/syslog*|grep 'out of memory'\n> root@prp:~# cat /var/log/syslog*|grep error\n> root@prp:~# cat /var/log/syslog*|grep warning\n> root@prp:~#\n>\n> *$ free -mh*\n> total used free shared buffers cached\n> Mem: 58G 58G 358M 16G 3.6M 41G\n> -/+ buffers/cache: 16G 42G\n> Swap: 9.5G 687M 8.9G\n>\n> *postgresql.conf parametes:*\n> *=====================*\n> work_mem = 256MB # min 64kB\n> maintenance_work_mem = 256MB # min 1MB\n> shared_buffers = 16GB # min 128kB\n> temp_buffers = 16MB # min 800kB\n> wal_buffers = 64MB\n> effective_cache_size = 64GB\n> 
max_connections = 600\n>\n> *cat /etc/sysctl.conf|grep kernel*\n> #kernel.domainname = example.com\n> #kernel.printk = 3 4 1 3\n> kernel.shmmax = 38654705664\n> kernel.shmall = 8388608\n>\n> *ps -ef|grep postgres|grep idle|wc -l*\n> 171\n>\n> *ps -ef|grep postgres|wc -l*\n> 206\n>\n> *ps -ef|wc -l*\n> 589\n>\n> *Databse Size: 1.5 TB*\n>\n> *below is the htop output:*\n> *-----------------------------------*\n> Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||17045/60382MB]\n> Tasks: 250, 7 thr; 8 running\n> Swp[||||||\n> 686/9765MB] Load average: 8.63 9.34 8.62\n>\n> Uptime: 52 days, 07:07:07\n>\n> PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command\n> 109063 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 39:55.61 postgres:\n> test sss 10.20.2.228(55174) idle\n> 24910 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 27:45.35 postgres:\n> testl sss 10.20.2.228(55236) idle\n> 115539 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 28:22.89 postgres:\n> test sss 10.20.2.228(55184) idle\n> 9816 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 40:19.57 postgres:\n> test sss 10.20.2.228(55216) idle\n>\n>\n>\n> Please help us on this, how can we over come this OOM issue.\n>\n>\n>\n> Regards,\n>\n> Rambabu Vakada,\n> PostgreSQL DBA,\n> +91 9849137684.\n>\n>\n>\n> On Fri, Jan 19, 2018 at 3:37 PM, Laurenz Albe <[email protected]>\n> wrote:\n>\n>> Rambabu V wrote:\n>> > we are seeing idle sessions consuming memory in our database, could you\n>> please help me\n>> > how much memory an idle session can use max and how can we find how\n>> much work_mem\n>> > consuming for single process.\n>> >\n>> > we are getting out of memory error,for this i'm asking above questions.\n>>\n>> Are you sure that you see the private memory of the process and not the\n>> shared memory common to all processes?\n>>\n>> An \"idle\" connection should not hav a lot of private memory.\n>>\n>> If you get OOM on the server, the log entry with the memory context dump\n>> might be useful information.\n>>\n>> Yours,\n>> Laurenz Albe\n>>\n>>\n>\n\nHi Laurenz,Any Update, this is continuously hitting our production database.Regards,Rambabu Vakada,PostgreSQL DBA.On Tue, Jan 23, 2018 at 6:12 PM, Rambabu V <[email protected]> wrote:Hi Laurenz,OOM error not recording in server level, it is only recording in our database logs.below is the error message:cat PostgreSQL-2018-01-23_060000.csv|grep FATAL2018-01-23 06:08:01.684 UTC,\"postgres\",\"rpx\",68034,\"[local]\",5a66d141.109c2,2,\"authentication\",2018-01-23 06:08:01 UTC,174/89066,0,FATAL,28000,\"Peer authentication failed for user \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local all all peer map=supers\"\"\",,,,,,,,\"\"2018-01-23 06:25:52.286 UTC,\"postgres\",\"rpx\",22342,\"[local]\",5a66d570.5746,2,\"authentication\",2018-01-23 06:25:52 UTC,173/107122,0,FATAL,28000,\"Peer authentication failed for user \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local all all peer map=supers\"\"\",,,,,,,,\"\"2018-01-23 06:37:10.916 UTC,\"portal_etl_app\",\"rpx\",31226,\"10.50.13.151:41052\",5a66d816.79fa,1,\"authentication\",2018-01-23 06:37:10 UTC,,0,FATAL,53200,\"out of memory\",\"Failed on request of size 78336.\",,,,,,,,\"\"below log from /var/log messages:root@prp:~# cat /var/log/syslog*|grep 'out of memory'root@prp:~# cat /var/log/syslog*|grep errorroot@prp:~# cat /var/log/syslog*|grep warningroot@prp:~#$ free -mh             total       used       free     shared    buffers     cachedMem:           58G        58G       358M        16G       3.6M        
41G-/+ buffers/cache:        16G        42GSwap:         9.5G       687M       8.9Gpostgresql.conf parametes:=====================work_mem = 256MB # min 64kBmaintenance_work_mem = 256MB # min 1MBshared_buffers = 16GB # min 128kBtemp_buffers = 16MB # min 800kBwal_buffers = 64MBeffective_cache_size = 64GBmax_connections = 600cat /etc/sysctl.conf|grep kernel#kernel.domainname = example.com#kernel.printk = 3 4 1 3kernel.shmmax = 38654705664kernel.shmall = 8388608ps -ef|grep postgres|grep idle|wc -l171ps -ef|grep postgres|wc -l206ps -ef|wc -l589Databse Size: 1.5 TBbelow is the htop output:-----------------------------------  Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||17045/60382MB]     Tasks: 250, 7 thr; 8 running  Swp[||||||                                                                686/9765MB]     Load average: 8.63 9.34 8.62                                                                                            Uptime: 52 days, 07:07:07    PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command 109063 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 39:55.61 postgres: test sss 10.20.2.228(55174) idle  24910 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 27:45.35 postgres: testl sss 10.20.2.228(55236) idle 115539 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 28:22.89 postgres: test sss 10.20.2.228(55184) idle   9816 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 40:19.57 postgres: test sss   10.20.2.228(55216) idlePlease help us on this, how can we over come this OOM issue.Regards,Rambabu Vakada,PostgreSQL DBA,+91 9849137684.On Fri, Jan 19, 2018 at 3:37 PM, Laurenz Albe <[email protected]> wrote:Rambabu V wrote:\n> we are seeing idle sessions consuming memory in our database, could you please help me\n> how much memory an idle session can use max and how can we find how much work_mem\n> consuming for single process.\n>\n> we are getting out of memory error,for this i'm asking above questions.\n\nAre you sure that you see the private memory of the process and not the\nshared memory common to all processes?\n\nAn \"idle\" connection should not hav a lot of private memory.\n\nIf you get OOM on the server, the log entry with the memory context dump\nmight be useful information.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 23 Jan 2018 19:29:31 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "\n\nAm 23.01.2018 um 14:59 schrieb Rambabu V:\n>              total       used       free     shared buffers     cached\n> Mem:           58G        58G       358M        16G  3.6M        41G\n> -/+ buffers/cache:        16G        42G\n> Swap:         9.5G       687M       8.9G\n>\n> *postgresql.conf parametes:*\n> *=====================*\n> work_mem = 256MB# min 64kB\n> maintenance_work_mem = 256MB# min 1MB\n> shared_buffers = 16GB# min 128kB\n> temp_buffers = 16MB# min 800kB\n> wal_buffers = 64MB\n> effective_cache_size = 64GB\n> max_connections = 600\n>\n\nhow many active concurrent connections do you have? 
With work_mem = \n256MB and 600 active connections and only 1 allocation of work_mem per \nconnection you will need more than 150GB of RAM.\n\n\nWith other words: you should lowering work_mem and/or max_connections.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Tue, 23 Jan 2018 15:08:07 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "On Tue, 2018-01-23 at 19:29 +0530, Rambabu V wrote:\n> Any Update, this is continuously hitting our production database.\n> \n> > OOM error not recording in server level, it is only recording in our database logs.\n> > \n> > below is the error message:\n> > \n> > cat PostgreSQL-2018-01-23_060000.csv|grep FATAL\n> > 2018-01-23 06:08:01.684 UTC,\"postgres\",\"rpx\",68034,\"[local]\",5a66d141.109c2,2,\"authentication\",2018-01-23 06:08:01 UTC,174/89066,0,FATAL,28000,\"Peer authentication failed for user \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local\tall\t\tall\t\t\tpeer map=supers\"\"\",,,,,,,,\"\"\n> > 2018-01-23 06:25:52.286 UTC,\"postgres\",\"rpx\",22342,\"[local]\",5a66d570.5746,2,\"authentication\",2018-01-23 06:25:52 UTC,173/107122,0,FATAL,28000,\"Peer authentication failed for user \"\"postgres\"\"\",\"Connection matched pg_hba.conf line 5: \"\"local\tall\t\tall\t\t\tpeer map=supers\"\"\",,,,,,,,\"\"\n> > 2018-01-23 06:37:10.916 UTC,\"portal_etl_app\",\"rpx\",31226,\"10.50.13.151:41052\",5a66d816.79fa,1,\"authentication\",2018-01-23 06:37:10 UTC,,0,FATAL,53200,\"out of memory\",\"Failed on request of size 78336.\",,,,,,,,\"\"\n> > \n> > $ free -mh\n> > total used free shared buffers cached\n> > Mem: 58G 58G 358M 16G 3.6M 41G\n> > -/+ buffers/cache: 16G 42G\n> > Swap: 9.5G 687M 8.9G\n> > \n> > postgresql.conf parametes:\n> > =====================\n> > work_mem = 256MB\t\t\t\t# min 64kB\n> > maintenance_work_mem = 256MB\t\t# min 1MB\n> > shared_buffers = 16GB\t\t\t# min 128kB\n> > temp_buffers = 16MB\t\t\t# min 800kB\n> > wal_buffers = 64MB\n> > effective_cache_size = 64GB\n> > max_connections = 600\n\nIt would be interesting to know the output from\n\n sysctl vm.overcommit_memory\n sysctl vm.overcommit_ratio\n\nAlso interesting:\n\n sar -r 1 1\n\nI think that max_connections = 600 is way to high.\n\nAre you running large, complicated queries on that machine? That could\nbe a problem with such a high connection limit.\n\nIs the machine dedicated to PostgreSQL?\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Tue, 23 Jan 2018 19:36:32 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "On Tue, Jan 23, 2018 at 5:59 AM, Rambabu V <[email protected]> wrote:\n\n> > cat PostgreSQL-2018-01-23_060000.csv|grep FATAL\n\nWhat about ERROR, not just FATAL? Or grep for \"out of memory\"\n\n\n\n>> *$ free -mh*\n>> total used free shared buffers cached\n>> Mem: 58G 58G 358M 16G 3.6M 41G\n>> -/+ buffers/cache: 16G 42G\n>> Swap: 9.5G 687M 8.9G\n>>\n>\nThis does not seem like it should be a problem. Is this data collected\nnear the time of the failure?\n\n\n> work_mem = 256MB # min 64kB\n>> max_connections = 600\n>>\n>\nThese look pretty high, especially in combination. Why do you need that\nnumber of connections? Could you use a connection pooler instead? Or do\njust have an application bug (leaked connection handles) that needs to be\nfixed? 
Why do you need that amount of work_mem?\n\n\n> *ps -ef|grep postgres|grep idle|wc -l*\n>> 171\n>>\n>> *ps -ef|grep postgres|wc -l*\n>> 206\n>>\n>\nHow close to the time of the problem was this recorded? How many of the\nidle are 'idle in transaction'?\n\n\n>> PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command\n>> 109063 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 39:55.61\n>> postgres: test sss 10.20.2.228(55174) idle\n>> 24910 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 27:45.35\n>> postgres: testl sss 10.20.2.228(55236) idle\n>> 115539 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 28:22.89\n>> postgres: test sss 10.20.2.228(55184) idle\n>> 9816 postgres 20 0 16.7G 16.4G 16.3G S 0.0 27.8 40:19.57\n>> postgres: test sss 10.20.2.228(55216) idle\n>>\n>\nHow close to the time of the problem was this recorded? Nothing here seems\nto be a problem, because almost all the memory they have resident is shared\nmemory.\n\nIt looks like all your clients decide to run a memory hungry query\nsimultaneously, consume a lot of work_mem, and cause a problem. Then by\nthe time you notice the problem and start collecting information, they are\ndone and things are back to normal.\n\nCheers,\n\nJeff\n\nOn Tue, Jan 23, 2018 at 5:59 AM, Rambabu V <[email protected]> wrote:> > cat PostgreSQL-2018-01-23_060000.csv|grep FATALWhat about ERROR, not just FATAL?  Or grep for \"out of memory\"$ free -mh             total       used       free     shared    buffers     cachedMem:           58G        58G       358M        16G       3.6M        41G-/+ buffers/cache:        16G        42GSwap:         9.5G       687M       8.9GThis does not seem like it should be a problem.  Is this data collected near the time of the failure? work_mem = 256MB # min 64kBmax_connections = 600These look pretty high, especially in combination.  Why do you need that number of connections?  Could you use a connection pooler instead?  Or do just have an application bug (leaked connection handles) that needs to be fixed?  Why do you need that amount of work_mem? ps -ef|grep postgres|grep idle|wc -l171ps -ef|grep postgres|wc -l206How close to the time of the problem was this recorded?  How many of the idle are 'idle in transaction'?    PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command 109063 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 39:55.61 postgres: test sss 10.20.2.228(55174) idle  24910 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 27:45.35 postgres: testl sss 10.20.2.228(55236) idle 115539 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 28:22.89 postgres: test sss 10.20.2.228(55184) idle   9816 postgres   20   0 16.7G 16.4G 16.3G S  0.0 27.8 40:19.57 postgres: test sss   10.20.2.228(55216) idle How close to the time of the problem was this recorded?  Nothing here seems to be a problem, because almost all the memory they have resident is shared memory. It looks like all your clients decide to run a memory hungry query simultaneously, consume a lot of work_mem, and cause a problem.  Then by the time you notice the problem and start collecting information, they are done and things are back to normal.Cheers,Jeff", "msg_date": "Tue, 23 Jan 2018 14:36:53 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "Hi Rambabu,\n\nIf you are finding some <IDLE> sessions then of course your database is\nperfectly alright. 
As <IDLE> sessions won't consume any memory.\n\nKindly specify the issue briefly.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 23 Jan 2018 22:54:01 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "On Tue, Jan 23, 2018 at 10:54:01PM -0700, pavan95 wrote:\n> If you are finding some <IDLE> sessions then of course your database is\n> perfectly alright. As <IDLE> sessions won't consume any memory.\n\nThose have a cost as well when building transaction snapshots. Too much\nof them is no good either, let's not forget that.\n--\nMichael", "msg_date": "Wed, 24 Jan 2018 16:15:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "Then we should find like if there are any idle sessions with uncommitted\ntransactions. Those might be the culprits.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 24 Jan 2018 00:34:15 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" }, { "msg_contents": "Hi,\n\nThe following talk describes an issue with how Linux may handle memory \nallocation for Postgres. The issue may cause many hundreds of megabytes \nnot being released in some cases.\n\nPostgreSQL and RAM usage [Feb 27, 2017]\nhttps://www.youtube.com/watch?v=EgQCxERi35A\nsee between minutes 33 and 39 of the talk\n\nRegards,\nVitaliy\n\nOn 18/01/2018 17:25, Rambabu V wrote:\n> Hi Team,\n>\n> we are seeing idle sessions consuming memory in our database, could \n> you please help me how much memory an idle session can use max and how \n> can we find how much work_mem consuming for single process.\n>\n> we are getting out of memory error,for this i'm asking above questions.\n>\n>\n> Regards,\n>\n> Rambabu Vakada.\n\n\n\n", "msg_date": "Wed, 24 Jan 2018 10:03:56 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help on memory allocation" } ]
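One way to spot the sessions pavan95 describes, connections sitting idle while still holding an open transaction, is a pg_stat_activity query along these lines (valid on 9.6; the one-minute cutoff is only illustrative):

SELECT pid, usename, client_addr,
       now() - xact_start   AS xact_age,
       now() - state_change AS idle_for,
       query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '1 minute'
ORDER BY xact_start;

Once such a backend is confirmed to be stuck it can be ended with pg_terminate_backend(pid), which releases its snapshot and locks.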
[ { "msg_contents": "We have a customer project where Postgres is using too many file handles during peak times (around 150.000)\n\nApart from re-configuring the operating system (CentOS) this could also be mitigated by lowering max_files_per_process.\n\nI wonder what performance implications that has on a server with around 50-100 active connections (through pgBouncer).\n\nOne of the reasons (we think) that Postgres needs that many file handles is the fact that the schema is quite large (in terms of tables and indexes) and the sessions are touching many tables during their lifetime.\n\nMy understanding of the documentation is, that Postgres will work just fine if we lower the limit, it simply releases the cached file handles if the limit is reached. But I have no idea how expensive opening a file handle is in Linux.\n\nSo assuming the sessions (and thus the queries) actually do need that many file handles, what kind of performance impact (if any) is to be expected by lowering that value for Postgres to e.g. 500?\n\nRegards\nThomas\n \n\n", "msg_date": "Fri, 19 Jan 2018 17:48:29 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": true, "msg_subject": "Performance impact of lowering max_files_per_process" }, { "msg_contents": "Thomas Kellerer schrieb am 19.01.2018 um 17:48:\n>\n> I wonder what performance implications that has on a server with\n> around 50-100 active connections (through pgBouncer).\n> \n> My understanding of the documentation is, that Postgres will work\n> just fine if we lower the limit, it simply releases the cached file\n> handles if the limit is reached. But I have no idea how expensive\n> opening a file handle is in Linux.\n> \n> So assuming the sessions (and thus the queries) actually do need that\n> many file handles, what kind of performance impact (if any) is to be\n> expected by lowering that value for Postgres to e.g. 500?\n\nI would be really interested in an answer. \n\n\n\n", "msg_date": "Wed, 24 Jan 2018 08:09:32 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance impact of lowering max_files_per_process" } ]
[ { "msg_contents": "Hi all,\nI need to know the actual execution time of a query, but considering\nthat the data is already cached. I also need to make sure that cached\ndata from other queries is cleared.\nI believe that in order to know the real time of a query it will be\nnecessary to \"warm up\" the data to be inserted in cache.\n\nBelow are the steps suggested by a DBA for me:\n\nStep 1- run ANALYZE on all tables involved before the test;\nStep 2- restart the DBMS (to clear the DBMS cache);\nStep 3- erase the S.O. cache;\nStep 4- execute at least 5 times the same query.\n\nAfter the actual execution time of the query, it would have to take\nthe time of the query that is in the \"median\" among all.\n\nExample:\n\nExecution 1: 07m 58s\nExecution 2: 14m 51s\nExecution 3: 17m 59s\nExecution 4: 17m 55s\nExecution 5: 17m 07s\n\nIn this case to calculate the median, you must first order each\nexecution by its time:\nExecution 1: 07m 58s\nExecution 2: 14m 51s\nExecution 5: 17m 07s\nExecution 4: 17m 55s\nExecution 3: 17m 59s\n\nIn this example the median would be execution 5 (17m 07s). Could\nsomeone tell me if this is a good strategy ?\nDue to being a scientific work, if anyone has a reference of any\narticle or book on this subject, it would be very useful.\n\nBest Regards\nNeto\n\n", "msg_date": "Sun, 21 Jan 2018 10:43:53 -0800", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "query execution time (with cache)" } ]
[ { "msg_contents": "Hi,\n\nI have an issue with sporadic slow insert operations with query duration\nmore than 1 sec while it takes about 50ms in average.\n\nConfiguration:\nOS: Centos 7.2.151\nPostgreSQL: 9.6.3\nCPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\nMemory: total used free shared buff/cache\navailable\nMem: 193166 10324 1856 44522 180985\n137444\nSwap: 0 0 0\nStorage: Well, about 4gb of BBU write cache.\n\nshared_buffers = 32gb\nwork_mem = 128mb\nmax_pred_locks_per_transaction = 8192\n\nThis can occur once a day or not happen for few days while system load is\nthe same. \"Inserts\" are the prepared statement batches with 4-5 inserts.\nNeither excessive memory usage nor disk or cpu utilizations have been\ncatched.\nWal writing rates, checkpoints, anything else from pg_stat_* tables were\nchecked and nothing embarrassing was found.\n\nThere are several scenarious of such long inserts were spotted:\n1. No any locks catched (500ms check intervals)\n2. Wait event is \"buffer_mapping\" - looks like the most common case\n snaphot time | state | trx duration | query duration |\nwait_event_type | wait_event | query\n 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | 00:00:00.524729 |\nLWLockTranche | buffer_mapping | INSERT INTO table..\n 2017-12-22 03:16:00.65814 | active | 00:00:00.012435 | 00:00:00.001855 |\nLWLockTranche | buffer_mapping | INSERT INTO table..\n3. Wait event is \"SerializablePredicateLockListLock\" (I believe the same\nroot cause as previous case)\n4. No any locks catched, but ~39 other backends in parallel are active\n\nI assumed that it can be somehow related to enabled NUMA, but it looks like\nmemory is allocated evenly, zone_reclaim_mode is 0.\nnumactl --hardware\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42\n44 46\nnode 0 size: 130978 MB\nnode 0 free: 1251 MB\nnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43\n45 47\nnode 1 size: 65536 MB\nnode 1 free: 42 MB\nnode distances:\nnode 0 1\n 0: 10 21\n 1: 21 10\n\nnumastat -m\n\nPer-node system memory usage (in MBs):\n Node 0 Node 1 Total\n --------------- --------------- ---------------\nMemTotal 130978.34 65536.00 196514.34\nMemFree 1479.07 212.12 1691.20\nMemUsed 129499.27 65323.88 194823.14\nActive 72241.16 37254.56 109495.73\nInactive 47936.24 24205.40 72141.64\nActive(anon) 21162.41 18978.96 40141.37\nInactive(anon) 1061.94 7522.34 8584.27\nActive(file) 51078.76 18275.60 69354.36\nInactive(file) 46874.30 16683.06 63557.36\nUnevictable 0.00 0.00 0.00\nMlocked 0.00 0.00 0.00\nDirty 0.04 0.02 0.05\nWriteback 0.00 0.00 0.00\nFilePages 116511.36 60923.16 177434.52\nMapped 16507.29 23912.82 40420.11\nAnonPages 3661.55 530.26 4191.81\nShmem 18558.28 25964.74 44523.02\nKernelStack 16.98 5.77 22.75\nPageTables 3943.56 1022.25 4965.81\nNFS_Unstable 0.00 0.00 0.00\nBounce 0.00 0.00 0.00\nWritebackTmp 0.00 0.00 0.00\nSlab 2256.09 1291.53 3547.61\nSReclaimable 2108.29 889.85 2998.14\nSUnreclaim 147.80 401.68 549.47\nAnonHugePages 1824.00 284.00 2108.00\nHugePages_Total 0.00 0.00 0.00\nHugePages_Free 0.00 0.00 0.00\nHugePages_Surp 0.00 0.00 0.00\n\n$ cat /proc/62679/numa_maps | grep N0 | grep zero\n7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) dirty=8419116 mapmax=154\nactive=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4\n\nCould you advise what can cause such occasional long inserts with\nlong-lasting LWlocks?\n\nHi,I have an issue with sporadic slow insert operations with query duration more than 1 sec while it takes about 50ms in 
average.Configuration:OS: Centos 7.2.151PostgreSQL: 9.6.3CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHzMemory:           total        used        free      shared     buff/cache  available Mem:         193166       10324        1856      44522      180985      137444 Swap:             0           0           0Storage: Well,  about 4gb of BBU write cache.shared_buffers = 32gbwork_mem = 128mbmax_pred_locks_per_transaction = 8192This can occur once a day or not happen for few days while system load is the same. \"Inserts\" are the prepared statement batches with 4-5 inserts.Neither excessive memory usage nor disk or cpu utilizations have been catched.Wal writing rates, checkpoints, anything else from pg_stat_* tables were checked and nothing embarrassing was found.There are several scenarious of such long inserts were spotted:1. No any locks catched (500ms check intervals)2. Wait event is \"buffer_mapping\" - looks like the most common case snaphot time | state | trx duration    | query duration   | wait_event_type | wait_event     | query 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | 00:00:00.524729  | LWLockTranche   | buffer_mapping | INSERT INTO table.. 2017-12-22 03:16:00.65814  | active | 00:00:00.012435 | 00:00:00.001855  | LWLockTranche   | buffer_mapping | INSERT INTO table..3. Wait event is \"SerializablePredicateLockListLock\" (I believe the same root cause as previous case)4. No any locks catched, but ~39 other backends in parallel are active I assumed that it can be somehow related to enabled NUMA, but it looks like memory is allocated evenly, zone_reclaim_mode is 0.numactl --hardwareavailable: 2 nodes (0-1)node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46node 0 size: 130978 MBnode 0 free: 1251 MBnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47node 1 size: 65536 MBnode 1 free: 42 MBnode distances:node   0   1   0:  10  21   1:  21  10   numastat -mPer-node system memory usage (in MBs):                          Node 0          Node 1           Total                 --------------- --------------- ---------------MemTotal               130978.34        65536.00       196514.34MemFree                  1479.07          212.12         1691.20MemUsed                129499.27        65323.88       194823.14Active                  72241.16        37254.56       109495.73Inactive                47936.24        24205.40        72141.64Active(anon)            21162.41        18978.96        40141.37Inactive(anon)           1061.94         7522.34         8584.27Active(file)            51078.76        18275.60        69354.36Inactive(file)          46874.30        16683.06        63557.36Unevictable                 0.00            0.00            0.00Mlocked                     0.00            0.00            0.00Dirty                       0.04            0.02            0.05Writeback                   0.00            0.00            0.00FilePages              116511.36        60923.16       177434.52Mapped                  16507.29        23912.82        40420.11AnonPages                3661.55          530.26         4191.81Shmem                   18558.28        25964.74        44523.02KernelStack                16.98            5.77           22.75PageTables               3943.56         1022.25         4965.81NFS_Unstable                0.00            0.00            0.00Bounce                      0.00            0.00            0.00WritebackTmp                0.00            0.00            0.00Slab                
     2256.09         1291.53         3547.61SReclaimable             2108.29          889.85         2998.14SUnreclaim                147.80          401.68          549.47AnonHugePages            1824.00          284.00         2108.00HugePages_Total             0.00            0.00            0.00HugePages_Free              0.00            0.00            0.00HugePages_Surp              0.00            0.00            0.00$ cat /proc/62679/numa_maps | grep N0 | grep zero7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) dirty=8419116 mapmax=154 active=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4Could you advise what can cause such occasional long inserts with long-lasting LWlocks?", "msg_date": "Mon, 22 Jan 2018 21:43:32 +0300", "msg_from": "Pavel Suderevsky <[email protected]>", "msg_from_op": true, "msg_subject": "PG 9.6 Slow inserts with long-lasting LWLocks" }, { "msg_contents": "Hi, \n\nWell, unfortunately I still need community help.\n\n-- Environment\nOS: Centos CentOS Linux release 7.2.1511\nKernel: 3.10.0-327.36.3.el7.x86_64\nPostgreSQL: 9.6.3\n-- Hardware\nServer: Dell PowerEdge R430\nCPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\nRaid controller: PERC H730 Mini (1GB cache)\nDisks: 8 x 10K RPM SAS 12GB/s 2.5 (ST1200MM0088) in RAID 6\nRAM: 192GB (M393A2G40DB0-CPB x 16)\nFor more detailed hardware info please see attached configuration.txt\n-- postgresql.conf\nmax_connections = 2048\nshared_buffers = 48GB\ntemp_buffers = 128MB\nwork_mem = 256MB\nmaintenance_work_mem = 512MB\ndynamic_shared_memory_type = posix\nwal_level = hot_standby\nmin_wal_size = 4GB\nmax_wal_size = 32GB\nhuge_pages = on\n+\nnumactl interleave=all\n-- sysctl.conf \nkernel.shmmax=64424509440\nkernel.shmall=4294967296\nkernel.sem = 1024 32767 128 16384\nfs.aio-max-nr=3145728\nfs.file-max = 6815744\nnet.core.rmem_default=262144\nnet.core.rmem_max=4194304\nnet.core.wmem_default=262144\nnet.core.wmem_max=1048586\nvm.nr_hugepages=33000\nvm.dirty_background_bytes=67108864\nvm.dirty_bytes=536870912\nvm.min_free_kbytes=1048576\nzone_reclaim_mode=0\n\nAgain: problem is the occasional long inserts that can happen 1-5 times per day on OLTP system.\nNo autovacuum performed during long inserts. WAL rate is 1-2Gb per hour, no correlation spotted with this issue.\nWait event \"buffer_mapping\" happen for appropriate transactions but not every time (maybe just not every time catched).\nI have two suspects for such behaviour: I/O system and high concurrency.\nThere is a problem with one application that frequently recreates up to 90 sessions but investigation shows that there is no direct correlation between such sessions and long transactions, at least it is not the root cause of the issue (of course such app behaviour will be fixed).\n\nThe investigation and tracing with strace in particular showed that:\n1. The only long event straced from postgres backends was <... semop resumed>.\n2. Seems the whole host gets hung during such events. \n\nExample:\nJava application located on separate host reports several long transactions:\n123336.943 - [1239588mks]: event.insert-table\n123336.943 - [1240827mks]: event.insert-table\n123337.019 - [1292534mks]: event.insert-table\n143353.542 - [5467657mks]: event.insert-table\n143353.543 - [5468884mks]: event.insert-table\n152338.763 - [1264588mks]: event.insert-table\n152338.765 - [2054887mks]: event.insert-table\n\nStrace output for event happened at 14:33 with particular known pid:\n119971 14:33:48.075375 epoll_wait(3, <unfinished ...>\n119971 14:33:48.075696 <... 
epoll_wait resumed> {{EPOLLIN, {u32=27532016, u64=27532016}}}, 1, -1) = 1 <0.000313>\n119971 14:33:48.075792 recvfrom(9, <unfinished ...>\n119971 14:33:48.075866 <... recvfrom resumed> \"B\\0\\0\\3\\27\\0S_21\\0\\0*\\0\\1\\0\\1\\0\\1\\0\\0\\0\\0\\0\\1\\0\\1\\0\\0\\0\\0\\0\"..., 8192, 0, NULL, NULL) = 807 <0.000066>\n119971 14:33:48.076243 semop(26706044, {{8, -1, 0}}, 1 <unfinished ...>\n120019 14:33:48.119971 recvfrom(9, <unfinished ...>\n119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772> \n119971 14:33:53.500356 lseek(18, 0, SEEK_END <unfinished ...>\n119971 14:33:53.500436 <... lseek resumed> ) = 107790336 <0.000072>\n119971 14:33:53.500514 lseek(20, 0, SEEK_END <unfinished ...>\n\nChecking strace long semop calls for whole day:\nroot@host [20180314 17:47:36]:/home/user$ egrep \" <[1-9].\" /tmp/strace | grep semop\n119991 12:33:36 <... semop resumed> ) = 0 <1.419394>\n119942 12:33:36 <... semop resumed> ) = 0 <1.422554>\n119930 12:33:36 <... semop resumed> ) = 0 <1.414916>\n119988 12:33:36 <... semop resumed> ) = 0 <1.213309>\n119966 12:33:36 <... semop resumed> ) = 0 <1.237492>\n119958 14:33:53.489398 <... semop resumed> ) = 0 <5.455830>\n120019 14:33:53.490613 <... semop resumed> ) = 0 <5.284505>\n119997 14:33:53.490638 <... semop resumed> ) = 0 <5.111661>\n120000 14:33:53.490649 <... semop resumed> ) = 0 <3.521992>\n119991 14:33:53.490660 <... semop resumed> ) = 0 <2.522460>\n119988 14:33:53.490670 <... semop resumed> ) = 0 <5.252485>\n120044 14:33:53.490834 <... semop resumed> ) = 0 <1.718129>\n119976 14:33:53.490852 <... semop resumed> ) = 0 <2.489563>\n119974 14:33:53.490862 <... semop resumed> ) = 0 <1.520801>\n119984 14:33:53.491011 <... semop resumed> ) = 0 <1.213411>\n119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772>\n119969 14:33:53.491039 <... semop resumed> ) = 0 <2.275608>\n119966 14:33:53.491048 <... semop resumed> ) = 0 <2.526024>\n119942 14:33:53.491058 <... semop resumed> ) = 0 <5.448506>\n119964 15:23:38.746394 <... semop resumed> ) = 0 <2.034851>\n119960 15:23:38.746426 <... semop resumed> ) = 0 <2.038321>\n119966 15:23:38.752646 <... semop resumed> ) = 0 <1.252342>\n\nAlso it was spotted that WALWriter Postgres backend also spend time in <semop resumed> during hangs.\n\nAlso I have application on db host that performs pg_stat_activity shapshots every 500m and for example I can see that there were no snapshot between 14:33:47 and 14:33:53.\nSeparate simple script on db host every ~100ms checks ps output for this application and writes it into the txt file. And we can see that while it usually performs about 7-8 times per second, between 14:33:47 and 14:33:53 it couldn't even perform enough ps calls. Strace for this backend showed that this process was hung in semop call. 
So it tells me that whole system gets hung.\n14:33:40 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:41 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:42 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:43 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:44 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:45 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:46 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:47 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:48 TOTAL=3 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:49 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:50 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:51 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:52 TOTAL=4 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:53 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:54 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:55 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n\nI understand that RAID-6 is not the best option, but I can't catch any evidence telling that system run out of 1GB RAID controller cache on writes.\n\nPlease assist in understanding meaning and nature of long semop calls appearances.\n\n--\nRegards,\nPavel Suderevsky\n\n\n\nFrom: Pavel Suderevsky\nSent: Monday, January 22, 2018 21:43\nTo: [email protected]\nSubject: PG 9.6 Slow inserts with long-lasting LWLocks\n\nHi,\n\nI have an issue with sporadic slow insert operations with query duration more than 1 sec while it takes about 50ms in average.\n\nConfiguration:\nOS: Centos 7.2.151\nPostgreSQL: 9.6.3\nCPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\nMemory:           total        used        free      shared     buff/cache  available\n\tMem:         193166       10324        1856      44522      180985      137444\n\tSwap:             0           0           0\nStorage: Well,  about 4gb of BBU write cache.\n\nshared_buffers = 32gb\nwork_mem = 128mb\nmax_pred_locks_per_transaction = 8192\n\nThis can occur once a day or not happen for few days while system load is the same. \"Inserts\" are the prepared statement batches with 4-5 inserts.\nNeither excessive memory usage nor disk or cpu utilizations have been catched.\nWal writing rates, checkpoints, anything else from pg_stat_* tables were checked and nothing embarrassing was found.\n\nThere are several scenarious of such long inserts were spotted:\n1. No any locks catched (500ms check intervals)\n2. Wait event is \"buffer_mapping\" - looks like the most common case\n snaphot time\t\t\t\t| state\t | trx duration    | query duration   | wait_event_type | wait_event     | query\n 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | 00:00:00.524729  | LWLockTranche   | buffer_mapping | INSERT INTO table..\n 2017-12-22 03:16:00.65814  | active | 00:00:00.012435 | 00:00:00.001855  | LWLockTranche   | buffer_mapping | INSERT INTO table..\n3. Wait event is \"SerializablePredicateLockListLock\" (I believe the same root cause as previous case)\n4. 
No any locks catched, but ~39 other backends in parallel are active \n\nI assumed that it can be somehow related to enabled NUMA, but it looks like memory is allocated evenly, zone_reclaim_mode is 0.\nnumactl --hardware\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46\nnode 0 size: 130978 MB\nnode 0 free: 1251 MB\nnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47\nnode 1 size: 65536 MB\nnode 1 free: 42 MB\nnode distances:\nnode   0   1 \n  0:  10  21 \n  1:  21  10 \n  \nnumastat -m\n\nPer-node system memory usage (in MBs):\n                          Node 0          Node 1           Total\n                 --------------- --------------- ---------------\nMemTotal               130978.34        65536.00       196514.34\nMemFree                  1479.07          212.12         1691.20\nMemUsed                129499.27        65323.88       194823.14\nActive                  72241.16        37254.56       109495.73\nInactive                47936.24        24205.40        72141.64\nActive(anon)            21162.41        18978.96        40141.37\nInactive(anon)           1061.94         7522.34         8584.27\nActive(file)            51078.76        18275.60        69354.36\nInactive(file)          46874.30        16683.06        63557.36\nUnevictable                 0.00            0.00            0.00\nMlocked                     0.00            0.00            0.00\nDirty                       0.04            0.02            0.05\nWriteback                   0.00            0.00            0.00\nFilePages              116511.36        60923.16       177434.52\nMapped                  16507.29        23912.82        40420.11\nAnonPages                3661.55          530.26         4191.81\nShmem                   18558.28        25964.74        44523.02\nKernelStack                16.98            5.77           22.75\nPageTables               3943.56         1022.25         4965.81\nNFS_Unstable                0.00            0.00            0.00\nBounce                      0.00            0.00            0.00\nWritebackTmp                0.00            0.00            0.00\nSlab                     2256.09         1291.53         3547.61\nSReclaimable             2108.29          889.85         2998.14\nSUnreclaim                147.80          401.68          549.47\nAnonHugePages            1824.00          284.00         2108.00\nHugePages_Total             0.00            0.00            0.00\nHugePages_Free              0.00            0.00            0.00\nHugePages_Surp              0.00            0.00            0.00\n\n$ cat /proc/62679/numa_maps | grep N0 | grep zero\n7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) dirty=8419116 mapmax=154 active=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4\n\nCould you advise what can cause such occasional long inserts with long-lasting LWlocks?", "msg_date": "Thu, 15 Mar 2018 13:29:33 +0300", "msg_from": "Pavel Suderevsky <[email protected]>", "msg_from_op": true, "msg_subject": "RE: PG 9.6 Slow inserts with long-lasting LWLocks" }, { "msg_contents": "Sporadic insert slowness could be due to lock delays (locktype=extend) \ndue to many concurrent connections trying to insert into the same table \nat the same time. Each insert request may result in an extend lock (8k \nextension), which blocks other writers. 
What normally happens is the \nthese extend locks happen so fast that you hardly ever see them in the \npg_locks table, except in the case where many concurrent connections are \ntrying to do inserts into the same table. The following query will show \nif this is the case if you execute it during the time the problem is \noccurring.\n\nselect * from pg_locks where granted = false and locktype = 'extend';\n\n\nI don't know if this is your particular problem, but perhaps it is.\n\nRegards,\nMichael Vitale\n> Pavel Suderevsky <mailto:[email protected]>\n> Thursday, March 15, 2018 6:29 AM\n>\n> Hi,\n>\n> Well, unfortunately I still need community help.\n>\n> -- Environment\n>\n> OS: Centos CentOS Linux release 7.2.1511\n>\n> Kernel: 3.10.0-327.36.3.el7.x86_64\n>\n> PostgreSQL: 9.6.3\n>\n> -- Hardware\n>\n> Server: Dell PowerEdge R430\n>\n> CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\n>\n> Raid controller: PERC H730 Mini (1GB cache)\n>\n> Disks: 8 x 10K RPM SAS 12GB/s 2.5 (ST1200MM0088) in RAID 6\n>\n> RAM: 192GB (M393A2G40DB0-CPB x 16)\n>\n> For more detailed hardware info please see attached configuration.txt\n>\n> -- postgresql.conf\n>\n> max_connections = 2048\n>\n> shared_buffers = 48GB\n>\n> temp_buffers = 128MB\n>\n> work_mem = 256MB\n>\n> maintenance_work_mem = 512MB\n>\n> dynamic_shared_memory_type = posix\n>\n> wal_level = hot_standby\n>\n> min_wal_size = 4GB\n>\n> max_wal_size = 32GB\n>\n> huge_pages = on\n>\n> +\n>\n> numactl interleave=all\n>\n> -- sysctl.conf\n>\n> kernel.shmmax=64424509440\n>\n> kernel.shmall=4294967296\n>\n> kernel.sem = 1024 32767 128 16384\n>\n> fs.aio-max-nr=3145728\n>\n> fs.file-max = 6815744\n>\n> net.core.rmem_default=262144\n>\n> net.core.rmem_max=4194304\n>\n> net.core.wmem_default=262144\n>\n> net.core.wmem_max=1048586\n>\n> vm.nr_hugepages=33000\n>\n> vm.dirty_background_bytes=67108864\n>\n> vm.dirty_bytes=536870912\n>\n> vm.min_free_kbytes=1048576\n>\n> zone_reclaim_mode=0\n>\n> Again: problem is the occasional long inserts that can happen 1-5 \n> times per day on OLTP system.\n>\n> No autovacuum performed during long inserts. WAL rate is 1-2Gb per \n> hour, no correlation spotted with this issue.\n>\n> Wait event \"buffer_mapping\" happen for appropriate transactions but \n> not every time (maybe just not every time catched).\n>\n> I have two suspects for such behaviour: I/O system and high concurrency.\n>\n> There is a problem with one application that frequently recreates up \n> to 90 sessions but investigation shows that there is no direct \n> correlation between such sessions and long transactions, at least it \n> is not the root cause of the issue (of course such app behaviour will \n> be fixed).\n>\n> The investigation and tracing with strace in particular showed that:\n>\n> 1. The only long event straced from postgres backends was <... semop \n> resumed>.\n>\n> 2. Seems the whole host gets hung during such events.\n>\n> Example:\n>\n> Java application located on separate host reports several long \n> transactions:\n>\n> 123336.943 - [1239588mks]: event.insert-table\n>\n> 123336.943 - [1240827mks]: event.insert-table\n>\n> 123337.019 - [1292534mks]: event.insert-table\n>\n> 143353.542 - [5467657mks]: event.insert-table\n>\n> 143353.543 - [5468884mks]: event.insert-table\n>\n> 152338.763 - [1264588mks]: event.insert-table\n>\n> 152338.765 - [2054887mks]: event.insert-table\n>\n> Strace output for event happened at 14:33 with particular known pid:\n>\n> 119971 14:33:48.075375 epoll_wait(3, <unfinished ...>\n>\n> 119971 14:33:48.075696 <... 
epoll_wait resumed> {{EPOLLIN, \n> {u32=27532016, u64=27532016}}}, 1, -1) = 1 <0.000313>\n>\n> 119971 14:33:48.075792 recvfrom(9, <unfinished ...>\n>\n> 119971 14:33:48.075866 <... recvfrom resumed> \n> \"B\\0\\0\\3\\27\\0S_21\\0\\0*\\0\\1\\0\\1\\0\\1\\0\\0\\0\\0\\0\\1\\0\\1\\0\\0\\0\\0\\0\"..., \n> 8192, 0, NULL, NULL) = 807 <0.000066>\n>\n> 119971 14:33:48.076243 semop(26706044, {{8, -1, 0}}, 1 <unfinished ...>\n>\n> 120019 14:33:48.119971 recvfrom(9, <unfinished ...>\n>\n> 119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772>\n>\n> 119971 14:33:53.500356 lseek(18, 0, SEEK_END <unfinished ...>\n>\n> 119971 14:33:53.500436 <... lseek resumed> ) = 107790336 <0.000072>\n>\n> 119971 14:33:53.500514 lseek(20, 0, SEEK_END <unfinished ...>\n>\n> Checking strace long semop calls for whole day:\n>\n> root@host [20180314 17:47:36]:/home/user$ egrep \" <[1-9].\" /tmp/strace \n> | grep semop\n>\n> 119991 12:33:36 <... semop resumed> ) = 0 <1.419394>\n>\n> 119942 12:33:36 <... semop resumed> ) = 0 <1.422554>\n>\n> 119930 12:33:36 <... semop resumed> ) = 0 <1.414916>\n>\n> 119988 12:33:36 <... semop resumed> ) = 0 <1.213309>\n>\n> 119966 12:33:36 <... semop resumed> ) = 0 <1.237492>\n>\n> 119958 14:33:53.489398 <... semop resumed> ) = 0 <5.455830>\n>\n> 120019 14:33:53.490613 <... semop resumed> ) = 0 <5.284505>\n>\n> 119997 14:33:53.490638 <... semop resumed> ) = 0 <5.111661>\n>\n> 120000 14:33:53.490649 <... semop resumed> ) = 0 <3.521992>\n>\n> 119991 14:33:53.490660 <... semop resumed> ) = 0 <2.522460>\n>\n> 119988 14:33:53.490670 <... semop resumed> ) = 0 <5.252485>\n>\n> 120044 14:33:53.490834 <... semop resumed> ) = 0 <1.718129>\n>\n> 119976 14:33:53.490852 <... semop resumed> ) = 0 <2.489563>\n>\n> 119974 14:33:53.490862 <... semop resumed> ) = 0 <1.520801>\n>\n> 119984 14:33:53.491011 <... semop resumed> ) = 0 <1.213411>\n>\n> 119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772>\n>\n> 119969 14:33:53.491039 <... semop resumed> ) = 0 <2.275608>\n>\n> 119966 14:33:53.491048 <... semop resumed> ) = 0 <2.526024>\n>\n> 119942 14:33:53.491058 <... semop resumed> ) = 0 <5.448506>\n>\n> 119964 15:23:38.746394 <... semop resumed> ) = 0 <2.034851>\n>\n> 119960 15:23:38.746426 <... semop resumed> ) = 0 <2.038321>\n>\n> 119966 15:23:38.752646 <... semop resumed> ) = 0 <1.252342>\n>\n> Also it was spotted that WALWriter Postgres backend also spend time in \n> <semop resumed> during hangs.\n>\n> Also I have application on db host that performs pg_stat_activity \n> shapshots every 500m and for example I can see that there were no \n> snapshot between 14:33:47 and 14:33:53.\n>\n> Separate simple script on db host every ~100ms checks ps output for \n> this application and writes it into the txt file. And we can see that \n> while it usually performs about 7-8 times per second, between 14:33:47 \n> and 14:33:53 it couldn't even perform enough ps calls. Strace for this \n> backend showed that this process was hung in semop call. 
So it tells \n> me that whole system gets hung.\n>\n> 14:33:40 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:41 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:42 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:43 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:44 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:45 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:46 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:47 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:48 TOTAL=3 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:49 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:50 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:51 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:52 TOTAL=4 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:53 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:54 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> 14:33:55 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 \n> get_request=0 sleep_on_buffer=0\n>\n> I understand that RAID-6 is not the best option, but I can't catch any \n> evidence telling that system run out of 1GB RAID controller cache on \n> writes.\n>\n> Please assist in understanding meaning and nature of long semop calls \n> appearances.\n>\n> --\n>\n> Regards,\n>\n> Pavel Suderevsky\n>\n> *From: *Pavel Suderevsky <mailto:[email protected]>\n> *Sent: *Monday, January 22, 2018 21:43\n> *To: *[email protected] \n> <mailto:[email protected]>\n> *Subject: *PG 9.6 Slow inserts with long-lasting LWLocks\n>\n> Hi,\n>\n> I have an issue with sporadic slow insert operations with query \n> duration more than 1 sec while it takes about 50ms in average.\n>\n> Configuration:\n>\n> OS: Centos 7.2.151\n>\n> PostgreSQL: 9.6.3\n>\n> CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\n>\n> Memory: total used free shared \n> buff/cache available\n>\n> Mem: 193166 10324 1856 \n> 44522 180985 137444\n>\n> Swap: 0 0 0\n>\n> Storage: Well, about 4gb of BBU write cache.\n>\n> shared_buffers = 32gb\n>\n> work_mem = 128mb\n>\n> max_pred_locks_per_transaction = 8192\n>\n> This can occur once a day or not happen for few days while system load \n> is the same. \"Inserts\" are the prepared statement batches with 4-5 \n> inserts.\n>\n> Neither excessive memory usage nor disk or cpu utilizations have been \n> catched.\n>\n> Wal writing rates, checkpoints, anything else from pg_stat_* tables \n> were checked and nothing embarrassing was found.\n>\n> There are several scenarious of such long inserts were spotted:\n>\n> 1. No any locks catched (500ms check intervals)\n>\n> 2. 
Wait event is \"buffer_mapping\" - looks like the most common case\n>\n> snaphot time | \n> state | trx duration | query duration | wait_event_type | \n> wait_event | query\n>\n> 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | \n> 00:00:00.524729 | LWLockTranche | buffer_mapping | INSERT INTO table..\n>\n> 2017-12-22 03:16:00.65814 | active | 00:00:00.012435 | \n> 00:00:00.001855 | LWLockTranche | buffer_mapping | INSERT INTO table..\n>\n> 3. Wait event is \"SerializablePredicateLockListLock\" (I believe the \n> same root cause as previous case)\n>\n> 4. No any locks catched, but ~39 other backends in parallel are active\n>\n> I assumed that it can be somehow related to enabled NUMA, but it looks \n> like memory is allocated evenly, zone_reclaim_mode is 0.\n>\n> numactl --hardware\n>\n> available: 2 nodes (0-1)\n>\n> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 \n> 42 44 46\n>\n> node 0 size: 130978 MB\n>\n> node 0 free: 1251 MB\n>\n> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 \n> 43 45 47\n>\n> node 1 size: 65536 MB\n>\n> node 1 free: 42 MB\n>\n> node distances:\n>\n> node 0 1\n>\n> 0: 10 21\n>\n> 1: 21 10\n>\n> numastat -m\n>\n> Per-node system memory usage (in MBs):\n>\n> Node 0 Node 1 Total\n>\n> --------------- --------------- ---------------\n>\n> MemTotal 130978.34 65536.00 196514.34\n>\n> MemFree 1479.07 212.12 1691.20\n>\n> MemUsed 129499.27 65323.88 194823.14\n>\n> Active 72241.16 37254.56 109495.73\n>\n> Inactive 47936.24 24205.40 72141.64\n>\n> Active(anon) 21162.41 18978.96 40141.37\n>\n> Inactive(anon) 1061.94 7522.34 8584.27\n>\n> Active(file) 51078.76 18275.60 69354.36\n>\n> Inactive(file) 46874.30 16683.06 63557.36\n>\n> Unevictable 0.00 0.00 0.00\n>\n> Mlocked 0.00 0.00 0.00\n>\n> Dirty 0.04 0.02 0.05\n>\n> Writeback 0.00 0.00 0.00\n>\n> FilePages 116511.36 60923.16 177434.52\n>\n> Mapped 16507.29 23912.82 40420.11\n>\n> AnonPages 3661.55 530.26 4191.81\n>\n> Shmem 18558.28 25964.74 44523.02\n>\n> KernelStack 16.98 5.77 22.75\n>\n> PageTables 3943.56 1022.25 4965.81\n>\n> NFS_Unstable 0.00 0.00 0.00\n>\n> Bounce 0.00 0.00 0.00\n>\n> WritebackTmp 0.00 0.00 0.00\n>\n> Slab 2256.09 1291.53 3547.61\n>\n> SReclaimable 2108.29 889.85 2998.14\n>\n> SUnreclaim 147.80 401.68 549.47\n>\n> AnonHugePages 1824.00 284.00 2108.00\n>\n> HugePages_Total 0.00 0.00 0.00\n>\n> HugePages_Free 0.00 0.00 0.00\n>\n> HugePages_Surp 0.00 0.00 0.00\n>\n> $ cat /proc/62679/numa_maps | grep N0 | grep zero\n>\n> 7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) dirty=8419116 \n> mapmax=154 active=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4\n>\n> Could you advise what can cause such occasional long inserts with \n> long-lasting LWlocks?\n>\n\n\n\n\n\nSporadic\n insert slowness could be due to lock delays (locktype=extend) due to \nmany concurrent connections trying to insert into the same table at the \nsame time. Each insert request may result in an extend lock (8k \nextension), which blocks other writers. What normally happens is the \nthese extend locks happen so fast that you hardly ever see them in the \npg_locks table, except in the case where many concurrent connections are\n trying to do inserts into the same table. 
The following query will show\n if this is the case if you execute it during the time the problem is \noccurring.\n\nselect * from pg_locks where granted = false and locktype = 'extend';\n\n\nI don't know if this is your particular problem, but perhaps it \nis.\n\nRegards,\nMichael Vitale\n\n \nPavel Suderevsky Thursday,\n March 15, 2018 6:29 AM \nHi,  Well, \nunfortunately I still need community help. -- EnvironmentOS: Centos \nCentOS Linux release 7.2.1511Kernel:  \n3.10.0-327.36.3.el7.x86_64PostgreSQL: 9.6.3-- HardwareServer: Dell \nPowerEdge R430CPU: Intel(R) Xeon(R) CPU E5-2680\n v3 @ 2.50GHzRaid controller: PERC H730 Mini \n(1GB cache)Disks: 8 x 10K RPM SAS 12GB/s 2.5 \n(ST1200MM0088) in RAID 6RAM: 192GB \n(M393A2G40DB0-CPB x 16)For more detailed \nhardware info please see attached configuration.txt-- postgresql.confmax_connections\n = 2048shared_buffers = 48GBtemp_buffers = 128MBwork_mem =\n 256MBmaintenance_work_mem = 512MBdynamic_shared_memory_type = posixwal_level = hot_standbymin_wal_size\n = 4GBmax_wal_size = 32GBhuge_pages = on+numactl interleave=all-- \nsysctl.conf kernel.shmmax=64424509440kernel.shmall=4294967296kernel.sem\n = 1024 32767 128 16384fs.aio-max-nr=3145728fs.file-max = 6815744net.core.rmem_default=262144net.core.rmem_max=4194304net.core.wmem_default=262144net.core.wmem_max=1048586vm.nr_hugepages=33000vm.dirty_background_bytes=67108864vm.dirty_bytes=536870912vm.min_free_kbytes=1048576zone_reclaim_mode=0 Again: problem is the occasional long inserts that \ncan happen 1-5 times per day on OLTP system.No \nautovacuum performed during long inserts. WAL rate is 1-2Gb per hour, no\n correlation spotted with this issue.Wait event\n \"buffer_mapping\" happen for appropriate transactions but not every time\n (maybe just not every time catched).I have two\n suspects for such behaviour: I/O system and high concurrency.There is a problem with one application that \nfrequently recreates up to 90 sessions but investigation shows that \nthere is no direct correlation between such sessions and long \ntransactions, at least it is not the root cause of the issue (of course \nsuch app behaviour will be fixed). The investigation and tracing with strace in \nparticular showed that:1. The only long event \nstraced from postgres backends was <... semop resumed>.2. Seems the whole host gets hung during such events.  Example:Java application located on separate host reports \nseveral long transactions:123336.943 - \n[1239588mks]: event.insert-table123336.943 - \n[1240827mks]: event.insert-table123337.019 - \n[1292534mks]: event.insert-table143353.542 - \n[5467657mks]: event.insert-table143353.543 - \n[5468884mks]: event.insert-table152338.763 - \n[1264588mks]: event.insert-table152338.765 - \n[2054887mks]: event.insert-table Strace output for event happened at 14:33 with \nparticular known pid:119971 14:33:48.075375 \nepoll_wait(3,  <unfinished ...>119971 \n14:33:48.075696 <... epoll_wait resumed> {{EPOLLIN, {u32=27532016,\n u64=27532016}}}, 1, -1) = 1 <0.000313>119971\n 14:33:48.075792 recvfrom(9,  <unfinished ...>119971 14:33:48.075866 <... recvfrom resumed> \n\"B\\0\\0\\3\\27\\0S_21\\0\\0*\\0\\1\\0\\1\\0\\1\\0\\0\\0\\0\\0\\1\\0\\1\\0\\0\\0\\0\\0\"..., 8192, \n0, NULL, NULL) = 807 <0.000066>119971 \n14:33:48.076243 semop(26706044, {{8, -1, 0}}, 1 <unfinished ...>120019 14:33:48.119971 recvfrom(9,  <unfinished \n...>119971 14:33:53.491029 <... semop \nresumed> ) = 0 <5.414772> 119971 \n14:33:53.500356 lseek(18, 0, SEEK_END <unfinished ...>119971 14:33:53.500436 <... 
lseek resumed> ) = \n107790336 <0.000072>119971 \n14:33:53.500514 lseek(20, 0, SEEK_END <unfinished ...> Checking strace \nlong semop calls for whole day:root@host \n[20180314 17:47:36]:/home/user$ egrep \" <[1-9].\" /tmp/strace | grep \nsemop119991 12:33:36 <... semop resumed> \n)   = 0 <1.419394>119942 12:33:36 <...\n semop resumed> )   = 0 <1.422554>119930\n 12:33:36 <... semop resumed> )   = 0 <1.414916>119988 12:33:36 <... semop resumed> )   = 0 \n<1.213309>119966 12:33:36 <... semop \nresumed> )   = 0 <1.237492>119958 \n14:33:53.489398 <... semop resumed> ) = 0 <5.455830>120019 14:33:53.490613 <... semop resumed> ) = 0\n <5.284505>119997 14:33:53.490638 <...\n semop resumed> ) = 0 <5.111661>120000\n 14:33:53.490649 <... semop resumed> ) = 0 <3.521992>119991 14:33:53.490660 <... semop resumed> ) = 0\n <2.522460>119988 14:33:53.490670 <...\n semop resumed> ) = 0 <5.252485>120044\n 14:33:53.490834 <... semop resumed> ) = 0 <1.718129>119976 14:33:53.490852 <... semop resumed> ) = 0\n <2.489563>119974 14:33:53.490862 <...\n semop resumed> ) = 0 <1.520801>119984\n 14:33:53.491011 <... semop resumed> ) = 0 <1.213411>119971 14:33:53.491029 <... semop resumed> ) = 0\n <5.414772>119969 14:33:53.491039 <...\n semop resumed> ) = 0 <2.275608>119966\n 14:33:53.491048 <... semop resumed> ) = 0 <2.526024>119942 14:33:53.491058 <... semop resumed> ) = 0\n <5.448506>119964 15:23:38.746394 <...\n semop resumed> ) = 0 <2.034851>119960\n 15:23:38.746426 <... semop resumed> ) = 0 <2.038321>119966 15:23:38.752646 <... semop resumed> ) = 0\n <1.252342> Also it was spotted that WALWriter Postgres backend \nalso spend time in <semop resumed> during hangs. Also I have \napplication on db host that performs pg_stat_activity shapshots every \n500m and for example I can see that there were no snapshot between \n14:33:47 and 14:33:53.Separate simple script on\n db host every ~100ms checks ps output for this application and writes \nit into the txt file. And we can see that while it usually performs \nabout 7-8 times per second, between 14:33:47 and 14:33:53 it couldn't \neven perform enough ps calls. Strace for this backend showed that this \nprocess was hung in semop call. 
So it tells me that whole system gets \nhung.14:33:40 TOTAL=7 wait_transaction_locked=0\n sleep_on_page=0 get_request=0 sleep_on_buffer=014:33:41\n TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:42 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:43 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:44 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:45 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:46 TOTAL=6 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:47 TOTAL=2 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:48 TOTAL=3 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:49 TOTAL=2 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:50 TOTAL=2 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:51 TOTAL=2 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:52 TOTAL=4 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:53 TOTAL=6 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:54 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=014:33:55 TOTAL=7 \nwait_transaction_locked=0 sleep_on_page=0 get_request=0 \nsleep_on_buffer=0 I understand that RAID-6 is not the best option, but I\n can't catch any evidence telling that system run out of 1GB RAID \ncontroller cache on writes. Please assist in understanding meaning and nature of \nlong semop calls appearances. --Regards,Pavel Suderevsky   From: Pavel SuderevskySent: Monday,\n January 22, 2018 21:43To: [email protected]:\n PG 9.6 Slow inserts with long-lasting LWLocks Hi, I \nhave an issue with sporadic slow insert operations with query duration \nmore than 1 sec while it takes about 50ms in average. Configuration:OS: Centos 7.2.151PostgreSQL:\n 9.6.3CPU: Intel(R) Xeon(R) CPU \nE5-2680 v3 @ 2.50GHzMemory:         \n  total        used        free      shared     buff/cache  available                Mem:         193166       10324       \n 1856      44522      180985      137444                Swap:             0           0       \n    0Storage: Well,  about 4gb of \nBBU write cache. shared_buffers = 32gbwork_mem = 128mbmax_pred_locks_per_transaction\n = 8192 This can occur once a day or not happen for few days \nwhile system load is the same. \"Inserts\" are the prepared statement \nbatches with 4-5 inserts.Neither \nexcessive memory usage nor disk or cpu utilizations have been catched.Wal writing rates, checkpoints, anything else from \npg_stat_* tables were checked and nothing embarrassing was found. There\n are several scenarious of such long inserts were spotted:1. No any locks catched (500ms check intervals)2. Wait event is \"buffer_mapping\" - looks like the \nmost common case snaphot \ntime                                                    | state  | trx \nduration    | query duration   | wait_event_type | wait_event     | \nquery 2017-12-22 03:16:01.181014 | \nactive | 00:00:00.535309 | 00:00:00.524729  | LWLockTranche   | \nbuffer_mapping | INSERT INTO table.. 2017-12-22\n 03:16:00.65814  | active | 00:00:00.012435 | 00:00:00.001855  | \nLWLockTranche   | buffer_mapping | INSERT INTO table..3. 
Wait event is \"SerializablePredicateLockListLock\" \n(I believe the same root cause as previous case)4. No any locks catched, but ~39 other backends in \nparallel are active  I assumed that it can be somehow related to enabled \nNUMA, but it looks like memory is allocated evenly, zone_reclaim_mode is\n 0.numactl --hardwareavailable: 2 nodes (0-1)node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 \n30 32 34 36 38 40 42 44 46node 0 \nsize: 130978 MBnode 0 free: 1251 MBnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 \n31 33 35 37 39 41 43 45 47node 1 \nsize: 65536 MBnode 1 free: 42 MBnode distances:node \n  0   1   0:  10  21   1:  21  10   numastat -m Per-node system memory usage (in MBs):                          Node 0          Node 1     \n      Total                \n --------------- --------------- ---------------MemTotal               130978.34        65536.00      \n 196514.34MemFree                  \n1479.07          212.12         1691.20MemUsed                129499.27        65323.88      \n 194823.14Active                  \n72241.16        37254.56       109495.73Inactive                47936.24        24205.40       \n 72141.64Active(anon)            \n21162.41        18978.96        40141.37Inactive(anon)           1061.94         7522.34       \n  8584.27Active(file)            \n51078.76        18275.60        69354.36Inactive(file)          46874.30        16683.06       \n 63557.36Unevictable                \n 0.00            0.00            0.00Mlocked \n                    0.00            0.00            0.00Dirty                       0.04            0.02     \n       0.05Writeback                 \n  0.00            0.00            0.00FilePages              116511.36        60923.16      \n 177434.52Mapped                  \n16507.29        23912.82        40420.11AnonPages                3661.55          530.26       \n  4191.81Shmem                  \n 18558.28        25964.74        44523.02KernelStack                16.98            5.77       \n    22.75PageTables              \n 3943.56         1022.25         4965.81NFS_Unstable                0.00            0.00       \n     0.00Bounce                     \n 0.00            0.00            0.00WritebackTmp \n               0.00            0.00            0.00Slab                     2256.09         1291.53       \n  3547.61SReclaimable            \n 2108.29          889.85         2998.14SUnreclaim                147.80          401.68       \n   549.47AnonHugePages            \n1824.00          284.00         2108.00HugePages_Total             0.00            0.00       \n     0.00HugePages_Free             \n 0.00            0.00            0.00HugePages_Surp \n             0.00            0.00            0.00 $ cat \n/proc/62679/numa_maps | grep N0 | grep zero7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) \ndirty=8419116 mapmax=154 active=8193350 N0=3890534 N1=4528582 \nkernelpagesize_kB=4 Could you advise what can cause such occasional long \ninserts with long-lasting LWlocks?", "msg_date": "Fri, 16 Mar 2018 09:42:34 -0400", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 9.6 Slow inserts with long-lasting LWLocks" }, { "msg_contents": "Michael, thanks for your answer. \nLooks like it is not my case because issue is reproducible also for table with 100% single writer backend. Also as it was mentioned whole system gets hung. 
\n\nRegards,\nPavel Suderevsky\n\nFrom: MichaelDBA\nSent: Friday, March 16, 2018 16:42\nTo: Pavel Suderevsky\nCc: [email protected]\nSubject: Re: PG 9.6 Slow inserts with long-lasting LWLocks\n\nSporadic insert slowness could be due to lock delays (locktype=extend) due to many concurrent connections trying to insert into the same table at the same time. Each insert request may result in an extend lock (8k extension), which blocks other writers. What normally happens is the these extend locks happen so fast that you hardly ever see them in the pg_locks table, except in the case where many concurrent connections are trying to do inserts into the same table. The following query will show if this is the case if you execute it during the time the problem is occurring.\nselect * from pg_locks where granted = false and locktype = 'extend';\n\nI don't know if this is your particular problem, but perhaps it is.\n\nRegards,\nMichael Vitale\n\nPavel Suderevsky\nThursday, March 15, 2018 6:29 AM\nHi, \n \nWell, unfortunately I still need community help.\n \n-- Environment\nOS: Centos CentOS Linux release 7.2.1511\nKernel:  3.10.0-327.36.3.el7.x86_64\nPostgreSQL: 9.6.3\n-- Hardware\nServer: Dell PowerEdge R430\nCPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\nRaid controller: PERC H730 Mini (1GB cache)\nDisks: 8 x 10K RPM SAS 12GB/s 2.5 (ST1200MM0088) in RAID 6\nRAM: 192GB (M393A2G40DB0-CPB x 16)\nFor more detailed hardware info please see attached configuration.txt\n-- postgresql.conf\nmax_connections = 2048\nshared_buffers = 48GB\ntemp_buffers = 128MB\nwork_mem = 256MB\nmaintenance_work_mem = 512MB\ndynamic_shared_memory_type = posix\nwal_level = hot_standby\nmin_wal_size = 4GB\nmax_wal_size = 32GB\nhuge_pages = on\n+\nnumactl interleave=all\n-- sysctl.conf \nkernel.shmmax=64424509440\nkernel.shmall=4294967296\nkernel.sem = 1024 32767 128 16384\nfs.aio-max-nr=3145728\nfs.file-max = 6815744\nnet.core.rmem_default=262144\nnet.core.rmem_max=4194304\nnet.core.wmem_default=262144\nnet.core.wmem_max=1048586\nvm.nr_hugepages=33000\nvm.dirty_background_bytes=67108864\nvm.dirty_bytes=536870912\nvm.min_free_kbytes=1048576\nzone_reclaim_mode=0\n \nAgain: problem is the occasional long inserts that can happen 1-5 times per day on OLTP system.\nNo autovacuum performed during long inserts. WAL rate is 1-2Gb per hour, no correlation spotted with this issue.\nWait event \"buffer_mapping\" happen for appropriate transactions but not every time (maybe just not every time catched).\nI have two suspects for such behaviour: I/O system and high concurrency.\nThere is a problem with one application that frequently recreates up to 90 sessions but investigation shows that there is no direct correlation between such sessions and long transactions, at least it is not the root cause of the issue (of course such app behaviour will be fixed).\n \nThe investigation and tracing with strace in particular showed that:\n1. The only long event straced from postgres backends was <... semop resumed>.\n2. Seems the whole host gets hung during such events. 
\n \nExample:\nJava application located on separate host reports several long transactions:\n123336.943 - [1239588mks]: event.insert-table\n123336.943 - [1240827mks]: event.insert-table\n123337.019 - [1292534mks]: event.insert-table\n143353.542 - [5467657mks]: event.insert-table\n143353.543 - [5468884mks]: event.insert-table\n152338.763 - [1264588mks]: event.insert-table\n152338.765 - [2054887mks]: event.insert-table\n \nStrace output for event happened at 14:33 with particular known pid:\n119971 14:33:48.075375 epoll_wait(3,  <unfinished ...>\n119971 14:33:48.075696 <... epoll_wait resumed> {{EPOLLIN, {u32=27532016, u64=27532016}}}, 1, -1) = 1 <0.000313>\n119971 14:33:48.075792 recvfrom(9,  <unfinished ...>\n119971 14:33:48.075866 <... recvfrom resumed> \"B\\0\\0\\3\\27\\0S_21\\0\\0*\\0\\1\\0\\1\\0\\1\\0\\0\\0\\0\\0\\1\\0\\1\\0\\0\\0\\0\\0\"..., 8192, 0, NULL, NULL) = 807 <0.000066>\n119971 14:33:48.076243 semop(26706044, {{8, -1, 0}}, 1 <unfinished ...>\n120019 14:33:48.119971 recvfrom(9,  <unfinished ...>\n119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772> \n119971 14:33:53.500356 lseek(18, 0, SEEK_END <unfinished ...>\n119971 14:33:53.500436 <... lseek resumed> ) = 107790336 <0.000072>\n119971 14:33:53.500514 lseek(20, 0, SEEK_END <unfinished ...>\n \nChecking strace long semop calls for whole day:\nroot@host [20180314 17:47:36]:/home/user$ egrep \" <[1-9].\" /tmp/strace | grep semop\n119991 12:33:36 <... semop resumed> )   = 0 <1.419394>\n119942 12:33:36 <... semop resumed> )   = 0 <1.422554>\n119930 12:33:36 <... semop resumed> )   = 0 <1.414916>\n119988 12:33:36 <... semop resumed> )   = 0 <1.213309>\n119966 12:33:36 <... semop resumed> )   = 0 <1.237492>\n119958 14:33:53.489398 <... semop resumed> ) = 0 <5.455830>\n120019 14:33:53.490613 <... semop resumed> ) = 0 <5.284505>\n119997 14:33:53.490638 <... semop resumed> ) = 0 <5.111661>\n120000 14:33:53.490649 <... semop resumed> ) = 0 <3.521992>\n119991 14:33:53.490660 <... semop resumed> ) = 0 <2.522460>\n119988 14:33:53.490670 <... semop resumed> ) = 0 <5.252485>\n120044 14:33:53.490834 <... semop resumed> ) = 0 <1.718129>\n119976 14:33:53.490852 <... semop resumed> ) = 0 <2.489563>\n119974 14:33:53.490862 <... semop resumed> ) = 0 <1.520801>\n119984 14:33:53.491011 <... semop resumed> ) = 0 <1.213411>\n119971 14:33:53.491029 <... semop resumed> ) = 0 <5.414772>\n119969 14:33:53.491039 <... semop resumed> ) = 0 <2.275608>\n119966 14:33:53.491048 <... semop resumed> ) = 0 <2.526024>\n119942 14:33:53.491058 <... semop resumed> ) = 0 <5.448506>\n119964 15:23:38.746394 <... semop resumed> ) = 0 <2.034851>\n119960 15:23:38.746426 <... semop resumed> ) = 0 <2.038321>\n119966 15:23:38.752646 <... semop resumed> ) = 0 <1.252342>\n \nAlso it was spotted that WALWriter Postgres backend also spend time in <semop resumed> during hangs.\n \nAlso I have application on db host that performs pg_stat_activity shapshots every 500m and for example I can see that there were no snapshot between 14:33:47 and 14:33:53.\nSeparate simple script on db host every ~100ms checks ps output for this application and writes it into the txt file. And we can see that while it usually performs about 7-8 times per second, between 14:33:47 and 14:33:53 it couldn't even perform enough ps calls. Strace for this backend showed that this process was hung in semop call. 
So it tells me that whole system gets hung.\n14:33:40 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:41 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:42 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:43 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:44 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:45 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:46 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:47 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:48 TOTAL=3 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:49 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:50 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:51 TOTAL=2 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:52 TOTAL=4 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:53 TOTAL=6 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:54 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n14:33:55 TOTAL=7 wait_transaction_locked=0 sleep_on_page=0 get_request=0 sleep_on_buffer=0\n \nI understand that RAID-6 is not the best option, but I can't catch any evidence telling that system run out of 1GB RAID controller cache on writes.\n \nPlease assist in understanding meaning and nature of long semop calls appearances.\n \n--\nRegards,\nPavel Suderevsky\n \n \n \nFrom: Pavel Suderevsky\nSent: Monday, January 22, 2018 21:43\nTo: [email protected]\nSubject: PG 9.6 Slow inserts with long-lasting LWLocks\n \nHi,\n \nI have an issue with sporadic slow insert operations with query duration more than 1 sec while it takes about 50ms in average.\n \nConfiguration:\nOS: Centos 7.2.151\nPostgreSQL: 9.6.3\nCPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz\nMemory:           total        used        free      shared     buff/cache  available\n                Mem:         193166       10324        1856      44522      180985      137444\n                Swap:             0           0           0\nStorage: Well,  about 4gb of BBU write cache.\n \nshared_buffers = 32gb\nwork_mem = 128mb\nmax_pred_locks_per_transaction = 8192\n \nThis can occur once a day or not happen for few days while system load is the same. \"Inserts\" are the prepared statement batches with 4-5 inserts.\nNeither excessive memory usage nor disk or cpu utilizations have been catched.\nWal writing rates, checkpoints, anything else from pg_stat_* tables were checked and nothing embarrassing was found.\n \nThere are several scenarious of such long inserts were spotted:\n1. No any locks catched (500ms check intervals)\n2. Wait event is \"buffer_mapping\" - looks like the most common case\n snaphot time                                                    | state  | trx duration    | query duration   | wait_event_type | wait_event     | query\n 2017-12-22 03:16:01.181014 | active | 00:00:00.535309 | 00:00:00.524729  | LWLockTranche   | buffer_mapping | INSERT INTO table..\n 2017-12-22 03:16:00.65814  | active | 00:00:00.012435 | 00:00:00.001855  | LWLockTranche   | buffer_mapping | INSERT INTO table..\n3. 
Wait event is \"SerializablePredicateLockListLock\" (I believe the same root cause as previous case)\n4. No any locks catched, but ~39 other backends in parallel are active \n \nI assumed that it can be somehow related to enabled NUMA, but it looks like memory is allocated evenly, zone_reclaim_mode is 0.\nnumactl --hardware\navailable: 2 nodes (0-1)\nnode 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46\nnode 0 size: 130978 MB\nnode 0 free: 1251 MB\nnode 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47\nnode 1 size: 65536 MB\nnode 1 free: 42 MB\nnode distances:\nnode   0   1 \n  0:  10  21 \n  1:  21  10 \n  \nnumastat -m\n \nPer-node system memory usage (in MBs):\n                          Node 0          Node 1           Total\n                 --------------- --------------- ---------------\nMemTotal               130978.34        65536.00       196514.34\nMemFree                  1479.07          212.12         1691.20\nMemUsed                129499.27        65323.88       194823.14\nActive                  72241.16        37254.56       109495.73\nInactive                47936.24        24205.40        72141.64\nActive(anon)            21162.41        18978.96        40141.37\nInactive(anon)           1061.94         7522.34         8584.27\nActive(file)            51078.76        18275.60        69354.36\nInactive(file)          46874.30        16683.06        63557.36\nUnevictable                 0.00            0.00            0.00\nMlocked                     0.00            0.00            0.00\nDirty                       0.04            0.02            0.05\nWriteback                   0.00            0.00            0.00\nFilePages              116511.36        60923.16       177434.52\nMapped                  16507.29        23912.82        40420.11\nAnonPages                3661.55          530.26         4191.81\nShmem                   18558.28        25964.74        44523.02\nKernelStack                16.98            5.77           22.75\nPageTables               3943.56         1022.25         4965.81\nNFS_Unstable                0.00            0.00            0.00\nBounce                      0.00            0.00            0.00\nWritebackTmp                0.00            0.00            0.00\nSlab                     2256.09         1291.53         3547.61\nSReclaimable             2108.29          889.85         2998.14\nSUnreclaim                147.80          401.68          549.47\nAnonHugePages            1824.00          284.00         2108.00\nHugePages_Total             0.00            0.00            0.00\nHugePages_Free              0.00            0.00            0.00\nHugePages_Surp              0.00            0.00            0.00\n \n$ cat /proc/62679/numa_maps | grep N0 | grep zero\n7f92509d3000 prefer:0 file=/dev/zero\\040(deleted) dirty=8419116 mapmax=154 active=8193350 N0=3890534 N1=4528582 kernelpagesize_kB=4\n \nCould you advise what can cause such occasional long inserts with long-lasting LWlocks?\n \n \n\n\n\nMichael, thanks for your answer. Looks like it is not my case because issue is reproducible also for table with 100% single writer backend. Also as it was mentioned whole system gets hung.  
", "msg_date": "Mon, 19 Mar 2018 20:26:32 +0300", "msg_from": "Pavel Suderevsky <[email protected]>", "msg_from_op": true, "msg_subject": "RE: PG 9.6 Slow inserts with long-lasting LWLocks" } ]
[ { "msg_contents": "Hello all,\n\nSo I have a view, for which I can select all rows in about 3s (returns ~80k\nrows), but if I add a where clause on a column, it takes +300s to return\nthe ~8k lines.\n\n From the plan, I see that it expects to return only 1 row and so choose to\nperform some nested loops. Of course, I did run \"ANALYZE\", but with no\nsuccess.\n\nI managed to speed things up with \"set enable_nestloop = false;\", but is\nthat the only choice I have ? Should I report a bug ?\n\nThe view is this :\n\nCREATE VIEW export_contract_par_region AS\nSELECT\n contractLine.id as id_contrat,\n partner.id as id_partner,\n partner.name,\n title.name AS contact_civ,\n mc.name AS contact_nom,\n mc.first_name AS contact_prenom,\n (CASE WHEN is_physique(partner.person_category_select) THEN\ncoalesce(mc.email,mc.email_pro) ELSE coalesce(mc.email_pro,mc.email) END)\nAS contact_email,\n (CASE WHEN is_physique(partner.person_category_select)\n THEN concat_ws('/',mc.fixed_phone1,mc.mobile_phone_perso)\n ELSE concat_ws('/',mc.fixed_phone_pro,mc.mobile_phone_pro)\nEND) AS contact_phones,\n adr_contact.addressl2 AS contact_addressl2,\n adr_contact.addressl3 AS contact_addressl3,\n adr_contact.addressl4num AS contact_addressl4num,\n adr_contact.addressl4street AS contact_addressl4street,\n adr_contact.addressl5 AS contact_addressl5,\n adr_contact.addressl6zip AS contact_addressl6zip,\n adr_contact.addressl6city AS contact_addressl6city,\n coalesce(npai.moved_ok,false) AS npai,\n coalesce(mc.address,mc.address_pro) IS NULL AS sans_adresse,\n amendment.user_sub_segment_select as type_select,\n UserSegment.code as user_segment,\n contractLine.real_start_date AS date_mise_en_service,\n to_char(contractLine.real_start_date,'YYYY/MM') AS datemes_yyyymm,\n (ws.created_on::date) AS date_souscription,\n status.name AS statut,\n power.first AS subscribed_power,\n a.addressl2 AS pdl_addressl2,\n a.addressl3 AS pdl_addressl3,\n a.addressl4num AS pdl_addressl4num,\n a.addressl4street AS pdl_addressl4street,\n a.addressl5 AS pdl_addressl5,\n a.addressl6zip AS pdl_adressel6zip,\n a.addressl6city AS pdl_adressel6city,\n a.dept AS pdl_code_dept,\n a.dept_name AS pdl_nom_dept,\n a.region_code AS pdl_code_region,\n a.region AS pdl_nom_region,\n businessProvider.business_provider_code AS codeCoop,\n soc.soc AS company_societaire,\n co.code AS connu_enercoop,\n ClientNature.name as segment_client,\n to_char(ws.created_on,'YYYY') as annee_souscription,\n to_char(ws.created_on,'MM') as mois_souscription,\n mesProductSubFamily.name as type_mes\n FROM contract_contract_line contractLine\n JOIN contract_contract contract on contractLine.contract = contract.id\n JOIN contact_partner partner on partner.id =\ncontract.main_client_partner\n JOIN contact_partner businessProvider on businessProvider.id =\ncontractLine.business_provider_partner\n LEFT JOIN contact_client_nature ClientNature on ClientNature.id =\npartner.client_nature\n JOIN contract_amendment amendment on contractLine.amendment =\namendment.id\n JOIN territory_mpt mpt on contractLine.mpt = mpt.id\n LEFT JOIN subscribed_power power ON power.amendment = amendment.id\n LEFT JOIN contract_user_segment UserSegment ON UserSegment.id =\namendment.user_segment\n LEFT JOIN contact_company company on company.id = contract.company\n LEFT JOIN address a on mpt.address = a.id\n LEFT JOIN administration_status status ON status.id =\ncontractLine.status\n LEFT JOIN shareholder_summary soc ON soc.partner = partner.id\n LEFT JOIN shareholder_web_subscription ws ON ws.contract_line 
=\ncontractLine.id\n LEFT JOIN crm_origin co ON co.id = ws.how_meet_enercoop\n LEFT JOIN contact_contact mc ON partner.main_contact = mc.id\n LEFT JOIN contact_title title ON mc.title = title.id\n LEFT JOIN contact_address adr_contact ON adr_contact.id = (CASE WHEN\nis_physique(partner.person_category_select) THEN\ncoalesce(mc.address,mc.address_pro) ELSE\ncoalesce(mc.address_pro,mc.address) END)\n LEFT JOIN contact_contact_address cca ON cca.contact = mc.id AND\ncca.address = adr_contact.id\n LEFT JOIN contact_contact_address_status npai ON\ncca.contact_address_status = npai.id\n LEFT JOIN crm_crm_request mesRequest ON\nmesRequest.original_contract_line = contractLine.id\n LEFT JOIN sale_product_sub_family mesProductSubFamily ON\nmesProductSubFamily.id = mesRequest.product_sub_family AND\nmesProductSubFamily.new_contract_ok is true\n ORDER BY subscribed_power DESC, statut,id_contrat;\n\nAnd the query is : select * from export_contract_par_region where codecoop\n= 'BRZH';\n\nHere is the default plan :\n\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=39200.76..39200.76 rows=1 width=1066) (actual\ntime=341273.300..341274.244 rows=7359 loops=1)\n Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\nstatus.name, contractline.id\n Sort Method: quicksort Memory: 3930kB\n -> Nested Loop Left Join (cost=32069.19..39200.75 rows=1 width=1066)\n(actual time=342.806..341203.151 rows=7359 loops=1)\n -> Nested Loop Left Join (cost=32069.05..39200.50 rows=1\nwidth=508) (actual time=342.784..341102.848 rows=7359 loops=1)\n -> Nested Loop Left Join (cost=32068.77..39200.20 rows=1\nwidth=500) (actual time=342.778..341070.310 rows=7359 loops=1)\n -> Nested Loop Left Join (cost=32068.64..39200.04\nrows=1 width=507) (actual time=342.776..341058.256 rows=7359 loops=1)\n Join Filter: (cca.address = adr_contact.id)\n Rows Removed by Join Filter: 2254\n -> Nested Loop Left Join\n(cost=32068.22..39199.55 rows=1 width=515) (actual time=342.767..340997.058\nrows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32067.79..39198.84 rows=1 width=447) (actual time=342.753..340932.286\nrows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32067.65..39198.67 rows=1 width=421) (actual time=342.748..340896.132\nrows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32067.23..39198.01 rows=1 width=279) (actual time=342.739..340821.987\nrows=7359 loops=1)\n -> Nested Loop Left\nJoin (cost=32067.09..39197.85 rows=1 width=276) (actual\ntime=342.725..340775.031 rows=7359 loops=1)\n Join Filter:\n(sh.share_holder_partner = partner.id)\n Rows Removed by\nJoin Filter: 204915707\n -> Nested Loop\nLeft Join (cost=28514.61..34092.46 rows=1 width=244) (actual\ntime=287.323..610.192 rows=7359 loops=1)\n -> Nested\nLoop Left Join (cost=28514.47..34092.30 rows=1 width=239) (actual\ntime=287.318..573.234 rows=7359 loops=1)\n ->\nHash Right Join (cost=28513.48..34090.65 rows=1 width=159) (actual\ntime=287.293..379.564 rows=7359 loops=1)\n\nHash Cond: (ws.contract_line = contractline.id)\n\n-> Seq Scan on shareholder_web_subscription ws (cost=0.00..5378.84\nrows=52884 width=24) (actual time=0.006..12.307 rows=52884 loops=1)\n\n-> Hash (cost=28513.47..28513.47 rows=1 width=143) 
(actual\ntime=287.243..287.243 rows=7359 loops=1)\n\nBuckets: 8192 (originally 1024) Batches: 1 (originally 1) Memory Usage:\n1173kB\n\n-> Nested Loop Left Join (cost=17456.16..28513.47 rows=1 width=143)\n(actual time=85.005..284.689 rows=7359 loops=1)\n\n-> Nested Loop (cost=17456.03..28513.31 rows=1 width=148) (actual\ntime=85.000..276.599 rows=7359 loops=1)\n\n-> Nested Loop Left Join (cost=17455.73..28512.84 rows=1 width=148)\n(actual time=84.993..261.954 rows=7359 loops=1)\n\n-> Nested Loop (cost=17455.60..28512.67 rows=1 width=140) (actual\ntime=84.989..253.715 rows=7359 loops=1)\n\n-> Nested Loop (cost=17455.18..28511.93 rows=1 width=93) (actual\ntime=84.981..230.977 rows=7359 loops=1)\n\n-> Merge Right Join (cost=17454.89..28511.52 rows=1 width=93) (actual\ntime=84.974..211.200 rows=7359 loops=1)\n\nMerge Cond: (subscribed_power.amendment = amendment.id)\n\n-> GroupAggregate (cost=12457.78..22574.03 rows=75229 width=168) (actual\ntime=57.500..175.674 rows=83432 loops=1)\n\nGroup Key: subscribed_power.amendment\n\n-> Merge Join (cost=12457.78..20764.08 rows=173917 width=12) (actual\ntime=57.479..129.530 rows=87938 loops=1)\n\nMerge Cond: (subscribed_power.amendment = amendment_1.id)\n\n-> Index Scan using contract_subscribed_power_amendment_idx on\ncontract_subscribed_power subscribed_power (cost=0.42..13523.09\nrows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)\n\n-> Sort (cost=12457.36..12666.43 rows=83629 width=8) (actual\ntime=57.467..67.071 rows=88019 loops=1)\n\nSort Key: amendment_1.id\n\nSort Method: quicksort Memory: 6988kB\n\n-> Hash Join (cost=10.21..5619.97 rows=83629 width=8) (actual\ntime=0.112..40.965 rows=83532 loops=1)\n\nHash Cond: (amendment_1.pricing = pricing.id)\n\n-> Seq Scan on contract_amendment amendment_1 (cost=0.00..4460.29\nrows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)\n\n-> Hash (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095\nrows=141 loops=1)\n\nBuckets: 1024 Batches: 1 Memory Usage: 14kB\n\n-> Hash Join (cost=1.07..8.43 rows=142 width=8) (actual time=0.012..0.078\nrows=141 loops=1)\n\nHash Cond: (pricing.elec_range = elec_range.id)\n\n-> Seq Scan on pricing_pricing pricing (cost=0.00..5.42 rows=142\nwidth=16) (actual time=0.003..0.015 rows=142 loops=1)\n\n-> Hash (cost=1.03..1.03 rows=3 width=8) (actual time=0.006..0.006 rows=3\nloops=1)\n\nBuckets: 1024 Batches: 1 Memory Usage: 9kB\n\n-> Seq Scan on fluid_elec_range elec_range (cost=0.00..1.03 rows=3\nwidth=8) (actual time=0.003..0.005 rows=3 loops=1)\n\n-> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual\ntime=27.427..28.896 rows=7359 loops=1)\n\nSort Key: amendment.id\n\nSort Method: quicksort Memory: 1227kB\n\n-> Nested Loop (cost=183.44..4997.10 rows=1 width=69) (actual\ntime=1.115..24.616 rows=7359 loops=1)\n\n-> Nested Loop (cost=183.15..4996.59 rows=1 width=49) (actual\ntime=1.107..9.091 rows=7360 loops=1)\n\n-> Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner\nbusinessprovider (cost=0.42..8.44 rows=1 width=13) (actual\ntime=0.010..0.010 rows=1 loops=1)\n\nIndex Cond: ((business_provider_code)::text = 'BRZH'::text)\n\n-> Bitmap Heap Scan on contract_contract_line contractline\n(cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231\nrows=7360 loops=1)\n\nRecheck Cond: (business_provider_partner = businessprovider.id)\n\nHeap Blocks: exact=3586\n\n-> Bitmap Index Scan on\ncontract_contract_line_business_provider_partner_idx (cost=0.00..180.72\nrows=8057 width=0) (actual time=0.655..0.655 rows=7360 
loops=1)\n\nIndex Cond: (business_provider_partner = businessprovider.id)\n\n-> Index Scan using contract_amendment_pkey on contract_amendment\namendment (cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002\nrows=1 loops=7360)\n\nIndex Cond: (id = contractline.amendment)\n\n-> Index Scan using contract_contract_pkey on contract_contract contract\n(cost=0.29..0.40 rows=1 width=24) (actual time=0.002..0.002 rows=1\nloops=7359)\n\nIndex Cond: (id = contractline.contract)\n\n-> Index Scan using contact_partner_pkey on contact_partner partner\n(cost=0.42..0.74 rows=1 width=55) (actual time=0.002..0.002 rows=1\nloops=7359)\n\nIndex Cond: (id = contract.main_client_partner)\n\n-> Index Scan using contact_client_nature_pkey on contact_client_nature\nclientnature (cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001\nrows=1 loops=7359)\n\nIndex Cond: (id = partner.client_nature)\n\n-> Index Scan using territory_mpt_pkey on territory_mpt mpt\n(cost=0.29..0.46 rows=1 width=16) (actual time=0.001..0.001 rows=1\nloops=7359)\n\nIndex Cond: (id = contractline.mpt)\n\n-> Index Scan using contract_user_segment_pkey on contract_user_segment\nusersegment (cost=0.14..0.15 rows=1 width=11) (actual time=0.001..0.001\nrows=1 loops=7359)\n\nIndex Cond: (id = amendment.user_segment)\n ->\nNested Loop Left Join (cost=0.99..1.64 rows=1 width=96) (actual\ntime=0.021..0.025 rows=1 loops=7359)\n\n-> Nested Loop Left Join (cost=0.85..1.35 rows=1 width=89) (actual\ntime=0.017..0.020 rows=1 loops=7359)\n\n-> Nested Loop Left Join (cost=0.71..1.18 rows=1 width=76) (actual\ntime=0.013..0.014 rows=1 loops=7359)\n\n-> Index Scan using contact_address_pkey on contact_address a\n(cost=0.42..0.85 rows=1 width=84) (actual time=0.005..0.006 rows=1\nloops=7359)\n\nIndex Cond: (mpt.address = id)\n\n-> Index Scan using territory_commune_pkey on territory_commune commune\n(cost=0.29..0.32 rows=1 width=16) (actual time=0.005..0.006 rows=1\nloops=7359)\n\nIndex Cond: (a.commune = id)\n\n-> Index Scan using territory_department_pkey on territory_department\ndept (cost=0.14..0.16 rows=1 width=37) (actual time=0.003..0.004 rows=1\nloops=7359)\n\nIndex Cond: (commune.department = id)\n\n-> Index Scan using territory_region_pkey on territory_region reg\n(cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1\nloops=7359)\n\nIndex Cond: (dept.region = id)\n -> Index\nScan using administration_status_pkey on administration_status status\n(cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003 rows=1\nloops=7359)\n Index\nCond: (id = contractline.status)\n ->\nGroupAggregate (cost=3552.48..4479.27 rows=27827 width=80) (actual\ntime=0.006..44.205 rows=27846 loops=7359)\n Group Key:\nsh.share_holder_partner\n -> Sort\n(cost=3552.48..3624.85 rows=28948 width=17) (actual time=0.003..2.913\nrows=28946 loops=7359)\n Sort\nKey: sh.share_holder_partner\n Sort\nMethod: quicksort Memory: 3030kB\n ->\nHash Join (cost=2.23..1407.26 rows=28948 width=17) (actual\ntime=0.024..12.296 rows=28946 loops=1)\n\nHash Cond: (sh.company = sh_coop.id)\n\n-> Seq Scan on shareholder_share_holder sh (cost=0.00..1007.00 rows=28948\nwidth=20) (actual time=0.007..5.495 rows=28946 loops=1)\n\nFilter: (nb_share > 0)\n\nRows Removed by Filter: 1934\n\n-> Hash (cost=2.10..2.10 rows=10 width=13) (actual time=0.009..0.009\nrows=10 loops=1)\n\nBuckets: 1024 Batches: 1 Memory Usage: 9kB\n\n-> Seq Scan on contact_company sh_coop (cost=0.00..2.10 rows=10 width=13)\n(actual time=0.003..0.006 rows=10 loops=1)\n -> Index Scan using\ncrm_origin_pkey on 
crm_origin co (cost=0.14..0.16 rows=1 width=19) (actual\ntime=0.004..0.004 rows=1 loops=7359)\n Index Cond: (id =\nws.how_meet_enercoop)\n -> Index Scan using\ncontact_contact_pkey on contact_contact mc (cost=0.42..0.65 rows=1\nwidth=150) (actual time=0.007..0.008 rows=1 loops=7359)\n Index Cond:\n(partner.main_contact = id)\n -> Index Scan using\ncontact_title_pkey on contact_title title (cost=0.14..0.16 rows=1\nwidth=42) (actual time=0.003..0.003 rows=1 loops=7359)\n Index Cond: (mc.title = id)\n -> Index Scan using contact_address_pkey\non contact_address adr_contact (cost=0.43..0.70 rows=1 width=68) (actual\ntime=0.005..0.005 rows=1 loops=7359)\n Index Cond: (id = CASE WHEN (CASE\nWHEN ((partner.person_category_select)::text = 'naturalPerson'::text) THEN\n'P'::text WHEN ((partner.person_category_select)::text =\n'legalPerson'::text) THEN 'M'::text ELSE '?????'::text END = 'P'::text)\nTHEN COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\nmc.address) END)\n -> Index Scan using\ncontact_contact_address_contact_idx on contact_contact_address cca\n(cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1\nloops=7359)\n Index Cond: (contact = mc.id)\n -> Index Scan using\ncontact_contact_address_status_pkey on contact_contact_address_status npai\n(cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000 rows=0\nloops=7359)\n Index Cond: (cca.contact_address_status = id)\n -> Index Scan using\ncrm_crm_request_original_contract_line_idx on crm_crm_request mesrequest\n(cost=0.28..0.29 rows=1 width=16) (actual time=0.003..0.003 rows=0\nloops=7359)\n Index Cond: (original_contract_line = contractline.id)\n -> Index Scan using sale_product_sub_family_pkey on\nsale_product_sub_family mesproductsubfamily (cost=0.14..0.20 rows=1\nwidth=62) (actual time=0.000..0.000 rows=0 loops=7359)\n Index Cond: (id = mesrequest.product_sub_family)\n Filter: (new_contract_ok IS TRUE)\n Planning time: 21.106 ms\n Execution time: 341275.027 ms\n(118 lignes)\n\nAnd the one I get without the where clause :\n\n\nQUERY\nPLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=144636.25..144837.81 rows=80627 width=1066)\n Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\nstatus.name, contractline.id\n -> Hash Left Join (cost=130533.89..138065.56 rows=80627 width=1066)\n Hash Cond: (cca.contact_address_status = npai.id)\n -> Hash Right Join (cost=130532.78..135132.88 rows=80627\nwidth=561)\n Hash Cond: ((cca.contact = mc.id) AND (cca.address =\nadr_contact.id))\n -> Seq Scan on contact_contact_address cca\n(cost=0.00..3424.05 rows=156805 width=24)\n -> Hash (cost=129323.37..129323.37 rows=80627 width=569)\n -> Hash Left Join (cost=127873.96..129323.37\nrows=80627 width=569)\n Hash Cond: (CASE WHEN (CASE WHEN\n((partner.person_category_select)::text = 'naturalPerson'::text) THEN\n'P'::text WHEN ((partner.person_category_select)::text =\n'legalPerson'::text) THEN 'M'::text ELSE '?????'::text END = 'P'::text)\nTHEN COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\nmc.address) END = adr_contact.id)\n -> Hash Right Join (cost=114435.97..114474.41\nrows=80627 width=501)\n Hash Cond:\n(mesrequest.original_contract_line = contractline.id)\n 
-> Hash Left Join (cost=7.49..43.37\nrows=681 width=62)\n Hash Cond:\n(mesrequest.product_sub_family = mesproductsubfamily.id)\n -> Seq Scan on crm_crm_request\nmesrequest (cost=0.00..32.81 rows=681 width=16)\n -> Hash (cost=7.28..7.28 rows=17\nwidth=62)\n -> Seq Scan on\nsale_product_sub_family mesproductsubfamily (cost=0.00..7.28 rows=17\nwidth=62)\n Filter: (new_contract_ok\nIS TRUE)\n -> Hash (cost=113420.64..113420.64\nrows=80627 width=447)\n -> Hash Left Join\n(cost=98148.14..113420.64 rows=80627 width=447)\n Hash Cond: (mc.title = title.id\n)\n -> Hash Left Join\n(cost=98145.72..112484.37 rows=80627 width=421)\n Hash Cond:\n(contractline.status = status.id)\n -> Hash Left Join\n(cost=98143.30..111373.33 rows=80627 width=416)\n Hash Cond:\n(mpt.address = a.id)\n -> Hash Left\nJoin (cost=79299.88..91422.10 rows=80627 width=336)\n Hash Cond: (\ncontractline.id = ws.contract_line)\n -> Hash\nRight Join (cost=72530.89..83867.87 rows=80627 width=317)\n Hash\nCond: (mc.id = partner.main_contact)\n ->\nSeq Scan on contact_contact mc (cost=0.00..8524.65 rows=229265 width=150)\n ->\nHash (cost=71523.05..71523.05 rows=80627 width=175)\n\n-> Hash Right Join (cost=70040.37..71523.05 rows=80627 width=175)\n\nHash Cond: (sh.share_holder_partner = partner.id)\n\n-> GroupAggregate (cost=3552.48..4479.27 rows=27827 width=80)\n\nGroup Key: sh.share_holder_partner\n\n-> Sort (cost=3552.48..3624.85 rows=28948 width=17)\n\nSort Key: sh.share_holder_partner\n\n-> Hash Join (cost=2.23..1407.26 rows=28948 width=17)\n\nHash Cond: (sh.company = sh_coop.id)\n\n-> Seq Scan on shareholder_share_holder sh (cost=0.00..1007.00 rows=28948\nwidth=20)\n\nFilter: (nb_share > 0)\n\n-> Hash (cost=2.10..2.10 rows=10 width=13)\n\n-> Seq Scan on contact_company sh_coop (cost=0.00..2.10 rows=10 width=13)\n\n-> Hash (cost=65480.05..65480.05 rows=80627 width=143)\n\n-> Hash Left Join (cost=47310.33..65480.05 rows=80627 width=143)\n\nHash Cond: (amendment.user_segment = usersegment.id)\n\n-> Hash Join (cost=47309.02..64370.12 rows=80627 width=148)\n\nHash Cond: (contractline.mpt = mpt.id)\n\n-> Hash Left Join (cost=42733.67..58686.26 rows=80627 width=148)\n\nHash Cond: (partner.client_nature = clientnature.id)\n\n-> Hash Join (cost=42732.36..57971.72 rows=80627 width=140)\n\nHash Cond: (contractline.business_provider_partner = businessprovider.id)\n\n-> Hash Join (cost=35201.74..49333.07 rows=80627 width=143)\n\nHash Cond: (contractline.contract = contract.id)\n\n-> Hash Join (cost=24290.54..37313.25 rows=80627 width=96)\n\nHash Cond: (amendment.id = contractline.amendment)\n\n-> Hash Right Join (cost=17963.43..29866.37 rows=83629 width=60)\n\nHash Cond: (subscribed_power.amendment = amendment.id)\n\n-> GroupAggregate (cost=12457.78..22574.03 rows=75229 width=168)\n\nGroup Key: subscribed_power.amendment\n\n-> Merge Join (cost=12457.78..20764.08 rows=173917 width=12)\n\nMerge Cond: (subscribed_power.amendment = amendment_1.id)\n\n-> Index Scan using contract_subscribed_power_amendment_idx on\ncontract_subscribed_power subscribed_power (cost=0.42..13523.09\nrows=173917 width=12)\n\n-> Sort (cost=12457.36..12666.43 rows=83629 width=8)\n\nSort Key: amendment_1.id\n\n-> Hash Join (cost=10.21..5619.97 rows=83629 width=8)\n\nHash Cond: (amendment_1.pricing = pricing.id)\n\n-> Seq Scan on contract_amendment amendment_1 (cost=0.00..4460.29\nrows=83629 width=16)\n\n-> Hash (cost=8.43..8.43 rows=142 width=8)\n\n-> Hash Join (cost=1.07..8.43 rows=142 width=8)\n\nHash Cond: (pricing.elec_range = elec_range.id)\n\n-> Seq Scan on 
pricing_pricing pricing (cost=0.00..5.42 rows=142 width=16)\n\n-> Hash (cost=1.03..1.03 rows=3 width=8)\n\n-> Seq Scan on fluid_elec_range elec_range (cost=0.00..1.03 rows=3\nwidth=8)\n\n-> Hash (cost=4460.29..4460.29 rows=83629 width=28)\n\n-> Seq Scan on contract_amendment amendment (cost=0.00..4460.29\nrows=83629 width=28)\n\n-> Hash (cost=5319.27..5319.27 rows=80627 width=52)\n\n-> Seq Scan on contract_contract_line contractline (cost=0.00..5319.27\nrows=80627 width=52)\n\n-> Hash (cost=10091.85..10091.85 rows=65548 width=63)\n\n-> Hash Join (cost=3038.83..10091.85 rows=65548 width=63)\n\nHash Cond: (partner.id = contract.main_client_partner)\n\n-> Seq Scan on contact_partner partner (cost=0.00..5911.94 rows=129494\nwidth=55)\n\n-> Hash (cost=2219.48..2219.48 rows=65548 width=24)\n\n-> Seq Scan on contract_contract contract (cost=0.00..2219.48 rows=65548\nwidth=24)\n\n-> Hash (cost=5911.94..5911.94 rows=129494 width=13)\n\n-> Seq Scan on contact_partner businessprovider (cost=0.00..5911.94\nrows=129494 width=13)\n\n-> Hash (cost=1.14..1.14 rows=14 width=24)\n\n-> Seq Scan on contact_client_nature clientnature (cost=0.00..1.14\nrows=14 width=24)\n\n-> Hash (cost=3602.93..3602.93 rows=77793 width=16)\n\n-> Seq Scan on territory_mpt mpt (cost=0.00..3602.93 rows=77793 width=16)\n\n-> Hash (cost=1.14..1.14 rows=14 width=11)\n\n-> Seq Scan on contract_user_segment usersegment (cost=0.00..1.14 rows=14\nwidth=11)\n -> Hash\n(cost=6107.94..6107.94 rows=52884 width=27)\n ->\nHash Left Join (cost=1.94..6107.94 rows=52884 width=27)\n\nHash Cond: (ws.how_meet_enercoop = co.id)\n\n-> Seq Scan on shareholder_web_subscription ws (cost=0.00..5378.84\nrows=52884 width=24)\n\n-> Hash (cost=1.42..1.42 rows=42 width=19)\n\n-> Seq Scan on crm_origin co (cost=0.00..1.42 rows=42 width=19)\n -> Hash\n(cost=15431.77..15431.77 rows=272933 width=96)\n -> Hash\nLeft Join (cost=2101.31..15431.77 rows=272933 width=96)\n Hash\nCond: (a.commune = commune.id)\n ->\nSeq Scan on contact_address a (cost=0.00..10026.33 rows=272933 width=84)\n ->\nHash (cost=1641.83..1641.83 rows=36758 width=36)\n\n-> Hash Left Join (cost=7.27..1641.83 rows=36758 width=36)\n\nHash Cond: (commune.department = dept.id)\n\n-> Seq Scan on territory_commune commune (cost=0.00..1129.58 rows=36758\nwidth=16)\n\n-> Hash (cost=6.01..6.01 rows=101 width=36)\n\n-> Hash Left Join (cost=1.61..6.01 rows=101 width=36)\n\nHash Cond: (dept.region = reg.id)\n\n-> Seq Scan on territory_department dept (cost=0.00..3.01 rows=101\nwidth=37)\n\n-> Hash (cost=1.27..1.27 rows=27 width=23)\n\n-> Seq Scan on territory_region reg (cost=0.00..1.27 rows=27 width=23)\n -> Hash\n(cost=1.63..1.63 rows=63 width=21)\n -> Seq Scan on\nadministration_status status (cost=0.00..1.63 rows=63 width=21)\n -> Hash (cost=1.63..1.63\nrows=63 width=42)\n -> Seq Scan on\ncontact_title title (cost=0.00..1.63 rows=63 width=42)\n -> Hash (cost=10026.33..10026.33 rows=272933\nwidth=68)\n -> Seq Scan on contact_address\nadr_contact (cost=0.00..10026.33 rows=272933 width=68)\n -> Hash (cost=1.05..1.05 rows=5 width=9)\n -> Seq Scan on contact_contact_address_status npai\n(cost=0.00..1.05 rows=5 width=9)\n(120 lignes)\n\n\n\n--\nhttp://www.laurentmartelli.com // http://www.imprimart.fr\n\nHello all,So I have a view, for which I can select all rows in about 3s (returns ~80k rows), but if I add a where clause on a column, it takes +300s to return the ~8k lines.From the plan, I see that it expects to return only 1 row and so choose to perform some nested loops. 
Of course, I did run \"ANALYZE\", but with no success. I managed to speed things up with \"set enable_nestloop = false;\", but is that the only choice I have ? Should I report a bug ?The view is this : CREATE VIEW export_contract_par_region ASSELECT    contractLine.id as id_contrat,    partner.id as id_partner,    partner.name,    title.name AS contact_civ,    mc.name AS contact_nom,    mc.first_name AS contact_prenom,    (CASE WHEN is_physique(partner.person_category_select) THEN coalesce(mc.email,mc.email_pro) ELSE coalesce(mc.email_pro,mc.email) END) AS contact_email,    (CASE WHEN is_physique(partner.person_category_select)               THEN concat_ws('/',mc.fixed_phone1,mc.mobile_phone_perso)               ELSE concat_ws('/',mc.fixed_phone_pro,mc.mobile_phone_pro) END) AS contact_phones,    adr_contact.addressl2 AS contact_addressl2,    adr_contact.addressl3 AS contact_addressl3,    adr_contact.addressl4num AS contact_addressl4num,    adr_contact.addressl4street AS contact_addressl4street,    adr_contact.addressl5 AS contact_addressl5,    adr_contact.addressl6zip AS contact_addressl6zip,    adr_contact.addressl6city AS contact_addressl6city,    coalesce(npai.moved_ok,false) AS npai,    coalesce(mc.address,mc.address_pro) IS NULL AS sans_adresse,    amendment.user_sub_segment_select as type_select,    UserSegment.code as user_segment,    contractLine.real_start_date AS date_mise_en_service,    to_char(contractLine.real_start_date,'YYYY/MM') AS datemes_yyyymm,    (ws.created_on::date) AS date_souscription,    status.name AS statut,    power.first AS subscribed_power,    a.addressl2 AS pdl_addressl2,    a.addressl3 AS pdl_addressl3,    a.addressl4num AS pdl_addressl4num,    a.addressl4street AS pdl_addressl4street,    a.addressl5 AS pdl_addressl5,    a.addressl6zip AS pdl_adressel6zip,    a.addressl6city AS pdl_adressel6city,    a.dept AS pdl_code_dept,    a.dept_name AS pdl_nom_dept,    a.region_code AS pdl_code_region,    a.region AS pdl_nom_region,    businessProvider.business_provider_code AS codeCoop,    soc.soc AS company_societaire,    co.code AS connu_enercoop,    ClientNature.name as segment_client,    to_char(ws.created_on,'YYYY') as annee_souscription,    to_char(ws.created_on,'MM') as mois_souscription,    mesProductSubFamily.name as type_mes    FROM contract_contract_line contractLine    JOIN contract_contract contract on contractLine.contract = contract.id    JOIN contact_partner partner on partner.id = contract.main_client_partner    JOIN contact_partner businessProvider on businessProvider.id = contractLine.business_provider_partner    LEFT JOIN contact_client_nature ClientNature on ClientNature.id = partner.client_nature    JOIN contract_amendment amendment on contractLine.amendment = amendment.id    JOIN territory_mpt mpt on contractLine.mpt = mpt.id    LEFT JOIN subscribed_power power ON power.amendment = amendment.id    LEFT JOIN contract_user_segment UserSegment ON UserSegment.id = amendment.user_segment    LEFT JOIN contact_company company on company.id = contract.company    LEFT JOIN address a on mpt.address = a.id    LEFT JOIN administration_status status ON status.id = contractLine.status    LEFT JOIN shareholder_summary soc ON soc.partner = partner.id    LEFT JOIN shareholder_web_subscription ws ON ws.contract_line = contractLine.id    LEFT JOIN crm_origin co ON co.id = ws.how_meet_enercoop    LEFT JOIN contact_contact mc ON partner.main_contact = mc.id    LEFT JOIN contact_title title ON mc.title = title.id    LEFT JOIN contact_address adr_contact ON 
adr_contact.id = (CASE WHEN is_physique(partner.person_category_select) THEN coalesce(mc.address,mc.address_pro) ELSE coalesce(mc.address_pro,mc.address) END)    LEFT JOIN contact_contact_address cca ON cca.contact = mc.id AND cca.address = adr_contact.id    LEFT JOIN contact_contact_address_status npai ON cca.contact_address_status = npai.id    LEFT JOIN crm_crm_request mesRequest ON mesRequest.original_contract_line = contractLine.id    LEFT JOIN sale_product_sub_family mesProductSubFamily ON mesProductSubFamily.id = mesRequest.product_sub_family AND mesProductSubFamily.new_contract_ok is true    ORDER BY subscribed_power DESC, statut,id_contrat;And the query is : select * from export_contract_par_region where codecoop = 'BRZH';Here is the default plan :                                                                                                                                                                                   QUERY PLAN                                                                                                                                                                                  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Sort  (cost=39200.76..39200.76 rows=1 width=1066) (actual time=341273.300..341274.244 rows=7359 loops=1)   Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC, status.name, contractline.id   Sort Method: quicksort  Memory: 3930kB   ->  Nested Loop Left Join  (cost=32069.19..39200.75 rows=1 width=1066) (actual time=342.806..341203.151 rows=7359 loops=1)         ->  Nested Loop Left Join  (cost=32069.05..39200.50 rows=1 width=508) (actual time=342.784..341102.848 rows=7359 loops=1)               ->  Nested Loop Left Join  (cost=32068.77..39200.20 rows=1 width=500) (actual time=342.778..341070.310 rows=7359 loops=1)                     ->  Nested Loop Left Join  (cost=32068.64..39200.04 rows=1 width=507) (actual time=342.776..341058.256 rows=7359 loops=1)                           Join Filter: (cca.address = adr_contact.id)                           Rows Removed by Join Filter: 2254                           ->  Nested Loop Left Join  (cost=32068.22..39199.55 rows=1 width=515) (actual time=342.767..340997.058 rows=7359 loops=1)                                 ->  Nested Loop Left Join  (cost=32067.79..39198.84 rows=1 width=447) (actual time=342.753..340932.286 rows=7359 loops=1)                                       ->  Nested Loop Left Join  (cost=32067.65..39198.67 rows=1 width=421) (actual time=342.748..340896.132 rows=7359 loops=1)                                             ->  Nested Loop Left Join  (cost=32067.23..39198.01 rows=1 width=279) (actual time=342.739..340821.987 rows=7359 loops=1)                                                   ->  Nested Loop Left Join  (cost=32067.09..39197.85 rows=1 width=276) (actual time=342.725..340775.031 rows=7359 loops=1)                                                         Join Filter: (sh.share_holder_partner = partner.id)                                                         Rows Removed by Join Filter: 204915707                                                         ->  Nested Loop Left Join  (cost=28514.61..34092.46 rows=1 width=244) (actual 
time=287.323..610.192 rows=7359 loops=1)                                                               ->  Nested Loop Left Join  (cost=28514.47..34092.30 rows=1 width=239) (actual time=287.318..573.234 rows=7359 loops=1)                                                                     ->  Hash Right Join  (cost=28513.48..34090.65 rows=1 width=159) (actual time=287.293..379.564 rows=7359 loops=1)                                                                           Hash Cond: (ws.contract_line = contractline.id)                                                                           ->  Seq Scan on shareholder_web_subscription ws  (cost=0.00..5378.84 rows=52884 width=24) (actual time=0.006..12.307 rows=52884 loops=1)                                                                           ->  Hash  (cost=28513.47..28513.47 rows=1 width=143) (actual time=287.243..287.243 rows=7359 loops=1)                                                                                 Buckets: 8192 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 1173kB                                                                                 ->  Nested Loop Left Join  (cost=17456.16..28513.47 rows=1 width=143) (actual time=85.005..284.689 rows=7359 loops=1)                                                                                       ->  Nested Loop  (cost=17456.03..28513.31 rows=1 width=148) (actual time=85.000..276.599 rows=7359 loops=1)                                                                                             ->  Nested Loop Left Join  (cost=17455.73..28512.84 rows=1 width=148) (actual time=84.993..261.954 rows=7359 loops=1)                                                                                                   ->  Nested Loop  (cost=17455.60..28512.67 rows=1 width=140) (actual time=84.989..253.715 rows=7359 loops=1)                                                                                                         ->  Nested Loop  (cost=17455.18..28511.93 rows=1 width=93) (actual time=84.981..230.977 rows=7359 loops=1)                                                                                                               ->  Merge Right Join  (cost=17454.89..28511.52 rows=1 width=93) (actual time=84.974..211.200 rows=7359 loops=1)                                                                                                                     Merge Cond: (subscribed_power.amendment = amendment.id)                                                                                                                     ->  GroupAggregate  (cost=12457.78..22574.03 rows=75229 width=168) (actual time=57.500..175.674 rows=83432 loops=1)                                                                                                                           Group Key: subscribed_power.amendment                                                                                                                           ->  Merge Join  (cost=12457.78..20764.08 rows=173917 width=12) (actual time=57.479..129.530 rows=87938 loops=1)                                                                                                                                 Merge Cond: (subscribed_power.amendment = amendment_1.id)                                                                                                                                 ->  Index Scan using contract_subscribed_power_amendment_idx on contract_subscribed_power subscribed_power  (cost=0.42..13523.09 
rows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)                                                                                                                                 ->  Sort  (cost=12457.36..12666.43 rows=83629 width=8) (actual time=57.467..67.071 rows=88019 loops=1)                                                                                                                                       Sort Key: amendment_1.id                                                                                                                                       Sort Method: quicksort  Memory: 6988kB                                                                                                                                       ->  Hash Join  (cost=10.21..5619.97 rows=83629 width=8) (actual time=0.112..40.965 rows=83532 loops=1)                                                                                                                                             Hash Cond: (amendment_1.pricing = pricing.id)                                                                                                                                             ->  Seq Scan on contract_amendment amendment_1  (cost=0.00..4460.29 rows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)                                                                                                                                             ->  Hash  (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095 rows=141 loops=1)                                                                                                                                                   Buckets: 1024  Batches: 1  Memory Usage: 14kB                                                                                                                                                   ->  Hash Join  (cost=1.07..8.43 rows=142 width=8) (actual time=0.012..0.078 rows=141 loops=1)                                                                                                                                                         Hash Cond: (pricing.elec_range = elec_range.id)                                                                                                                                                         ->  Seq Scan on pricing_pricing pricing  (cost=0.00..5.42 rows=142 width=16) (actual time=0.003..0.015 rows=142 loops=1)                                                                                                                                                         ->  Hash  (cost=1.03..1.03 rows=3 width=8) (actual time=0.006..0.006 rows=3 loops=1)                                                                                                                                                               Buckets: 1024  Batches: 1  Memory Usage: 9kB                                                                                                                                                               ->  Seq Scan on fluid_elec_range elec_range  (cost=0.00..1.03 rows=3 width=8) (actual time=0.003..0.005 rows=3 loops=1)                                                                                                                     ->  Sort  (cost=4997.11..4997.11 rows=1 width=69) (actual time=27.427..28.896 rows=7359 loops=1)                                                                                                                           Sort Key: amendment.id                
                                                                                                           Sort Method: quicksort  Memory: 1227kB                                                                                                                           ->  Nested Loop  (cost=183.44..4997.10 rows=1 width=69) (actual time=1.115..24.616 rows=7359 loops=1)                                                                                                                                 ->  Nested Loop  (cost=183.15..4996.59 rows=1 width=49) (actual time=1.107..9.091 rows=7360 loops=1)                                                                                                                                       ->  Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner businessprovider  (cost=0.42..8.44 rows=1 width=13) (actual time=0.010..0.010 rows=1 loops=1)                                                                                                                                             Index Cond: ((business_provider_code)::text = 'BRZH'::text)                                                                                                                                       ->  Bitmap Heap Scan on contract_contract_line contractline  (cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231 rows=7360 loops=1)                                                                                                                                             Recheck Cond: (business_provider_partner = businessprovider.id)                                                                                                                                             Heap Blocks: exact=3586                                                                                                                                             ->  Bitmap Index Scan on contract_contract_line_business_provider_partner_idx  (cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655 rows=7360 loops=1)                                                                                                                                                   Index Cond: (business_provider_partner = businessprovider.id)                                                                                                                                 ->  Index Scan using contract_amendment_pkey on contract_amendment amendment  (cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002 rows=1 loops=7360)                                                                                                                                       Index Cond: (id = contractline.amendment)                                                                                                               ->  Index Scan using contract_contract_pkey on contract_contract contract  (cost=0.29..0.40 rows=1 width=24) (actual time=0.002..0.002 rows=1 loops=7359)                                                                                                                     Index Cond: (id = contractline.contract)                                                                                                         ->  Index Scan using contact_partner_pkey on contact_partner partner  (cost=0.42..0.74 rows=1 width=55) (actual time=0.002..0.002 rows=1 loops=7359)                                                                                                               Index Cond: (id = contract.main_client_partner)       
                                                                                            ->  Index Scan using contact_client_nature_pkey on contact_client_nature clientnature  (cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001 rows=1 loops=7359)                                                                                                         Index Cond: (id = partner.client_nature)                                                                                             ->  Index Scan using territory_mpt_pkey on territory_mpt mpt  (cost=0.29..0.46 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=7359)                                                                                                   Index Cond: (id = contractline.mpt)                                                                                       ->  Index Scan using contract_user_segment_pkey on contract_user_segment usersegment  (cost=0.14..0.15 rows=1 width=11) (actual time=0.001..0.001 rows=1 loops=7359)                                                                                             Index Cond: (id = amendment.user_segment)                                                                     ->  Nested Loop Left Join  (cost=0.99..1.64 rows=1 width=96) (actual time=0.021..0.025 rows=1 loops=7359)                                                                           ->  Nested Loop Left Join  (cost=0.85..1.35 rows=1 width=89) (actual time=0.017..0.020 rows=1 loops=7359)                                                                                 ->  Nested Loop Left Join  (cost=0.71..1.18 rows=1 width=76) (actual time=0.013..0.014 rows=1 loops=7359)                                                                                       ->  Index Scan using contact_address_pkey on contact_address a  (cost=0.42..0.85 rows=1 width=84) (actual time=0.005..0.006 rows=1 loops=7359)                                                                                             Index Cond: (mpt.address = id)                                                                                       ->  Index Scan using territory_commune_pkey on territory_commune commune  (cost=0.29..0.32 rows=1 width=16) (actual time=0.005..0.006 rows=1 loops=7359)                                                                                             Index Cond: (a.commune = id)                                                                                 ->  Index Scan using territory_department_pkey on territory_department dept  (cost=0.14..0.16 rows=1 width=37) (actual time=0.003..0.004 rows=1 loops=7359)                                                                                       Index Cond: (commune.department = id)                                                                           ->  Index Scan using territory_region_pkey on territory_region reg  (cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1 loops=7359)                                                                                 Index Cond: (dept.region = id)                                                               ->  Index Scan using administration_status_pkey on administration_status status  (cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003 rows=1 loops=7359)                                                                     Index Cond: (id = contractline.status)                                                         ->  GroupAggregate  (cost=3552.48..4479.27 rows=27827 width=80) 
(actual time=0.006..44.205 rows=27846 loops=7359)                                                               Group Key: sh.share_holder_partner                                                               ->  Sort  (cost=3552.48..3624.85 rows=28948 width=17) (actual time=0.003..2.913 rows=28946 loops=7359)                                                                     Sort Key: sh.share_holder_partner                                                                     Sort Method: quicksort  Memory: 3030kB                                                                     ->  Hash Join  (cost=2.23..1407.26 rows=28948 width=17) (actual time=0.024..12.296 rows=28946 loops=1)                                                                           Hash Cond: (sh.company = sh_coop.id)                                                                           ->  Seq Scan on shareholder_share_holder sh  (cost=0.00..1007.00 rows=28948 width=20) (actual time=0.007..5.495 rows=28946 loops=1)                                                                                 Filter: (nb_share > 0)                                                                                 Rows Removed by Filter: 1934                                                                           ->  Hash  (cost=2.10..2.10 rows=10 width=13) (actual time=0.009..0.009 rows=10 loops=1)                                                                                 Buckets: 1024  Batches: 1  Memory Usage: 9kB                                                                                 ->  Seq Scan on contact_company sh_coop  (cost=0.00..2.10 rows=10 width=13) (actual time=0.003..0.006 rows=10 loops=1)                                                   ->  Index Scan using crm_origin_pkey on crm_origin co  (cost=0.14..0.16 rows=1 width=19) (actual time=0.004..0.004 rows=1 loops=7359)                                                         Index Cond: (id = ws.how_meet_enercoop)                                             ->  Index Scan using contact_contact_pkey on contact_contact mc  (cost=0.42..0.65 rows=1 width=150) (actual time=0.007..0.008 rows=1 loops=7359)                                                   Index Cond: (partner.main_contact = id)                                       ->  Index Scan using contact_title_pkey on contact_title title  (cost=0.14..0.16 rows=1 width=42) (actual time=0.003..0.003 rows=1 loops=7359)                                             Index Cond: (mc.title = id)                                 ->  Index Scan using contact_address_pkey on contact_address adr_contact  (cost=0.43..0.70 rows=1 width=68) (actual time=0.005..0.005 rows=1 loops=7359)                                       Index Cond: (id = CASE WHEN (CASE WHEN ((partner.person_category_select)::text = 'naturalPerson'::text) THEN 'P'::text WHEN ((partner.person_category_select)::text = 'legalPerson'::text) THEN 'M'::text ELSE '?????'::text END = 'P'::text) THEN COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro, mc.address) END)                           ->  Index Scan using contact_contact_address_contact_idx on contact_contact_address cca  (cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1 loops=7359)                                 Index Cond: (contact = mc.id)                     ->  Index Scan using contact_contact_address_status_pkey on contact_contact_address_status npai  (cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000 rows=0 loops=7359)                           Index 
Cond: (cca.contact_address_status = id)               ->  Index Scan using crm_crm_request_original_contract_line_idx on crm_crm_request mesrequest  (cost=0.28..0.29 rows=1 width=16) (actual time=0.003..0.003 rows=0 loops=7359)                     Index Cond: (original_contract_line = contractline.id)         ->  Index Scan using sale_product_sub_family_pkey on sale_product_sub_family mesproductsubfamily  (cost=0.14..0.20 rows=1 width=62) (actual time=0.000..0.000 rows=0 loops=7359)               Index Cond: (id = mesrequest.product_sub_family)               Filter: (new_contract_ok IS TRUE) Planning time: 21.106 ms Execution time: 341275.027 ms(118 lignes)And the one I get without the where clause :                                                                                                                                                                                  QUERY PLAN                                                                                                                                                                                  ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Sort  (cost=144636.25..144837.81 rows=80627 width=1066)   Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC, status.name, contractline.id   ->  Hash Left Join  (cost=130533.89..138065.56 rows=80627 width=1066)         Hash Cond: (cca.contact_address_status = npai.id)         ->  Hash Right Join  (cost=130532.78..135132.88 rows=80627 width=561)               Hash Cond: ((cca.contact = mc.id) AND (cca.address = adr_contact.id))               ->  Seq Scan on contact_contact_address cca  (cost=0.00..3424.05 rows=156805 width=24)               ->  Hash  (cost=129323.37..129323.37 rows=80627 width=569)                     ->  Hash Left Join  (cost=127873.96..129323.37 rows=80627 width=569)                           Hash Cond: (CASE WHEN (CASE WHEN ((partner.person_category_select)::text = 'naturalPerson'::text) THEN 'P'::text WHEN ((partner.person_category_select)::text = 'legalPerson'::text) THEN 'M'::text ELSE '?????'::text END = 'P'::text) THEN COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro, mc.address) END = adr_contact.id)                           ->  Hash Right Join  (cost=114435.97..114474.41 rows=80627 width=501)                                 Hash Cond: (mesrequest.original_contract_line = contractline.id)                                 ->  Hash Left Join  (cost=7.49..43.37 rows=681 width=62)                                       Hash Cond: (mesrequest.product_sub_family = mesproductsubfamily.id)                                       ->  Seq Scan on crm_crm_request mesrequest  (cost=0.00..32.81 rows=681 width=16)                                       ->  Hash  (cost=7.28..7.28 rows=17 width=62)                                             ->  Seq Scan on sale_product_sub_family mesproductsubfamily  (cost=0.00..7.28 rows=17 width=62)                                                   Filter: (new_contract_ok IS TRUE)                                 ->  Hash  (cost=113420.64..113420.64 rows=80627 width=447)                                       ->  Hash Left Join  (cost=98148.14..113420.64 rows=80627 width=447)                              
               Hash Cond: (mc.title = title.id)                                             ->  Hash Left Join  (cost=98145.72..112484.37 rows=80627 width=421)                                                   Hash Cond: (contractline.status = status.id)                                                   ->  Hash Left Join  (cost=98143.30..111373.33 rows=80627 width=416)                                                         Hash Cond: (mpt.address = a.id)                                                         ->  Hash Left Join  (cost=79299.88..91422.10 rows=80627 width=336)                                                               Hash Cond: (contractline.id = ws.contract_line)                                                               ->  Hash Right Join  (cost=72530.89..83867.87 rows=80627 width=317)                                                                     Hash Cond: (mc.id = partner.main_contact)                                                                     ->  Seq Scan on contact_contact mc  (cost=0.00..8524.65 rows=229265 width=150)                                                                     ->  Hash  (cost=71523.05..71523.05 rows=80627 width=175)                                                                           ->  Hash Right Join  (cost=70040.37..71523.05 rows=80627 width=175)                                                                                 Hash Cond: (sh.share_holder_partner = partner.id)                                                                                 ->  GroupAggregate  (cost=3552.48..4479.27 rows=27827 width=80)                                                                                       Group Key: sh.share_holder_partner                                                                                       ->  Sort  (cost=3552.48..3624.85 rows=28948 width=17)                                                                                             Sort Key: sh.share_holder_partner                                                                                             ->  Hash Join  (cost=2.23..1407.26 rows=28948 width=17)                                                                                                   Hash Cond: (sh.company = sh_coop.id)                                                                                                   ->  Seq Scan on shareholder_share_holder sh  (cost=0.00..1007.00 rows=28948 width=20)                                                                                                         Filter: (nb_share > 0)                                                                                                   ->  Hash  (cost=2.10..2.10 rows=10 width=13)                                                                                                         ->  Seq Scan on contact_company sh_coop  (cost=0.00..2.10 rows=10 width=13)                                                                                 ->  Hash  (cost=65480.05..65480.05 rows=80627 width=143)                                                                                       ->  Hash Left Join  (cost=47310.33..65480.05 rows=80627 width=143)                                                                                             Hash Cond: (amendment.user_segment = usersegment.id)                                                                                             ->  Hash Join  (cost=47309.02..64370.12 rows=80627 width=148)                                         
                                                          Hash Cond: (contractline.mpt = mpt.id)                                                                                                   ->  Hash Left Join  (cost=42733.67..58686.26 rows=80627 width=148)                                                                                                         Hash Cond: (partner.client_nature = clientnature.id)                                                                                                         ->  Hash Join  (cost=42732.36..57971.72 rows=80627 width=140)                                                                                                               Hash Cond: (contractline.business_provider_partner = businessprovider.id)                                                                                                               ->  Hash Join  (cost=35201.74..49333.07 rows=80627 width=143)                                                                                                                     Hash Cond: (contractline.contract = contract.id)                                                                                                                     ->  Hash Join  (cost=24290.54..37313.25 rows=80627 width=96)                                                                                                                           Hash Cond: (amendment.id = contractline.amendment)                                                                                                                           ->  Hash Right Join  (cost=17963.43..29866.37 rows=83629 width=60)                                                                                                                                 Hash Cond: (subscribed_power.amendment = amendment.id)                                                                                                                                 ->  GroupAggregate  (cost=12457.78..22574.03 rows=75229 width=168)                                                                                                                                       Group Key: subscribed_power.amendment                                                                                                                                       ->  Merge Join  (cost=12457.78..20764.08 rows=173917 width=12)                                                                                                                                             Merge Cond: (subscribed_power.amendment = amendment_1.id)                                                                                                                                             ->  Index Scan using contract_subscribed_power_amendment_idx on contract_subscribed_power subscribed_power  (cost=0.42..13523.09 rows=173917 width=12)                                                                                                                                             ->  Sort  (cost=12457.36..12666.43 rows=83629 width=8)                                                                                                                                                   Sort Key: amendment_1.id                                                                                                                                                   ->  Hash Join  (cost=10.21..5619.97 rows=83629 width=8)                                                                                                           
                                              Hash Cond: (amendment_1.pricing = pricing.id)                                                                                                                                                         ->  Seq Scan on contract_amendment amendment_1  (cost=0.00..4460.29 rows=83629 width=16)                                                                                                                                                         ->  Hash  (cost=8.43..8.43 rows=142 width=8)                                                                                                                                                               ->  Hash Join  (cost=1.07..8.43 rows=142 width=8)                                                                                                                                                                     Hash Cond: (pricing.elec_range = elec_range.id)                                                                                                                                                                     ->  Seq Scan on pricing_pricing pricing  (cost=0.00..5.42 rows=142 width=16)                                                                                                                                                                     ->  Hash  (cost=1.03..1.03 rows=3 width=8)                                                                                                                                                                           ->  Seq Scan on fluid_elec_range elec_range  (cost=0.00..1.03 rows=3 width=8)                                                                                                                                 ->  Hash  (cost=4460.29..4460.29 rows=83629 width=28)                                                                                                                                       ->  Seq Scan on contract_amendment amendment  (cost=0.00..4460.29 rows=83629 width=28)                                                                                                                           ->  Hash  (cost=5319.27..5319.27 rows=80627 width=52)                                                                                                                                 ->  Seq Scan on contract_contract_line contractline  (cost=0.00..5319.27 rows=80627 width=52)                                                                                                                     ->  Hash  (cost=10091.85..10091.85 rows=65548 width=63)                                                                                                                           ->  Hash Join  (cost=3038.83..10091.85 rows=65548 width=63)                                                                                                                                 Hash Cond: (partner.id = contract.main_client_partner)                                                                                                                                 ->  Seq Scan on contact_partner partner  (cost=0.00..5911.94 rows=129494 width=55)                                                                                                                                 ->  Hash  (cost=2219.48..2219.48 rows=65548 width=24)                                                                                                                                       ->  Seq Scan on contract_contract contract  
(cost=0.00..2219.48 rows=65548 width=24)                                                                                                               ->  Hash  (cost=5911.94..5911.94 rows=129494 width=13)                                                                                                                     ->  Seq Scan on contact_partner businessprovider  (cost=0.00..5911.94 rows=129494 width=13)                                                                                                         ->  Hash  (cost=1.14..1.14 rows=14 width=24)                                                                                                               ->  Seq Scan on contact_client_nature clientnature  (cost=0.00..1.14 rows=14 width=24)                                                                                                   ->  Hash  (cost=3602.93..3602.93 rows=77793 width=16)                                                                                                         ->  Seq Scan on territory_mpt mpt  (cost=0.00..3602.93 rows=77793 width=16)                                                                                             ->  Hash  (cost=1.14..1.14 rows=14 width=11)                                                                                                   ->  Seq Scan on contract_user_segment usersegment  (cost=0.00..1.14 rows=14 width=11)                                                               ->  Hash  (cost=6107.94..6107.94 rows=52884 width=27)                                                                     ->  Hash Left Join  (cost=1.94..6107.94 rows=52884 width=27)                                                                           Hash Cond: (ws.how_meet_enercoop = co.id)                                                                           ->  Seq Scan on shareholder_web_subscription ws  (cost=0.00..5378.84 rows=52884 width=24)                                                                           ->  Hash  (cost=1.42..1.42 rows=42 width=19)                                                                                 ->  Seq Scan on crm_origin co  (cost=0.00..1.42 rows=42 width=19)                                                         ->  Hash  (cost=15431.77..15431.77 rows=272933 width=96)                                                               ->  Hash Left Join  (cost=2101.31..15431.77 rows=272933 width=96)                                                                     Hash Cond: (a.commune = commune.id)                                                                     ->  Seq Scan on contact_address a  (cost=0.00..10026.33 rows=272933 width=84)                                                                     ->  Hash  (cost=1641.83..1641.83 rows=36758 width=36)                                                                           ->  Hash Left Join  (cost=7.27..1641.83 rows=36758 width=36)                                                                                 Hash Cond: (commune.department = dept.id)                                                                                 ->  Seq Scan on territory_commune commune  (cost=0.00..1129.58 rows=36758 width=16)                                                                                 ->  Hash  (cost=6.01..6.01 rows=101 width=36)                                                                                       ->  Hash Left Join  (cost=1.61..6.01 rows=101 width=36)                                                   
                                          Hash Cond: (dept.region = reg.id)                                                                                             ->  Seq Scan on territory_department dept  (cost=0.00..3.01 rows=101 width=37)                                                                                             ->  Hash  (cost=1.27..1.27 rows=27 width=23)                                                                                                   ->  Seq Scan on territory_region reg  (cost=0.00..1.27 rows=27 width=23)                                                   ->  Hash  (cost=1.63..1.63 rows=63 width=21)                                                         ->  Seq Scan on administration_status status  (cost=0.00..1.63 rows=63 width=21)                                             ->  Hash  (cost=1.63..1.63 rows=63 width=42)                                                   ->  Seq Scan on contact_title title  (cost=0.00..1.63 rows=63 width=42)                           ->  Hash  (cost=10026.33..10026.33 rows=272933 width=68)                                 ->  Seq Scan on contact_address adr_contact  (cost=0.00..10026.33 rows=272933 width=68)         ->  Hash  (cost=1.05..1.05 rows=5 width=9)               ->  Seq Scan on contact_contact_address_status npai  (cost=0.00..1.05 rows=5 width=9)(120 lignes)--http://www.laurentmartelli.com    //    http://www.imprimart.fr", "msg_date": "Tue, 23 Jan 2018 13:03:49 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan" }, { "msg_contents": "On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:\n> Hello all,\n> \n> So I have a view, for which I can select all rows in about 3s (returns ~80k\n> rows), but if I add a where clause on a column, it takes +300s to return\n> the ~8k lines.\n> \n> From the plan, I see that it expects to return only 1 row and so choose to\n> perform some nested loops. Of course, I did run \"ANALYZE\", but with no\n> success.\n> \n> I managed to speed things up with \"set enable_nestloop = false;\", but is\n> that the only choice I have ? Should I report a bug ?\n\n\n> Here is the default plan :\n\nCan you resend without line breaks or paste a link to explain.depesz?\n\nThe problem appears to be here:\n\n-> Nested Loop Left Join (cost=32067.09..39197.85 rows=1 width=276) (actual time=342.725..340775.031 rows=7359 loops=1)\nJoin Filter: (sh.share_holder_partner = partner.id)\nRows Removed by Join Filter: 204915707\n\nJustin\n\n", "msg_date": "Tue, 23 Jan 2018 09:18:48 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan" }, { "msg_contents": "2018-01-23 16:18 GMT+01:00 Justin Pryzby <[email protected]>:\n> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:\n>\n>> Here is the default plan :\n>\n> Can you resend without line breaks or paste a link to explain.depesz?\n\nI hope it's better like that. 
I've attached it too, just in case.\n\n>\n> The problem appears to be here:\n>\n> -> Nested Loop Left Join (cost=32067.09..39197.85 rows=1 width=276) (actual time=342.725..340775.031 rows=7359 loops=1)\n> Join Filter: (sh.share_holder_partner = partner.id)\n> Rows Removed by Join Filter: 204915707\n>\n> Justin\n\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=39200.76..39200.76 rows=1 width=1066) (actual\ntime=341273.300..341274.244 rows=7359 loops=1)\n Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\nstatus.name, contractline.id\n Sort Method: quicksort Memory: 3930kB\n -> Nested Loop Left Join (cost=32069.19..39200.75 rows=1\nwidth=1066) (actual time=342.806..341203.151 rows=7359 loops=1)\n -> Nested Loop Left Join (cost=32069.05..39200.50 rows=1\nwidth=508) (actual time=342.784..341102.848 rows=7359 loops=1)\n -> Nested Loop Left Join (cost=32068.77..39200.20\nrows=1 width=500) (actual time=342.778..341070.310 rows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32068.64..39200.04 rows=1 width=507) (actual\ntime=342.776..341058.256 rows=7359 loops=1)\n Join Filter: (cca.address = adr_contact.id)\n Rows Removed by Join Filter: 2254\n -> Nested Loop Left Join\n(cost=32068.22..39199.55 rows=1 width=515) (actual\ntime=342.767..340997.058 rows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32067.79..39198.84 rows=1 width=447) (actual\ntime=342.753..340932.286 rows=7359 loops=1)\n -> Nested Loop Left Join\n(cost=32067.65..39198.67 rows=1 width=421) (actual\ntime=342.748..340896.132 rows=7359 loops=1)\n -> Nested Loop Left Join\n (cost=32067.23..39198.01 rows=1 width=279) (actual\ntime=342.739..340821.987 rows=7359 loops=1)\n -> Nested Loop\nLeft Join (cost=32067.09..39197.85 rows=1 width=276) (actual\ntime=342.725..340775.031 rows=7359 loops=1)\n Join Filter:\n(sh.share_holder_partner = partner.id)\n Rows Removed\nby Join Filter: 204915707\n -> Nested\nLoop Left Join (cost=28514.61..34092.46 rows=1 width=244) (actual\ntime=287.323..610.192 rows=7359 loops=1)\n ->\nNested Loop Left Join (cost=28514.47..34092.30 rows=1 width=239)\n(actual time=287.318..573.234 rows=7359 loops=1)\n\n-> Hash Right Join (cost=28513.48..34090.65 rows=1 width=159)\n(actual time=287.293..379.564 rows=7359 loops=1)\n\n Hash Cond: (ws.contract_line = contractline.id)\n\n -> Seq Scan on shareholder_web_subscription ws\n(cost=0.00..5378.84 rows=52884 width=24) (actual time=0.006..12.307\nrows=52884 loops=1)\n\n -> Hash (cost=28513.47..28513.47 rows=1 width=143) (actual\ntime=287.243..287.243 rows=7359 loops=1)\n\n Buckets: 8192 (originally 1024) Batches: 1 (originally 1)\nMemory Usage: 1173kB\n\n -> Nested Loop Left Join (cost=17456.16..28513.47 rows=1\nwidth=143) (actual time=85.005..284.689 rows=7359 loops=1)\n\n -> Nested Loop (cost=17456.03..28513.31 rows=1\nwidth=148) (actual time=85.000..276.599 rows=7359 loops=1)\n\n -> Nested Loop Left Join\n(cost=17455.73..28512.84 rows=1 width=148) (actual\ntime=84.993..261.954 rows=7359 loops=1)\n\n -> Nested Loop (cost=17455.60..28512.67\nrows=1 width=140) (actual time=84.989..253.715 rows=7359 loops=1)\n\n -> Nested Loop\n(cost=17455.18..28511.93 rows=1 width=93) (actual time=84.981..230.977\nrows=7359 
loops=1)\n\n -> Merge Right Join\n(cost=17454.89..28511.52 rows=1 width=93) (actual time=84.974..211.200\nrows=7359 loops=1)\n\n Merge Cond:\n(subscribed_power.amendment = amendment.id)\n\n -> GroupAggregate\n(cost=12457.78..22574.03 rows=75229 width=168) (actual\ntime=57.500..175.674 rows=83432 loops=1)\n\n Group Key:\nsubscribed_power.amendment\n\n -> Merge Join\n(cost=12457.78..20764.08 rows=173917 width=12) (actual\ntime=57.479..129.530 rows=87938 loops=1)\n\n Merge Cond:\n(subscribed_power.amendment = amendment_1.id)\n\n -> Index\nScan using contract_subscribed_power_amendment_idx on\ncontract_subscribed_power subscribed_power (cost=0.42..13523.09\nrows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)\n\n -> Sort\n(cost=12457.36..12666.43 rows=83629 width=8) (actual\ntime=57.467..67.071 rows=88019 loops=1)\n\n Sort\nKey: amendment_1.id\n\n Sort\nMethod: quicksort Memory: 6988kB\n\n ->\nHash Join (cost=10.21..5619.97 rows=83629 width=8) (actual\ntime=0.112..40.965 rows=83532 loops=1)\n\n\nHash Cond: (amendment_1.pricing = pricing.id)\n\n\n-> Seq Scan on contract_amendment amendment_1 (cost=0.00..4460.29\nrows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)\n\n\n-> Hash (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095\nrows=141 loops=1)\n\n\n Buckets: 1024 Batches: 1 Memory Usage: 14kB\n\n\n -> Hash Join (cost=1.07..8.43 rows=142 width=8) (actual\ntime=0.012..0.078 rows=141 loops=1)\n\n\n Hash Cond: (pricing.elec_range = elec_range.id)\n\n\n -> Seq Scan on pricing_pricing pricing (cost=0.00..5.42\nrows=142 width=16) (actual time=0.003..0.015 rows=142 loops=1)\n\n\n -> Hash (cost=1.03..1.03 rows=3 width=8) (actual\ntime=0.006..0.006 rows=3 loops=1)\n\n\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n\n\n -> Seq Scan on fluid_elec_range elec_range\n(cost=0.00..1.03 rows=3 width=8) (actual time=0.003..0.005 rows=3\nloops=1)\n\n -> Sort\n(cost=4997.11..4997.11 rows=1 width=69) (actual time=27.427..28.896\nrows=7359 loops=1)\n\n Sort Key:\namendment.id\n\n Sort Method:\nquicksort Memory: 1227kB\n\n -> Nested Loop\n(cost=183.44..4997.10 rows=1 width=69) (actual time=1.115..24.616\nrows=7359 loops=1)\n\n -> Nested\nLoop (cost=183.15..4996.59 rows=1 width=49) (actual time=1.107..9.091\nrows=7360 loops=1)\n\n ->\nIndex Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner\nbusinessprovider (cost=0.42..8.44 rows=1 width=13) (actual\ntime=0.010..0.010 rows=1 loops=1)\n\n\nIndex Cond: ((business_provider_code)::text = 'BRZH'::text)\n\n ->\nBitmap Heap Scan on contract_contract_line contractline\n(cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231\nrows=7360 loops=1)\n\n\nRecheck Cond: (business_provider_partner = businessprovider.id)\n\n\nHeap Blocks: exact=3586\n\n\n-> Bitmap Index Scan on\ncontract_contract_line_business_provider_partner_idx\n(cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\nrows=7360 loops=1)\n\n\n Index Cond: (business_provider_partner = businessprovider.id)\n\n -> Index\nScan using contract_amendment_pkey on contract_amendment amendment\n(cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002 rows=1\nloops=7360)\n\n Index\nCond: (id = contractline.amendment)\n\n -> Index Scan using\ncontract_contract_pkey on contract_contract contract (cost=0.29..0.40\nrows=1 width=24) (actual time=0.002..0.002 rows=1 loops=7359)\n\n Index Cond: (id =\ncontractline.contract)\n\n -> Index Scan using\ncontact_partner_pkey on contact_partner partner (cost=0.42..0.74\nrows=1 width=55) (actual time=0.002..0.002 rows=1 
loops=7359)\n\n Index Cond: (id =\ncontract.main_client_partner)\n\n -> Index Scan using\ncontact_client_nature_pkey on contact_client_nature clientnature\n(cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001 rows=1\nloops=7359)\n\n Index Cond: (id =\npartner.client_nature)\n\n -> Index Scan using territory_mpt_pkey on\nterritory_mpt mpt (cost=0.29..0.46 rows=1 width=16) (actual\ntime=0.001..0.001 rows=1 loops=7359)\n\n Index Cond: (id = contractline.mpt)\n\n -> Index Scan using contract_user_segment_pkey on\ncontract_user_segment usersegment (cost=0.14..0.15 rows=1 width=11)\n(actual time=0.001..0.001 rows=1 loops=7359)\n\n Index Cond: (id = amendment.user_segment)\n\n-> Nested Loop Left Join (cost=0.99..1.64 rows=1 width=96) (actual\ntime=0.021..0.025 rows=1 loops=7359)\n\n -> Nested Loop Left Join (cost=0.85..1.35 rows=1 width=89)\n(actual time=0.017..0.020 rows=1 loops=7359)\n\n -> Nested Loop Left Join (cost=0.71..1.18 rows=1 width=76)\n(actual time=0.013..0.014 rows=1 loops=7359)\n\n -> Index Scan using contact_address_pkey on\ncontact_address a (cost=0.42..0.85 rows=1 width=84) (actual\ntime=0.005..0.006 rows=1 loops=7359)\n\n Index Cond: (mpt.address = id)\n\n -> Index Scan using territory_commune_pkey on\nterritory_commune commune (cost=0.29..0.32 rows=1 width=16) (actual\ntime=0.005..0.006 rows=1 loops=7359)\n\n Index Cond: (a.commune = id)\n\n -> Index Scan using territory_department_pkey on\nterritory_department dept (cost=0.14..0.16 rows=1 width=37) (actual\ntime=0.003..0.004 rows=1 loops=7359)\n\n Index Cond: (commune.department = id)\n\n -> Index Scan using territory_region_pkey on territory_region reg\n (cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1\nloops=7359)\n\n Index Cond: (dept.region = id)\n ->\nIndex Scan using administration_status_pkey on administration_status\nstatus (cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003\nrows=1 loops=7359)\n\nIndex Cond: (id = contractline.status)\n ->\nGroupAggregate (cost=3552.48..4479.27 rows=27827 width=80) (actual\ntime=0.006..44.205 rows=27846 loops=7359)\n Group\nKey: sh.share_holder_partner\n ->\nSort (cost=3552.48..3624.85 rows=28948 width=17) (actual\ntime=0.003..2.913 rows=28946 loops=7359)\n\nSort Key: sh.share_holder_partner\n\nSort Method: quicksort Memory: 3030kB\n\n-> Hash Join (cost=2.23..1407.26 rows=28948 width=17) (actual\ntime=0.024..12.296 rows=28946 loops=1)\n\n Hash Cond: (sh.company = sh_coop.id)\n\n -> Seq Scan on shareholder_share_holder sh (cost=0.00..1007.00\nrows=28948 width=20) (actual time=0.007..5.495 rows=28946 loops=1)\n\n Filter: (nb_share > 0)\n\n Rows Removed by Filter: 1934\n\n -> Hash (cost=2.10..2.10 rows=10 width=13) (actual\ntime=0.009..0.009 rows=10 loops=1)\n\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n\n -> Seq Scan on contact_company sh_coop (cost=0.00..2.10\nrows=10 width=13) (actual time=0.003..0.006 rows=10 loops=1)\n -> Index Scan\nusing crm_origin_pkey on crm_origin co (cost=0.14..0.16 rows=1\nwidth=19) (actual time=0.004..0.004 rows=1 loops=7359)\n Index Cond:\n(id = ws.how_meet_enercoop)\n -> Index Scan using\ncontact_contact_pkey on contact_contact mc (cost=0.42..0.65 rows=1\nwidth=150) (actual time=0.007..0.008 rows=1 loops=7359)\n Index Cond:\n(partner.main_contact = id)\n -> Index Scan using\ncontact_title_pkey on contact_title title (cost=0.14..0.16 rows=1\nwidth=42) (actual time=0.003..0.003 rows=1 loops=7359)\n Index Cond: (mc.title = id)\n -> Index Scan using\ncontact_address_pkey on contact_address adr_contact 
(cost=0.43..0.70\nrows=1 width=68) (actual time=0.005..0.005 rows=1 loops=7359)\n Index Cond: (id = CASE WHEN\n(CASE WHEN ((partner.person_category_select)::text =\n'naturalPerson'::text) THEN 'P'::text WHEN\n((partner.person_category_select)::text = 'legalPerson'::text) THEN\n'M'::text ELSE '?????'::text END = 'P'::text) THEN\nCOALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\nmc.address) END)\n -> Index Scan using\ncontact_contact_address_contact_idx on contact_contact_address cca\n(cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1\nloops=7359)\n Index Cond: (contact = mc.id)\n -> Index Scan using\ncontact_contact_address_status_pkey on contact_contact_address_status\nnpai (cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000\nrows=0 loops=7359)\n Index Cond: (cca.contact_address_status = id)\n -> Index Scan using\ncrm_crm_request_original_contract_line_idx on crm_crm_request\nmesrequest (cost=0.28..0.29 rows=1 width=16) (actual\ntime=0.003..0.003 rows=0 loops=7359)\n Index Cond: (original_contract_line = contractline.id)\n -> Index Scan using sale_product_sub_family_pkey on\nsale_product_sub_family mesproductsubfamily (cost=0.14..0.20 rows=1\nwidth=62) (actual time=0.000..0.000 rows=0 loops=7359)\n Index Cond: (id = mesrequest.product_sub_family)\n Filter: (new_contract_ok IS TRUE)\n Planning time: 21.106 ms\n Execution time: 341275.027 ms\n(118 lignes)\n\n\n-- \nhttp://www.laurentmartelli.com // http://www.imprimart.fr", "msg_date": "Tue, 23 Jan 2018 16:38:46 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan" }, { "msg_contents": "I've have a look to the plan with pgadmin, and I think the problem is\nrather here :\n\n-> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual\ntime=27.427..28.896 rows=7359 loops=1)\n Sort Key: amendment.id\n Sort Method: quicksort Memory: 1227kB\n -> Nested Loop (cost=183.44..4997.10 rows=1 width=69) (actual\ntime=1.115..24.616 rows=7359 loops=1)\n -> Nested Loop (cost=183.15..4996.59 rows=1 width=49)\n(actual time=1.107..9.091 rows=7360 loops=1)\n -> Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on\ncontact_partner businessprovider (cost=0.42..8.44 rows=1 width=13)\n(actual time=0.010..0.010 rows=1 loops=1)\n Index Cond: ((business_provider_code)::text =\n'BRZH'::text)\n -> Bitmap Heap Scan on contract_contract_line\ncontractline (cost=182.73..4907.58 rows=8057 width=52) (actual\ntime=1.086..5.231 rows=7360 loops=1)\n Recheck Cond: (business_provider_partner =\nbusinessprovider.id)\n Heap Blocks: exact=3586\n -> Bitmap Index Scan on\ncontract_contract_line_business_provider_partner_idx\n(cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\nrows=7360 loops=1)\n Index Cond: (business_provider_partner =\nbusinessprovider.id)\n -> Index Scan using contract_amendment_pkey on\ncontract_amendment amendment (cost=0.29..0.50 rows=1 width=28)\n(actual time=0.001..0.002 rows=1 loops=7360)\n Index Cond: (id = contractline.amendment)\n\nThe bitmap scan on contract_contract_line is good (8057 vs 7360 rows),\nand so is the index scan (1 row), but the JOIN with \"contact_partner\nbusinessProvider\" should give the 8057 rows from the bitmap scan,\nshouldn't it ?\n\n\n2018-01-23 16:38 GMT+01:00 Laurent Martelli <[email protected]>:\n> 2018-01-23 16:18 GMT+01:00 Justin Pryzby <[email protected]>:\n>> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:\n>>\n>>> Here is the default plan :\n>>\n>> Can you resend without line breaks or paste a link to 
explain.depesz?\n>\n> I hope it's better like that. I've attached it too, just in case.\n>\n>>\n>> The problem appears to be here:\n>>\n>> -> Nested Loop Left Join (cost=32067.09..39197.85 rows=1 width=276) (actual time=342.725..340775.031 rows=7359 loops=1)\n>> Join Filter: (sh.share_holder_partner = partner.id)\n>> Rows Removed by Join Filter: 204915707\n>>\n>> Justin\n>\n>\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=39200.76..39200.76 rows=1 width=1066) (actual\n> time=341273.300..341274.244 rows=7359 loops=1)\n> Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\n> status.name, contractline.id\n> Sort Method: quicksort Memory: 3930kB\n> -> Nested Loop Left Join (cost=32069.19..39200.75 rows=1\n> width=1066) (actual time=342.806..341203.151 rows=7359 loops=1)\n> -> Nested Loop Left Join (cost=32069.05..39200.50 rows=1\n> width=508) (actual time=342.784..341102.848 rows=7359 loops=1)\n> -> Nested Loop Left Join (cost=32068.77..39200.20\n> rows=1 width=500) (actual time=342.778..341070.310 rows=7359 loops=1)\n> -> Nested Loop Left Join\n> (cost=32068.64..39200.04 rows=1 width=507) (actual\n> time=342.776..341058.256 rows=7359 loops=1)\n> Join Filter: (cca.address = adr_contact.id)\n> Rows Removed by Join Filter: 2254\n> -> Nested Loop Left Join\n> (cost=32068.22..39199.55 rows=1 width=515) (actual\n> time=342.767..340997.058 rows=7359 loops=1)\n> -> Nested Loop Left Join\n> (cost=32067.79..39198.84 rows=1 width=447) (actual\n> time=342.753..340932.286 rows=7359 loops=1)\n> -> Nested Loop Left Join\n> (cost=32067.65..39198.67 rows=1 width=421) (actual\n> time=342.748..340896.132 rows=7359 loops=1)\n> -> Nested Loop Left Join\n> (cost=32067.23..39198.01 rows=1 width=279) (actual\n> time=342.739..340821.987 rows=7359 loops=1)\n> -> Nested Loop\n> Left Join (cost=32067.09..39197.85 rows=1 width=276) (actual\n> time=342.725..340775.031 rows=7359 loops=1)\n> Join Filter:\n> (sh.share_holder_partner = partner.id)\n> Rows Removed\n> by Join Filter: 204915707\n> -> Nested\n> Loop Left Join (cost=28514.61..34092.46 rows=1 width=244) (actual\n> time=287.323..610.192 rows=7359 loops=1)\n> ->\n> Nested Loop Left Join (cost=28514.47..34092.30 rows=1 width=239)\n> (actual time=287.318..573.234 rows=7359 loops=1)\n>\n> -> Hash Right Join (cost=28513.48..34090.65 rows=1 width=159)\n> (actual time=287.293..379.564 rows=7359 loops=1)\n>\n> Hash Cond: (ws.contract_line = contractline.id)\n>\n> -> Seq Scan on shareholder_web_subscription ws\n> (cost=0.00..5378.84 rows=52884 width=24) (actual time=0.006..12.307\n> rows=52884 loops=1)\n>\n> -> Hash (cost=28513.47..28513.47 rows=1 width=143) (actual\n> time=287.243..287.243 rows=7359 loops=1)\n>\n> Buckets: 8192 (originally 1024) Batches: 1 (originally 1)\n> Memory Usage: 1173kB\n>\n> -> Nested Loop Left Join (cost=17456.16..28513.47 rows=1\n> width=143) (actual time=85.005..284.689 rows=7359 loops=1)\n>\n> -> Nested Loop (cost=17456.03..28513.31 rows=1\n> width=148) (actual time=85.000..276.599 rows=7359 loops=1)\n>\n> -> Nested Loop Left Join\n> (cost=17455.73..28512.84 rows=1 width=148) (actual\n> time=84.993..261.954 rows=7359 loops=1)\n>\n> -> Nested Loop (cost=17455.60..28512.67\n> rows=1 
width=140) (actual time=84.989..253.715 rows=7359 loops=1)\n>\n> -> Nested Loop\n> (cost=17455.18..28511.93 rows=1 width=93) (actual time=84.981..230.977\n> rows=7359 loops=1)\n>\n> -> Merge Right Join\n> (cost=17454.89..28511.52 rows=1 width=93) (actual time=84.974..211.200\n> rows=7359 loops=1)\n>\n> Merge Cond:\n> (subscribed_power.amendment = amendment.id)\n>\n> -> GroupAggregate\n> (cost=12457.78..22574.03 rows=75229 width=168) (actual\n> time=57.500..175.674 rows=83432 loops=1)\n>\n> Group Key:\n> subscribed_power.amendment\n>\n> -> Merge Join\n> (cost=12457.78..20764.08 rows=173917 width=12) (actual\n> time=57.479..129.530 rows=87938 loops=1)\n>\n> Merge Cond:\n> (subscribed_power.amendment = amendment_1.id)\n>\n> -> Index\n> Scan using contract_subscribed_power_amendment_idx on\n> contract_subscribed_power subscribed_power (cost=0.42..13523.09\n> rows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)\n>\n> -> Sort\n> (cost=12457.36..12666.43 rows=83629 width=8) (actual\n> time=57.467..67.071 rows=88019 loops=1)\n>\n> Sort\n> Key: amendment_1.id\n>\n> Sort\n> Method: quicksort Memory: 6988kB\n>\n> ->\n> Hash Join (cost=10.21..5619.97 rows=83629 width=8) (actual\n> time=0.112..40.965 rows=83532 loops=1)\n>\n>\n> Hash Cond: (amendment_1.pricing = pricing.id)\n>\n>\n> -> Seq Scan on contract_amendment amendment_1 (cost=0.00..4460.29\n> rows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)\n>\n>\n> -> Hash (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095\n> rows=141 loops=1)\n>\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 14kB\n>\n>\n> -> Hash Join (cost=1.07..8.43 rows=142 width=8) (actual\n> time=0.012..0.078 rows=141 loops=1)\n>\n>\n> Hash Cond: (pricing.elec_range = elec_range.id)\n>\n>\n> -> Seq Scan on pricing_pricing pricing (cost=0.00..5.42\n> rows=142 width=16) (actual time=0.003..0.015 rows=142 loops=1)\n>\n>\n> -> Hash (cost=1.03..1.03 rows=3 width=8) (actual\n> time=0.006..0.006 rows=3 loops=1)\n>\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>\n>\n> -> Seq Scan on fluid_elec_range elec_range\n> (cost=0.00..1.03 rows=3 width=8) (actual time=0.003..0.005 rows=3\n> loops=1)\n>\n> -> Sort\n> (cost=4997.11..4997.11 rows=1 width=69) (actual time=27.427..28.896\n> rows=7359 loops=1)\n>\n> Sort Key:\n> amendment.id\n>\n> Sort Method:\n> quicksort Memory: 1227kB\n>\n> -> Nested Loop\n> (cost=183.44..4997.10 rows=1 width=69) (actual time=1.115..24.616\n> rows=7359 loops=1)\n>\n> -> Nested\n> Loop (cost=183.15..4996.59 rows=1 width=49) (actual time=1.107..9.091\n> rows=7360 loops=1)\n>\n> ->\n> Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner\n> businessprovider (cost=0.42..8.44 rows=1 width=13) (actual\n> time=0.010..0.010 rows=1 loops=1)\n>\n>\n> Index Cond: ((business_provider_code)::text = 'BRZH'::text)\n>\n> ->\n> Bitmap Heap Scan on contract_contract_line contractline\n> (cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231\n> rows=7360 loops=1)\n>\n>\n> Recheck Cond: (business_provider_partner = businessprovider.id)\n>\n>\n> Heap Blocks: exact=3586\n>\n>\n> -> Bitmap Index Scan on\n> contract_contract_line_business_provider_partner_idx\n> (cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\n> rows=7360 loops=1)\n>\n>\n> Index Cond: (business_provider_partner = businessprovider.id)\n>\n> -> Index\n> Scan using contract_amendment_pkey on contract_amendment amendment\n> (cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002 rows=1\n> loops=7360)\n>\n> Index\n> Cond: (id = 
contractline.amendment)\n>\n> -> Index Scan using\n> contract_contract_pkey on contract_contract contract (cost=0.29..0.40\n> rows=1 width=24) (actual time=0.002..0.002 rows=1 loops=7359)\n>\n> Index Cond: (id =\n> contractline.contract)\n>\n> -> Index Scan using\n> contact_partner_pkey on contact_partner partner (cost=0.42..0.74\n> rows=1 width=55) (actual time=0.002..0.002 rows=1 loops=7359)\n>\n> Index Cond: (id =\n> contract.main_client_partner)\n>\n> -> Index Scan using\n> contact_client_nature_pkey on contact_client_nature clientnature\n> (cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001 rows=1\n> loops=7359)\n>\n> Index Cond: (id =\n> partner.client_nature)\n>\n> -> Index Scan using territory_mpt_pkey on\n> territory_mpt mpt (cost=0.29..0.46 rows=1 width=16) (actual\n> time=0.001..0.001 rows=1 loops=7359)\n>\n> Index Cond: (id = contractline.mpt)\n>\n> -> Index Scan using contract_user_segment_pkey on\n> contract_user_segment usersegment (cost=0.14..0.15 rows=1 width=11)\n> (actual time=0.001..0.001 rows=1 loops=7359)\n>\n> Index Cond: (id = amendment.user_segment)\n>\n> -> Nested Loop Left Join (cost=0.99..1.64 rows=1 width=96) (actual\n> time=0.021..0.025 rows=1 loops=7359)\n>\n> -> Nested Loop Left Join (cost=0.85..1.35 rows=1 width=89)\n> (actual time=0.017..0.020 rows=1 loops=7359)\n>\n> -> Nested Loop Left Join (cost=0.71..1.18 rows=1 width=76)\n> (actual time=0.013..0.014 rows=1 loops=7359)\n>\n> -> Index Scan using contact_address_pkey on\n> contact_address a (cost=0.42..0.85 rows=1 width=84) (actual\n> time=0.005..0.006 rows=1 loops=7359)\n>\n> Index Cond: (mpt.address = id)\n>\n> -> Index Scan using territory_commune_pkey on\n> territory_commune commune (cost=0.29..0.32 rows=1 width=16) (actual\n> time=0.005..0.006 rows=1 loops=7359)\n>\n> Index Cond: (a.commune = id)\n>\n> -> Index Scan using territory_department_pkey on\n> territory_department dept (cost=0.14..0.16 rows=1 width=37) (actual\n> time=0.003..0.004 rows=1 loops=7359)\n>\n> Index Cond: (commune.department = id)\n>\n> -> Index Scan using territory_region_pkey on territory_region reg\n> (cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1\n> loops=7359)\n>\n> Index Cond: (dept.region = id)\n> ->\n> Index Scan using administration_status_pkey on administration_status\n> status (cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003\n> rows=1 loops=7359)\n>\n> Index Cond: (id = contractline.status)\n> ->\n> GroupAggregate (cost=3552.48..4479.27 rows=27827 width=80) (actual\n> time=0.006..44.205 rows=27846 loops=7359)\n> Group\n> Key: sh.share_holder_partner\n> ->\n> Sort (cost=3552.48..3624.85 rows=28948 width=17) (actual\n> time=0.003..2.913 rows=28946 loops=7359)\n>\n> Sort Key: sh.share_holder_partner\n>\n> Sort Method: quicksort Memory: 3030kB\n>\n> -> Hash Join (cost=2.23..1407.26 rows=28948 width=17) (actual\n> time=0.024..12.296 rows=28946 loops=1)\n>\n> Hash Cond: (sh.company = sh_coop.id)\n>\n> -> Seq Scan on shareholder_share_holder sh (cost=0.00..1007.00\n> rows=28948 width=20) (actual time=0.007..5.495 rows=28946 loops=1)\n>\n> Filter: (nb_share > 0)\n>\n> Rows Removed by Filter: 1934\n>\n> -> Hash (cost=2.10..2.10 rows=10 width=13) (actual\n> time=0.009..0.009 rows=10 loops=1)\n>\n> Buckets: 1024 Batches: 1 Memory Usage: 9kB\n>\n> -> Seq Scan on contact_company sh_coop (cost=0.00..2.10\n> rows=10 width=13) (actual time=0.003..0.006 rows=10 loops=1)\n> -> Index Scan\n> using crm_origin_pkey on crm_origin co (cost=0.14..0.16 rows=1\n> width=19) (actual time=0.004..0.004 
rows=1 loops=7359)\n> Index Cond:\n> (id = ws.how_meet_enercoop)\n> -> Index Scan using\n> contact_contact_pkey on contact_contact mc (cost=0.42..0.65 rows=1\n> width=150) (actual time=0.007..0.008 rows=1 loops=7359)\n> Index Cond:\n> (partner.main_contact = id)\n> -> Index Scan using\n> contact_title_pkey on contact_title title (cost=0.14..0.16 rows=1\n> width=42) (actual time=0.003..0.003 rows=1 loops=7359)\n> Index Cond: (mc.title = id)\n> -> Index Scan using\n> contact_address_pkey on contact_address adr_contact (cost=0.43..0.70\n> rows=1 width=68) (actual time=0.005..0.005 rows=1 loops=7359)\n> Index Cond: (id = CASE WHEN\n> (CASE WHEN ((partner.person_category_select)::text =\n> 'naturalPerson'::text) THEN 'P'::text WHEN\n> ((partner.person_category_select)::text = 'legalPerson'::text) THEN\n> 'M'::text ELSE '?????'::text END = 'P'::text) THEN\n> COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\n> mc.address) END)\n> -> Index Scan using\n> contact_contact_address_contact_idx on contact_contact_address cca\n> (cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1\n> loops=7359)\n> Index Cond: (contact = mc.id)\n> -> Index Scan using\n> contact_contact_address_status_pkey on contact_contact_address_status\n> npai (cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000\n> rows=0 loops=7359)\n> Index Cond: (cca.contact_address_status = id)\n> -> Index Scan using\n> crm_crm_request_original_contract_line_idx on crm_crm_request\n> mesrequest (cost=0.28..0.29 rows=1 width=16) (actual\n> time=0.003..0.003 rows=0 loops=7359)\n> Index Cond: (original_contract_line = contractline.id)\n> -> Index Scan using sale_product_sub_family_pkey on\n> sale_product_sub_family mesproductsubfamily (cost=0.14..0.20 rows=1\n> width=62) (actual time=0.000..0.000 rows=0 loops=7359)\n> Index Cond: (id = mesrequest.product_sub_family)\n> Filter: (new_contract_ok IS TRUE)\n> Planning time: 21.106 ms\n> Execution time: 341275.027 ms\n> (118 lignes)\n>\n>\n> --\n> http://www.laurentmartelli.com // http://www.imprimart.fr\n\n\n\n-- \nhttp://www.laurentmartelli.com // http://www.imprimart.fr\n\n", "msg_date": "Tue, 23 Jan 2018 16:59:41 +0100", "msg_from": "Laurent Martelli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan" }, { "msg_contents": "In my opinion this is the Achilles heel of the postgres optimizer. Row\nestimates should never return 1, unless the estimate is provably <=1. This\nis particularly a problem with join estimates. A dumb fix for this is to\nchange clamp_join_row_est() to never return a value <2. This fixes most of\nmy observed poor plans. 
The real fix is to track uniqueness (or provable\nmax rows) along with the selectivity estimate.\n\nHere's the dumb fix.\n\nhttps://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1a\n\n\n\nOn Tue, Jan 23, 2018 at 7:59 AM, Laurent Martelli <[email protected]\n> wrote:\n\n> I've have a look to the plan with pgadmin, and I think the problem is\n> rather here :\n>\n> -> Sort (cost=4997.11..4997.11 rows=1 width=69) (actual\n> time=27.427..28.896 rows=7359 loops=1)\n> Sort Key: amendment.id\n> Sort Method: quicksort Memory: 1227kB\n> -> Nested Loop (cost=183.44..4997.10 rows=1 width=69) (actual\n> time=1.115..24.616 rows=7359 loops=1)\n> -> Nested Loop (cost=183.15..4996.59 rows=1 width=49)\n> (actual time=1.107..9.091 rows=7360 loops=1)\n> -> Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on\n> contact_partner businessprovider (cost=0.42..8.44 rows=1 width=13)\n> (actual time=0.010..0.010 rows=1 loops=1)\n> Index Cond: ((business_provider_code)::text =\n> 'BRZH'::text)\n> -> Bitmap Heap Scan on contract_contract_line\n> contractline (cost=182.73..4907.58 rows=8057 width=52) (actual\n> time=1.086..5.231 rows=7360 loops=1)\n> Recheck Cond: (business_provider_partner =\n> businessprovider.id)\n> Heap Blocks: exact=3586\n> -> Bitmap Index Scan on\n> contract_contract_line_business_provider_partner_idx\n> (cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\n> rows=7360 loops=1)\n> Index Cond: (business_provider_partner =\n> businessprovider.id)\n> -> Index Scan using contract_amendment_pkey on\n> contract_amendment amendment (cost=0.29..0.50 rows=1 width=28)\n> (actual time=0.001..0.002 rows=1 loops=7360)\n> Index Cond: (id = contractline.amendment)\n>\n> The bitmap scan on contract_contract_line is good (8057 vs 7360 rows),\n> and so is the index scan (1 row), but the JOIN with \"contact_partner\n> businessProvider\" should give the 8057 rows from the bitmap scan,\n> shouldn't it ?\n>\n>\n> 2018-01-23 16:38 GMT+01:00 Laurent Martelli <[email protected]>:\n> > 2018-01-23 16:18 GMT+01:00 Justin Pryzby <[email protected]>:\n> >> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:\n> >>\n> >>> Here is the default plan :\n> >>\n> >> Can you resend without line breaks or paste a link to explain.depesz?\n> >\n> > I hope it's better like that. 
I've attached it too, just in case.\n> >\n> >>\n> >> The problem appears to be here:\n> >>\n> >> -> Nested Loop Left Join (cost=32067.09..39197.85 rows=1 width=276)\n> (actual time=342.725..340775.031 rows=7359 loops=1)\n> >> Join Filter: (sh.share_holder_partner = partner.id)\n> >> Rows Removed by Join Filter: 204915707\n> >>\n> >> Justin\n> >\n> >\n> >\n> > QUERY PLAN\n> > ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------------\n> > Sort (cost=39200.76..39200.76 rows=1 width=1066) (actual\n> > time=341273.300..341274.244 rows=7359 loops=1)\n> > Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\n> > status.name, contractline.id\n> > Sort Method: quicksort Memory: 3930kB\n> > -> Nested Loop Left Join (cost=32069.19..39200.75 rows=1\n> > width=1066) (actual time=342.806..341203.151 rows=7359 loops=1)\n> > -> Nested Loop Left Join (cost=32069.05..39200.50 rows=1\n> > width=508) (actual time=342.784..341102.848 rows=7359 loops=1)\n> > -> Nested Loop Left Join (cost=32068.77..39200.20\n> > rows=1 width=500) (actual time=342.778..341070.310 rows=7359 loops=1)\n> > -> Nested Loop Left Join\n> > (cost=32068.64..39200.04 rows=1 width=507) (actual\n> > time=342.776..341058.256 rows=7359 loops=1)\n> > Join Filter: (cca.address = adr_contact.id)\n> > Rows Removed by Join Filter: 2254\n> > -> Nested Loop Left Join\n> > (cost=32068.22..39199.55 rows=1 width=515) (actual\n> > time=342.767..340997.058 rows=7359 loops=1)\n> > -> Nested Loop Left Join\n> > (cost=32067.79..39198.84 rows=1 width=447) (actual\n> > time=342.753..340932.286 rows=7359 loops=1)\n> > -> Nested Loop Left Join\n> > (cost=32067.65..39198.67 rows=1 width=421) (actual\n> > time=342.748..340896.132 rows=7359 loops=1)\n> > -> Nested Loop Left Join\n> > (cost=32067.23..39198.01 rows=1 width=279) (actual\n> > time=342.739..340821.987 rows=7359 loops=1)\n> > -> Nested Loop\n> > Left Join (cost=32067.09..39197.85 rows=1 width=276) (actual\n> > time=342.725..340775.031 rows=7359 loops=1)\n> > Join Filter:\n> > (sh.share_holder_partner = partner.id)\n> > Rows Removed\n> > by Join Filter: 204915707\n> > -> Nested\n> > Loop Left Join (cost=28514.61..34092.46 rows=1 width=244) (actual\n> > time=287.323..610.192 rows=7359 loops=1)\n> > ->\n> > Nested Loop Left Join (cost=28514.47..34092.30 rows=1 width=239)\n> > (actual time=287.318..573.234 rows=7359 loops=1)\n> >\n> > -> Hash Right Join (cost=28513.48..34090.65 rows=1 width=159)\n> > (actual time=287.293..379.564 rows=7359 loops=1)\n> >\n> > Hash Cond: (ws.contract_line = contractline.id)\n> >\n> > -> Seq Scan on shareholder_web_subscription ws\n> > (cost=0.00..5378.84 rows=52884 width=24) (actual time=0.006..12.307\n> > rows=52884 loops=1)\n> >\n> > -> Hash (cost=28513.47..28513.47 rows=1 width=143) (actual\n> > time=287.243..287.243 rows=7359 loops=1)\n> >\n> > Buckets: 8192 (originally 1024) Batches: 1 (originally 1)\n> > Memory Usage: 1173kB\n> >\n> > -> Nested Loop Left Join (cost=17456.16..28513.47 rows=1\n> > width=143) (actual time=85.005..284.689 rows=7359 loops=1)\n> >\n> > -> Nested Loop (cost=17456.03..28513.31 rows=1\n> > width=148) (actual time=85.000..276.599 rows=7359 loops=1)\n> >\n> > -> Nested Loop Left Join\n> > 
(cost=17455.73..28512.84 rows=1 width=148) (actual\n> > time=84.993..261.954 rows=7359 loops=1)\n> >\n> > -> Nested Loop (cost=17455.60..28512.67\n> > rows=1 width=140) (actual time=84.989..253.715 rows=7359 loops=1)\n> >\n> > -> Nested Loop\n> > (cost=17455.18..28511.93 rows=1 width=93) (actual time=84.981..230.977\n> > rows=7359 loops=1)\n> >\n> > -> Merge Right Join\n> > (cost=17454.89..28511.52 rows=1 width=93) (actual time=84.974..211.200\n> > rows=7359 loops=1)\n> >\n> > Merge Cond:\n> > (subscribed_power.amendment = amendment.id)\n> >\n> > -> GroupAggregate\n> > (cost=12457.78..22574.03 rows=75229 width=168) (actual\n> > time=57.500..175.674 rows=83432 loops=1)\n> >\n> > Group Key:\n> > subscribed_power.amendment\n> >\n> > -> Merge Join\n> > (cost=12457.78..20764.08 rows=173917 width=12) (actual\n> > time=57.479..129.530 rows=87938 loops=1)\n> >\n> > Merge Cond:\n> > (subscribed_power.amendment = amendment_1.id)\n> >\n> > -> Index\n> > Scan using contract_subscribed_power_amendment_idx on\n> > contract_subscribed_power subscribed_power (cost=0.42..13523.09\n> > rows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)\n> >\n> > -> Sort\n> > (cost=12457.36..12666.43 rows=83629 width=8) (actual\n> > time=57.467..67.071 rows=88019 loops=1)\n> >\n> > Sort\n> > Key: amendment_1.id\n> >\n> > Sort\n> > Method: quicksort Memory: 6988kB\n> >\n> > ->\n> > Hash Join (cost=10.21..5619.97 rows=83629 width=8) (actual\n> > time=0.112..40.965 rows=83532 loops=1)\n> >\n> >\n> > Hash Cond: (amendment_1.pricing = pricing.id)\n> >\n> >\n> > -> Seq Scan on contract_amendment amendment_1 (cost=0.00..4460.29\n> > rows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)\n> >\n> >\n> > -> Hash (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095\n> > rows=141 loops=1)\n> >\n> >\n> > Buckets: 1024 Batches: 1 Memory Usage: 14kB\n> >\n> >\n> > -> Hash Join (cost=1.07..8.43 rows=142 width=8) (actual\n> > time=0.012..0.078 rows=141 loops=1)\n> >\n> >\n> > Hash Cond: (pricing.elec_range = elec_range.id)\n> >\n> >\n> > -> Seq Scan on pricing_pricing pricing (cost=0.00..5.42\n> > rows=142 width=16) (actual time=0.003..0.015 rows=142 loops=1)\n> >\n> >\n> > -> Hash (cost=1.03..1.03 rows=3 width=8) (actual\n> > time=0.006..0.006 rows=3 loops=1)\n> >\n> >\n> > Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> >\n> >\n> > -> Seq Scan on fluid_elec_range elec_range\n> > (cost=0.00..1.03 rows=3 width=8) (actual time=0.003..0.005 rows=3\n> > loops=1)\n> >\n> > -> Sort\n> > (cost=4997.11..4997.11 rows=1 width=69) (actual time=27.427..28.896\n> > rows=7359 loops=1)\n> >\n> > Sort Key:\n> > amendment.id\n> >\n> > Sort Method:\n> > quicksort Memory: 1227kB\n> >\n> > -> Nested Loop\n> > (cost=183.44..4997.10 rows=1 width=69) (actual time=1.115..24.616\n> > rows=7359 loops=1)\n> >\n> > -> Nested\n> > Loop (cost=183.15..4996.59 rows=1 width=49) (actual time=1.107..9.091\n> > rows=7360 loops=1)\n> >\n> > ->\n> > Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner\n> > businessprovider (cost=0.42..8.44 rows=1 width=13) (actual\n> > time=0.010..0.010 rows=1 loops=1)\n> >\n> >\n> > Index Cond: ((business_provider_code)::text = 'BRZH'::text)\n> >\n> > ->\n> > Bitmap Heap Scan on contract_contract_line contractline\n> > (cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231\n> > rows=7360 loops=1)\n> >\n> >\n> > Recheck Cond: (business_provider_partner = businessprovider.id)\n> >\n> >\n> > Heap Blocks: exact=3586\n> >\n> >\n> > -> Bitmap Index Scan on\n> > 
contract_contract_line_business_provider_partner_idx\n> > (cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\n> > rows=7360 loops=1)\n> >\n> >\n> > Index Cond: (business_provider_partner = businessprovider.id)\n> >\n> > -> Index\n> > Scan using contract_amendment_pkey on contract_amendment amendment\n> > (cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002 rows=1\n> > loops=7360)\n> >\n> > Index\n> > Cond: (id = contractline.amendment)\n> >\n> > -> Index Scan using\n> > contract_contract_pkey on contract_contract contract (cost=0.29..0.40\n> > rows=1 width=24) (actual time=0.002..0.002 rows=1 loops=7359)\n> >\n> > Index Cond: (id =\n> > contractline.contract)\n> >\n> > -> Index Scan using\n> > contact_partner_pkey on contact_partner partner (cost=0.42..0.74\n> > rows=1 width=55) (actual time=0.002..0.002 rows=1 loops=7359)\n> >\n> > Index Cond: (id =\n> > contract.main_client_partner)\n> >\n> > -> Index Scan using\n> > contact_client_nature_pkey on contact_client_nature clientnature\n> > (cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001 rows=1\n> > loops=7359)\n> >\n> > Index Cond: (id =\n> > partner.client_nature)\n> >\n> > -> Index Scan using territory_mpt_pkey on\n> > territory_mpt mpt (cost=0.29..0.46 rows=1 width=16) (actual\n> > time=0.001..0.001 rows=1 loops=7359)\n> >\n> > Index Cond: (id = contractline.mpt)\n> >\n> > -> Index Scan using contract_user_segment_pkey on\n> > contract_user_segment usersegment (cost=0.14..0.15 rows=1 width=11)\n> > (actual time=0.001..0.001 rows=1 loops=7359)\n> >\n> > Index Cond: (id = amendment.user_segment)\n> >\n> > -> Nested Loop Left Join (cost=0.99..1.64 rows=1 width=96) (actual\n> > time=0.021..0.025 rows=1 loops=7359)\n> >\n> > -> Nested Loop Left Join (cost=0.85..1.35 rows=1 width=89)\n> > (actual time=0.017..0.020 rows=1 loops=7359)\n> >\n> > -> Nested Loop Left Join (cost=0.71..1.18 rows=1 width=76)\n> > (actual time=0.013..0.014 rows=1 loops=7359)\n> >\n> > -> Index Scan using contact_address_pkey on\n> > contact_address a (cost=0.42..0.85 rows=1 width=84) (actual\n> > time=0.005..0.006 rows=1 loops=7359)\n> >\n> > Index Cond: (mpt.address = id)\n> >\n> > -> Index Scan using territory_commune_pkey on\n> > territory_commune commune (cost=0.29..0.32 rows=1 width=16) (actual\n> > time=0.005..0.006 rows=1 loops=7359)\n> >\n> > Index Cond: (a.commune = id)\n> >\n> > -> Index Scan using territory_department_pkey on\n> > territory_department dept (cost=0.14..0.16 rows=1 width=37) (actual\n> > time=0.003..0.004 rows=1 loops=7359)\n> >\n> > Index Cond: (commune.department = id)\n> >\n> > -> Index Scan using territory_region_pkey on territory_region reg\n> > (cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1\n> > loops=7359)\n> >\n> > Index Cond: (dept.region = id)\n> > ->\n> > Index Scan using administration_status_pkey on administration_status\n> > status (cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003\n> > rows=1 loops=7359)\n> >\n> > Index Cond: (id = contractline.status)\n> > ->\n> > GroupAggregate (cost=3552.48..4479.27 rows=27827 width=80) (actual\n> > time=0.006..44.205 rows=27846 loops=7359)\n> > Group\n> > Key: sh.share_holder_partner\n> > ->\n> > Sort (cost=3552.48..3624.85 rows=28948 width=17) (actual\n> > time=0.003..2.913 rows=28946 loops=7359)\n> >\n> > Sort Key: sh.share_holder_partner\n> >\n> > Sort Method: quicksort Memory: 3030kB\n> >\n> > -> Hash Join (cost=2.23..1407.26 rows=28948 width=17) (actual\n> > time=0.024..12.296 rows=28946 loops=1)\n> >\n> > Hash Cond: 
(sh.company = sh_coop.id)\n> >\n> > -> Seq Scan on shareholder_share_holder sh (cost=0.00..1007.00\n> > rows=28948 width=20) (actual time=0.007..5.495 rows=28946 loops=1)\n> >\n> > Filter: (nb_share > 0)\n> >\n> > Rows Removed by Filter: 1934\n> >\n> > -> Hash (cost=2.10..2.10 rows=10 width=13) (actual\n> > time=0.009..0.009 rows=10 loops=1)\n> >\n> > Buckets: 1024 Batches: 1 Memory Usage: 9kB\n> >\n> > -> Seq Scan on contact_company sh_coop (cost=0.00..2.10\n> > rows=10 width=13) (actual time=0.003..0.006 rows=10 loops=1)\n> > -> Index Scan\n> > using crm_origin_pkey on crm_origin co (cost=0.14..0.16 rows=1\n> > width=19) (actual time=0.004..0.004 rows=1 loops=7359)\n> > Index Cond:\n> > (id = ws.how_meet_enercoop)\n> > -> Index Scan using\n> > contact_contact_pkey on contact_contact mc (cost=0.42..0.65 rows=1\n> > width=150) (actual time=0.007..0.008 rows=1 loops=7359)\n> > Index Cond:\n> > (partner.main_contact = id)\n> > -> Index Scan using\n> > contact_title_pkey on contact_title title (cost=0.14..0.16 rows=1\n> > width=42) (actual time=0.003..0.003 rows=1 loops=7359)\n> > Index Cond: (mc.title = id)\n> > -> Index Scan using\n> > contact_address_pkey on contact_address adr_contact (cost=0.43..0.70\n> > rows=1 width=68) (actual time=0.005..0.005 rows=1 loops=7359)\n> > Index Cond: (id = CASE WHEN\n> > (CASE WHEN ((partner.person_category_select)::text =\n> > 'naturalPerson'::text) THEN 'P'::text WHEN\n> > ((partner.person_category_select)::text = 'legalPerson'::text) THEN\n> > 'M'::text ELSE '?????'::text END = 'P'::text) THEN\n> > COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\n> > mc.address) END)\n> > -> Index Scan using\n> > contact_contact_address_contact_idx on contact_contact_address cca\n> > (cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1\n> > loops=7359)\n> > Index Cond: (contact = mc.id)\n> > -> Index Scan using\n> > contact_contact_address_status_pkey on contact_contact_address_status\n> > npai (cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000\n> > rows=0 loops=7359)\n> > Index Cond: (cca.contact_address_status = id)\n> > -> Index Scan using\n> > crm_crm_request_original_contract_line_idx on crm_crm_request\n> > mesrequest (cost=0.28..0.29 rows=1 width=16) (actual\n> > time=0.003..0.003 rows=0 loops=7359)\n> > Index Cond: (original_contract_line =\n> contractline.id)\n> > -> Index Scan using sale_product_sub_family_pkey on\n> > sale_product_sub_family mesproductsubfamily (cost=0.14..0.20 rows=1\n> > width=62) (actual time=0.000..0.000 rows=0 loops=7359)\n> > Index Cond: (id = mesrequest.product_sub_family)\n> > Filter: (new_contract_ok IS TRUE)\n> > Planning time: 21.106 ms\n> > Execution time: 341275.027 ms\n> > (118 lignes)\n> >\n> >\n> > --\n> > http://www.laurentmartelli.com // http://www.imprimart.fr\n>\n>\n>\n> --\n> http://www.laurentmartelli.com // http://www.imprimart.fr\n>\n>\n\nIn my opinion this is the Achilles heel of the postgres optimizer.  Row estimates should never return 1, unless the estimate is provably <=1.  This is particularly a problem with join estimates.  A dumb fix for this is to change clamp_join_row_est() to never return a value <2.  This fixes most of my observed poor plans.  
The real fix is to track uniqueness (or provable max rows) along with the selectivity estimate.Here's the dumb fix.https://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1aOn Tue, Jan 23, 2018 at 7:59 AM, Laurent Martelli <[email protected]> wrote:I've have a look to the plan with pgadmin, and I think the problem is\nrather here :\n\n->  Sort  (cost=4997.11..4997.11 rows=1 width=69) (actual\ntime=27.427..28.896 rows=7359 loops=1)\n      Sort Key: amendment.id\n      Sort Method: quicksort  Memory: 1227kB\n      ->  Nested Loop  (cost=183.44..4997.10 rows=1 width=69) (actual\ntime=1.115..24.616 rows=7359 loops=1)\n            ->  Nested Loop  (cost=183.15..4996.59 rows=1 width=49)\n(actual time=1.107..9.091 rows=7360 loops=1)\n                  ->  Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on\ncontact_partner businessprovider  (cost=0.42..8.44 rows=1 width=13)\n(actual time=0.010..0.010 rows=1 loops=1)\n                        Index Cond: ((business_provider_code)::text =\n'BRZH'::text)\n                  ->  Bitmap Heap Scan on contract_contract_line\ncontractline  (cost=182.73..4907.58 rows=8057 width=52) (actual\ntime=1.086..5.231 rows=7360 loops=1)\n                        Recheck Cond: (business_provider_partner =\nbusinessprovider.id)\n                        Heap Blocks: exact=3586\n                        ->  Bitmap Index Scan on\ncontract_contract_line_business_provider_partner_idx\n(cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\nrows=7360 loops=1)\n                              Index Cond: (business_provider_partner =\nbusinessprovider.id)\n            ->  Index Scan using contract_amendment_pkey on\ncontract_amendment amendment  (cost=0.29..0.50 rows=1 width=28)\n(actual time=0.001..0.002 rows=1 loops=7360)\n                  Index Cond: (id = contractline.amendment)\n\nThe bitmap scan on contract_contract_line is good (8057 vs 7360 rows),\nand so is the index scan (1 row), but the JOIN with \"contact_partner\nbusinessProvider\" should give the 8057 rows from the bitmap scan,\nshouldn't it ?\n\n\n2018-01-23 16:38 GMT+01:00 Laurent Martelli <[email protected]>:\n> 2018-01-23 16:18 GMT+01:00 Justin Pryzby <[email protected]>:\n>> On Tue, Jan 23, 2018 at 01:03:49PM +0100, Laurent Martelli wrote:\n>>\n>>> Here is the default plan :\n>>\n>> Can you resend without line breaks or paste a link to explain.depesz?\n>\n> I hope it's better like that. 
I've attached it too, just in case.\n>\n>>\n>> The problem appears to be here:\n>>\n>> ->  Nested Loop Left Join  (cost=32067.09..39197.85 rows=1 width=276) (actual time=342.725..340775.031 rows=7359 loops=1)\n>> Join Filter: (sh.share_holder_partner = partner.id)\n>> Rows Removed by Join Filter: 204915707\n>>\n>> Justin\n>\n>\n>\n>                                     QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Sort  (cost=39200.76..39200.76 rows=1 width=1066) (actual\n> time=341273.300..341274.244 rows=7359 loops=1)\n>    Sort Key: ((array_agg(subscribed_power.subscribed_power))[1]) DESC,\n> status.name, contractline.id\n>    Sort Method: quicksort  Memory: 3930kB\n>    ->  Nested Loop Left Join  (cost=32069.19..39200.75 rows=1\n> width=1066) (actual time=342.806..341203.151 rows=7359 loops=1)\n>          ->  Nested Loop Left Join  (cost=32069.05..39200.50 rows=1\n> width=508) (actual time=342.784..341102.848 rows=7359 loops=1)\n>                ->  Nested Loop Left Join  (cost=32068.77..39200.20\n> rows=1 width=500) (actual time=342.778..341070.310 rows=7359 loops=1)\n>                      ->  Nested Loop Left Join\n> (cost=32068.64..39200.04 rows=1 width=507) (actual\n> time=342.776..341058.256 rows=7359 loops=1)\n>                            Join Filter: (cca.address = adr_contact.id)\n>                            Rows Removed by Join Filter: 2254\n>                            ->  Nested Loop Left Join\n> (cost=32068.22..39199.55 rows=1 width=515) (actual\n> time=342.767..340997.058 rows=7359 loops=1)\n>                                  ->  Nested Loop Left Join\n> (cost=32067.79..39198.84 rows=1 width=447) (actual\n> time=342.753..340932.286 rows=7359 loops=1)\n>                                        ->  Nested Loop Left Join\n> (cost=32067.65..39198.67 rows=1 width=421) (actual\n> time=342.748..340896.132 rows=7359 loops=1)\n>                                              ->  Nested Loop Left Join\n>  (cost=32067.23..39198.01 rows=1 width=279) (actual\n> time=342.739..340821.987 rows=7359 loops=1)\n>                                                    ->  Nested Loop\n> Left Join  (cost=32067.09..39197.85 rows=1 width=276) (actual\n> time=342.725..340775.031 rows=7359 loops=1)\n>                                                          Join Filter:\n> (sh.share_holder_partner = partner.id)\n>                                                          Rows Removed\n> by Join Filter: 204915707\n>                                                          ->  Nested\n> Loop Left Join  (cost=28514.61..34092.46 rows=1 width=244) (actual\n> time=287.323..610.192 rows=7359 loops=1)\n>                                                                ->\n> Nested Loop Left Join  (cost=28514.47..34092.30 rows=1 width=239)\n> (actual time=287.318..573.234 rows=7359 loops=1)\n>\n> ->  Hash Right Join  (cost=28513.48..34090.65 rows=1 width=159)\n> (actual time=287.293..379.564 rows=7359 loops=1)\n>\n>     Hash Cond: (ws.contract_line = contractline.id)\n>\n>     ->  Seq Scan on shareholder_web_subscription ws\n> (cost=0.00..5378.84 rows=52884 width=24) (actual time=0.006..12.307\n> rows=52884 loops=1)\n>\n>     ->  Hash  (cost=28513.47..28513.47 rows=1 width=143) 
(actual\n> time=287.243..287.243 rows=7359 loops=1)\n>\n>           Buckets: 8192 (originally 1024)  Batches: 1 (originally 1)\n> Memory Usage: 1173kB\n>\n>           ->  Nested Loop Left Join  (cost=17456.16..28513.47 rows=1\n> width=143) (actual time=85.005..284.689 rows=7359 loops=1)\n>\n>                 ->  Nested Loop  (cost=17456.03..28513.31 rows=1\n> width=148) (actual time=85.000..276.599 rows=7359 loops=1)\n>\n>                       ->  Nested Loop Left Join\n> (cost=17455.73..28512.84 rows=1 width=148) (actual\n> time=84.993..261.954 rows=7359 loops=1)\n>\n>                             ->  Nested Loop  (cost=17455.60..28512.67\n> rows=1 width=140) (actual time=84.989..253.715 rows=7359 loops=1)\n>\n>                                   ->  Nested Loop\n> (cost=17455.18..28511.93 rows=1 width=93) (actual time=84.981..230.977\n> rows=7359 loops=1)\n>\n>                                         ->  Merge Right Join\n> (cost=17454.89..28511.52 rows=1 width=93) (actual time=84.974..211.200\n> rows=7359 loops=1)\n>\n>                                               Merge Cond:\n> (subscribed_power.amendment = amendment.id)\n>\n>                                               ->  GroupAggregate\n> (cost=12457.78..22574.03 rows=75229 width=168) (actual\n> time=57.500..175.674 rows=83432 loops=1)\n>\n>                                                     Group Key:\n> subscribed_power.amendment\n>\n>                                                     ->  Merge Join\n> (cost=12457.78..20764.08 rows=173917 width=12) (actual\n> time=57.479..129.530 rows=87938 loops=1)\n>\n>                                                           Merge Cond:\n> (subscribed_power.amendment = amendment_1.id)\n>\n>                                                           ->  Index\n> Scan using contract_subscribed_power_amendment_idx on\n> contract_subscribed_power subscribed_power  (cost=0.42..13523.09\n> rows=173917 width=12) (actual time=0.009..33.704 rows=87963 loops=1)\n>\n>                                                           ->  Sort\n> (cost=12457.36..12666.43 rows=83629 width=8) (actual\n> time=57.467..67.071 rows=88019 loops=1)\n>\n>                                                                 Sort\n> Key: amendment_1.id\n>\n>                                                                 Sort\n> Method: quicksort  Memory: 6988kB\n>\n>                                                                 ->\n> Hash Join  (cost=10.21..5619.97 rows=83629 width=8) (actual\n> time=0.112..40.965 rows=83532 loops=1)\n>\n>\n> Hash Cond: (amendment_1.pricing = pricing.id)\n>\n>\n> ->  Seq Scan on contract_amendment amendment_1  (cost=0.00..4460.29\n> rows=83629 width=16) (actual time=0.004..6.988 rows=83629 loops=1)\n>\n>\n> ->  Hash  (cost=8.43..8.43 rows=142 width=8) (actual time=0.095..0.095\n> rows=141 loops=1)\n>\n>\n>      Buckets: 1024  Batches: 1  Memory Usage: 14kB\n>\n>\n>      ->  Hash Join  (cost=1.07..8.43 rows=142 width=8) (actual\n> time=0.012..0.078 rows=141 loops=1)\n>\n>\n>            Hash Cond: (pricing.elec_range = elec_range.id)\n>\n>\n>            ->  Seq Scan on pricing_pricing pricing  (cost=0.00..5.42\n> rows=142 width=16) (actual time=0.003..0.015 rows=142 loops=1)\n>\n>\n>            ->  Hash  (cost=1.03..1.03 rows=3 width=8) (actual\n> time=0.006..0.006 rows=3 loops=1)\n>\n>\n>                  Buckets: 1024  Batches: 1  Memory Usage: 9kB\n>\n>\n>                  ->  Seq Scan on fluid_elec_range elec_range\n> (cost=0.00..1.03 rows=3 width=8) (actual time=0.003..0.005 rows=3\n> 
loops=1)\n>\n>                                               ->  Sort\n> (cost=4997.11..4997.11 rows=1 width=69) (actual time=27.427..28.896\n> rows=7359 loops=1)\n>\n>                                                     Sort Key:\n> amendment.id\n>\n>                                                     Sort Method:\n> quicksort  Memory: 1227kB\n>\n>                                                     ->  Nested Loop\n> (cost=183.44..4997.10 rows=1 width=69) (actual time=1.115..24.616\n> rows=7359 loops=1)\n>\n>                                                           ->  Nested\n> Loop  (cost=183.15..4996.59 rows=1 width=49) (actual time=1.107..9.091\n> rows=7360 loops=1)\n>\n>                                                                 ->\n> Index Scan using uk_3b1y5vw9gmh7u3jj8aa2uy0b9 on contact_partner\n> businessprovider  (cost=0.42..8.44 rows=1 width=13) (actual\n> time=0.010..0.010 rows=1 loops=1)\n>\n>\n> Index Cond: ((business_provider_code)::text = 'BRZH'::text)\n>\n>                                                                 ->\n> Bitmap Heap Scan on contract_contract_line contractline\n> (cost=182.73..4907.58 rows=8057 width=52) (actual time=1.086..5.231\n> rows=7360 loops=1)\n>\n>\n> Recheck Cond: (business_provider_partner = businessprovider.id)\n>\n>\n> Heap Blocks: exact=3586\n>\n>\n> ->  Bitmap Index Scan on\n> contract_contract_line_business_provider_partner_idx\n> (cost=0.00..180.72 rows=8057 width=0) (actual time=0.655..0.655\n> rows=7360 loops=1)\n>\n>\n>      Index Cond: (business_provider_partner = businessprovider.id)\n>\n>                                                           ->  Index\n> Scan using contract_amendment_pkey on contract_amendment amendment\n> (cost=0.29..0.50 rows=1 width=28) (actual time=0.001..0.002 rows=1\n> loops=7360)\n>\n>                                                                 Index\n> Cond: (id = contractline.amendment)\n>\n>                                         ->  Index Scan using\n> contract_contract_pkey on contract_contract contract  (cost=0.29..0.40\n> rows=1 width=24) (actual time=0.002..0.002 rows=1 loops=7359)\n>\n>                                               Index Cond: (id =\n> contractline.contract)\n>\n>                                   ->  Index Scan using\n> contact_partner_pkey on contact_partner partner  (cost=0.42..0.74\n> rows=1 width=55) (actual time=0.002..0.002 rows=1 loops=7359)\n>\n>                                         Index Cond: (id =\n> contract.main_client_partner)\n>\n>                             ->  Index Scan using\n> contact_client_nature_pkey on contact_client_nature clientnature\n> (cost=0.14..0.15 rows=1 width=24) (actual time=0.001..0.001 rows=1\n> loops=7359)\n>\n>                                   Index Cond: (id =\n> partner.client_nature)\n>\n>                       ->  Index Scan using territory_mpt_pkey on\n> territory_mpt mpt  (cost=0.29..0.46 rows=1 width=16) (actual\n> time=0.001..0.001 rows=1 loops=7359)\n>\n>                             Index Cond: (id = contractline.mpt)\n>\n>                 ->  Index Scan using contract_user_segment_pkey on\n> contract_user_segment usersegment  (cost=0.14..0.15 rows=1 width=11)\n> (actual time=0.001..0.001 rows=1 loops=7359)\n>\n>                       Index Cond: (id = amendment.user_segment)\n>\n> ->  Nested Loop Left Join  (cost=0.99..1.64 rows=1 width=96) (actual\n> time=0.021..0.025 rows=1 loops=7359)\n>\n>     ->  Nested Loop Left Join  (cost=0.85..1.35 rows=1 width=89)\n> (actual time=0.017..0.020 rows=1 
loops=7359)\n>\n>           ->  Nested Loop Left Join  (cost=0.71..1.18 rows=1 width=76)\n> (actual time=0.013..0.014 rows=1 loops=7359)\n>\n>                 ->  Index Scan using contact_address_pkey on\n> contact_address a  (cost=0.42..0.85 rows=1 width=84) (actual\n> time=0.005..0.006 rows=1 loops=7359)\n>\n>                       Index Cond: (mpt.address = id)\n>\n>                 ->  Index Scan using territory_commune_pkey on\n> territory_commune commune  (cost=0.29..0.32 rows=1 width=16) (actual\n> time=0.005..0.006 rows=1 loops=7359)\n>\n>                       Index Cond: (a.commune = id)\n>\n>           ->  Index Scan using territory_department_pkey on\n> territory_department dept  (cost=0.14..0.16 rows=1 width=37) (actual\n> time=0.003..0.004 rows=1 loops=7359)\n>\n>                 Index Cond: (commune.department = id)\n>\n>     ->  Index Scan using territory_region_pkey on territory_region reg\n>  (cost=0.14..0.27 rows=1 width=23) (actual time=0.003..0.003 rows=1\n> loops=7359)\n>\n>           Index Cond: (dept.region = id)\n>                                                                ->\n> Index Scan using administration_status_pkey on administration_status\n> status  (cost=0.14..0.16 rows=1 width=21) (actual time=0.003..0.003\n> rows=1 loops=7359)\n>\n> Index Cond: (id = contractline.status)\n>                                                          ->\n> GroupAggregate  (cost=3552.48..4479.27 rows=27827 width=80) (actual\n> time=0.006..44.205 rows=27846 loops=7359)\n>                                                                Group\n> Key: sh.share_holder_partner\n>                                                                ->\n> Sort  (cost=3552.48..3624.85 rows=28948 width=17) (actual\n> time=0.003..2.913 rows=28946 loops=7359)\n>\n> Sort Key: sh.share_holder_partner\n>\n> Sort Method: quicksort  Memory: 3030kB\n>\n> ->  Hash Join  (cost=2.23..1407.26 rows=28948 width=17) (actual\n> time=0.024..12.296 rows=28946 loops=1)\n>\n>     Hash Cond: (sh.company = sh_coop.id)\n>\n>     ->  Seq Scan on shareholder_share_holder sh  (cost=0.00..1007.00\n> rows=28948 width=20) (actual time=0.007..5.495 rows=28946 loops=1)\n>\n>           Filter: (nb_share > 0)\n>\n>           Rows Removed by Filter: 1934\n>\n>     ->  Hash  (cost=2.10..2.10 rows=10 width=13) (actual\n> time=0.009..0.009 rows=10 loops=1)\n>\n>           Buckets: 1024  Batches: 1  Memory Usage: 9kB\n>\n>           ->  Seq Scan on contact_company sh_coop  (cost=0.00..2.10\n> rows=10 width=13) (actual time=0.003..0.006 rows=10 loops=1)\n>                                                    ->  Index Scan\n> using crm_origin_pkey on crm_origin co  (cost=0.14..0.16 rows=1\n> width=19) (actual time=0.004..0.004 rows=1 loops=7359)\n>                                                          Index Cond:\n> (id = ws.how_meet_enercoop)\n>                                              ->  Index Scan using\n> contact_contact_pkey on contact_contact mc  (cost=0.42..0.65 rows=1\n> width=150) (actual time=0.007..0.008 rows=1 loops=7359)\n>                                                    Index Cond:\n> (partner.main_contact = id)\n>                                        ->  Index Scan using\n> contact_title_pkey on contact_title title  (cost=0.14..0.16 rows=1\n> width=42) (actual time=0.003..0.003 rows=1 loops=7359)\n>                                              Index Cond: (mc.title = id)\n>                                  ->  Index Scan using\n> contact_address_pkey on contact_address adr_contact  (cost=0.43..0.70\n> 
rows=1 width=68) (actual time=0.005..0.005 rows=1 loops=7359)\n>                                        Index Cond: (id = CASE WHEN\n> (CASE WHEN ((partner.person_category_select)::text =\n> 'naturalPerson'::text) THEN 'P'::text WHEN\n> ((partner.person_category_select)::text = 'legalPerson'::text) THEN\n> 'M'::text ELSE '?????'::text END = 'P'::text) THEN\n> COALESCE(mc.address, mc.address_pro) ELSE COALESCE(mc.address_pro,\n> mc.address) END)\n>                            ->  Index Scan using\n> contact_contact_address_contact_idx on contact_contact_address cca\n> (cost=0.42..0.48 rows=1 width=24) (actual time=0.006..0.006 rows=1\n> loops=7359)\n>                                  Index Cond: (contact = mc.id)\n>                      ->  Index Scan using\n> contact_contact_address_status_pkey on contact_contact_address_status\n> npai  (cost=0.13..0.15 rows=1 width=9) (actual time=0.000..0.000\n> rows=0 loops=7359)\n>                            Index Cond: (cca.contact_address_status = id)\n>                ->  Index Scan using\n> crm_crm_request_original_contract_line_idx on crm_crm_request\n> mesrequest  (cost=0.28..0.29 rows=1 width=16) (actual\n> time=0.003..0.003 rows=0 loops=7359)\n>                      Index Cond: (original_contract_line = contractline.id)\n>          ->  Index Scan using sale_product_sub_family_pkey on\n> sale_product_sub_family mesproductsubfamily  (cost=0.14..0.20 rows=1\n> width=62) (actual time=0.000..0.000 rows=0 loops=7359)\n>                Index Cond: (id = mesrequest.product_sub_family)\n>                Filter: (new_contract_ok IS TRUE)\n>  Planning time: 21.106 ms\n>  Execution time: 341275.027 ms\n> (118 lignes)\n>\n>\n> --\n> http://www.laurentmartelli.com    //    http://www.imprimart.fr\n\n\n\n--\nhttp://www.laurentmartelli.com    //    http://www.imprimart.fr", "msg_date": "Tue, 23 Jan 2018 09:18:51 -0800", "msg_from": "Matthew Bellew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan" } ]
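A quick way to confirm that the rows=1 join misestimate discussed above (rather than missing indexes) is what drives the ~340-second plan is to re-run the query with nested loops disabled for a single session and compare the two plans. This is a diagnostic sketch only, not a production setting, and the SELECT is a placeholder for the original query posted in the thread:

    BEGIN;
    -- force hash/merge joins in place of the misestimated nested loops
    SET LOCAL enable_nestloop = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ...;   -- the full query from earlier in the thread goes here
    ROLLBACK;

If the plan without nested loops finishes in seconds, the slowness comes from the estimate feeding the join strategy, which is exactly the point of the clamp_join_row_est() suggestion above.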
[ { "msg_contents": "Hi,\n\nI have to provide a summary of how much spaces is used in the large objects table based on a group by condition.\nI would expect an index only scan on the large object table, but a full seq scan that last for hours is performed.\n\nBigSql distribution\nPostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, Built by MSYS2 project) 4.9.2, 64-bit\nWin Server 2012 R2, 8GB RAM\npg server mem settings:\neffective_cache_size | 6GB\nmaintenance_work_mem | 819MB\nrandom_page_cost | 2\nshared_buffers | 2GB\nwork_mem | 32MB\n\nTestcase 1: Here is a simplified query, timing and the explain plan:\nSELECT ima.sit_cod, COUNT(*)*2048*4/3\n FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nGROUP BY ima.sit_cod;\nTime: 343997.661 ms (about 6 min) ran on a small DB, took 4hrs on a ~1TB table\n\nHashAggregate (cost=2452378.86..2452379.01 rows=15 width=14)\n Group Key: ima.sit_cod\n -> Hash Join (cost=1460.40..2418245.74 rows=6826625 width=6)\n Hash Cond: (pg_largeobject.loid = ima.val)\n---------> Seq Scan on pg_largeobject (cost=0.00..2322919.25 rows=6826625 width=4)\n -> Hash (cost=1114.62..1114.62 rows=27662 width=10)\n -> Seq Scan on images ima (cost=0.00..1114.62 rows=27662 width=10)\n\n\nTestcase 2: A simple count(*) for a specific group (small group) perform an Index Only Scan and last few secs.\nSELECT COUNT(*)\n FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nWHERE sit_cod='W8213';\ncount\n-------\n 8599\nTime: 12.090 ms\n\nAggregate (cost=11930.30..11930.31 rows=1 width=8)\n -> Nested Loop (cost=2.87..11918.58 rows=4689 width=0)\n -> Bitmap Heap Scan on images ima (cost=2.43..37.81 rows=19 width=4)\n Recheck Cond: ((sit_cod)::text = 'W8213'::text)\n -> Bitmap Index Scan on ima_pk (cost=0.00..2.43 rows=19 width=0)\n Index Cond: ((sit_cod)::text = 'W8213'::text)\n---------> Index Only Scan using pg_largeobject_loid_pn_index on pg_largeobject (cost=0.43..621.22 rows=408 width=4)\n Index Cond: (loid = ima.val)\n\n\nTestcase 3: However, larger group still perform full seq scan\nSELECT COUNT(*)\n FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nWHERE sit_cod='W8317';\n count\n---------\n2209704\nTime: 345638.118 ms (about 6 min)\n\nAggregate (cost=2369363.01..2369363.02 rows=1 width=8)\n -> Hash Join (cost=1125.63..2365419.35 rows=1577463 width=0)\n Hash Cond: (pg_largeobject.loid = ima.val)\n---------> Seq Scan on pg_largeobject (cost=0.00..2322919.25 rows=6826625 width=4)\n -> Hash (cost=1045.73..1045.73 rows=6392 width=4)\n -> Bitmap Heap Scan on images ima (cost=127.83..1045.73 rows=6392 width=4)\n Recheck Cond: ((sit_cod)::text = 'W8317'::text)\n -> Bitmap Index Scan on ima_pk (cost=0.00..126.23 rows=6392 width=0)\n Index Cond: ((sit_cod)::text = 'W8317'::text)\n\nPretty sure that using the index would lead to much better perf.\nAny idea of what can be done?\n\nJean-Marc Lessard\nAdministrateur de base de données / Database Administrator\nUltra Electronics Forensic Technology Inc.\nT +1 514 489 4247 x4164\nwww.ultra-forensictechnology.com<http://www.ultra-forensictechnology.com>\n\nJean-Marc Lessard\nAdministrateur de base de données / Database Administrator\nUltra Electronics Forensic Technology Inc.\nT +1 514 489 4247 x4164\nwww.ultra-forensictechnology.com<http://www.ultra-forensictechnology.com>\n\n\n\n\n\n\n\n\n\nHi,\n \nI have to provide a summary of how much spaces is used in the large objects table based on a group by condition.\nI would expect an index only scan on the large object table, but a full seq scan that last for hours is 
performed.\n \nBigSql distribution\nPostgreSQL 9.6.5 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, Built by MSYS2 project) 4.9.2, 64-bit\nWin Server 2012 R2, 8GB RAM\npg server mem settings:\neffective_cache_size | 6GB\nmaintenance_work_mem | 819MB\nrandom_page_cost     | 2\nshared_buffers       | 2GB\nwork_mem             | 32MB\n \nTestcase 1: Here is a simplified query, timing and the explain plan:\nSELECT ima.sit_cod, COUNT(*)*2048*4/3\n  FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nGROUP BY ima.sit_cod;\nTime: 343997.661 ms (about 6 min) ran on a small DB, took 4hrs on a ~1TB table\n \nHashAggregate  (cost=2452378.86..2452379.01 rows=15 width=14)\n  Group Key: ima.sit_cod\n  ->  Hash Join  (cost=1460.40..2418245.74 rows=6826625 width=6)\n        Hash Cond: (pg_largeobject.loid = ima.val)\n--------->  Seq Scan on pg_largeobject  (cost=0.00..2322919.25 rows=6826625 width=4)\n        ->  Hash  (cost=1114.62..1114.62 rows=27662 width=10)\n              ->  Seq Scan on images ima  (cost=0.00..1114.62 rows=27662 width=10)\n \n \nTestcase 2: A simple count(*) for a specific group (small group) perform an Index Only Scan and last few secs.\nSELECT COUNT(*)\n  FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nWHERE sit_cod='W8213';\ncount\n-------\n  8599\nTime: 12.090 ms\n \nAggregate  (cost=11930.30..11930.31 rows=1 width=8)\n  ->  Nested Loop  (cost=2.87..11918.58 rows=4689 width=0)\n        ->  Bitmap Heap Scan on images ima  (cost=2.43..37.81 rows=19 width=4)\n              Recheck Cond: ((sit_cod)::text = 'W8213'::text)\n              ->  Bitmap Index Scan on ima_pk  (cost=0.00..2.43 rows=19 width=0)\n                    Index Cond: ((sit_cod)::text = 'W8213'::text)\n--------->  Index Only Scan using pg_largeobject_loid_pn_index on pg_largeobject  (cost=0.43..621.22 rows=408 width=4)\n              Index Cond: (loid = ima.val)\n \n \nTestcase 3: However, larger group still perform full seq scan\nSELECT COUNT(*)\n  FROM images ima JOIN pg_largeobject ON (loid=ima.val)\nWHERE sit_cod='W8317';\n  count\n---------\n2209704\nTime: 345638.118 ms (about 6 min)\n \nAggregate  (cost=2369363.01..2369363.02 rows=1 width=8)\n  ->  Hash Join  (cost=1125.63..2365419.35 rows=1577463 width=0)\n        Hash Cond: (pg_largeobject.loid = ima.val)\n--------->  Seq Scan on pg_largeobject  (cost=0.00..2322919.25 rows=6826625 width=4)\n        ->  Hash  (cost=1045.73..1045.73 rows=6392 width=4)\n              ->  Bitmap Heap Scan on images ima  (cost=127.83..1045.73 rows=6392 width=4)\n                    Recheck Cond: ((sit_cod)::text = 'W8317'::text)\n                    ->  Bitmap Index Scan on ima_pk  (cost=0.00..126.23 rows=6392 width=0)\n                          Index Cond: ((sit_cod)::text = 'W8317'::text)\n \nPretty sure that using the index would lead to much better perf.\nAny idea of what can be done?\nJean-Marc Lessard\nAdministrateur de base de données / Database Administrator\nUltra Electronics Forensic Technology Inc.\nT +1 514 489 4247 x4164\nwww.ultra-forensictechnology.com\n\nJean-Marc Lessard\nAdministrateur de base de données / Database Administrator\nUltra Electronics Forensic Technology Inc.\nT +1 514 489 4247 x4164\nwww.ultra-forensictechnology.com", "msg_date": "Tue, 23 Jan 2018 14:08:17 +0000", "msg_from": "Jean-Marc Lessard <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Inefficient full seq scan on pg_largeobject instead of index\n scan" } ]
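One low-risk way to test the assumption above — that the index path on pg_largeobject would beat the sequential scan for the large groups — is to compare plans with sequential scans disabled for a single session. This is a diagnostic sketch only; it reuses the query from the message and is not a recommended permanent setting:

    BEGIN;
    -- steer the planner away from the seq scan on pg_largeobject for comparison
    SET LOCAL enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ima.sit_cod, COUNT(*)*2048*4/3
      FROM images ima JOIN pg_largeobject ON (loid = ima.val)
     GROUP BY ima.sit_cod;
    ROLLBACK;

If the forced index plan is not actually faster for the big groups, the seq scan is the planner working as intended, and the cost settings already quoted in the message (random_page_cost, effective_cache_size) are what to revisit.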
[ { "msg_contents": "Hello,\n\nThis email is structured in sections as follows:\n\n1 - Estimating the size of pg_xlog depending on postgresql.conf parameters\n2 - Cleaning up pg_xlog using a watchdog script\n3 - Mailing list survey of related bugs\n4 - Thoughts\n\nWe're using PostgreSQL 9.6.6 on a Ubuntu 16.04.3 LTS.\nDuring some database imports(using pg_restore), we're noticing fast\nand unbounded growth of pg_xlog up to the point where the\npartition(280G in size for us) that stores it fills up and PostgreSQL\nshuts down. The error seen in the logs:\n\n 2018-01-17 01:46:23.035 CST [41671] LOG: database system was shut down at 2018-01-16 15:49:26 CST\n 2018-01-17 01:46:23.038 CST [41671] FATAL: could not write to file \"pg_xlog/xlogtemp.41671\": No space left on device\n 2018-01-17 01:46:23.039 CST [41662] LOG: startup process (PID 41671) exited with exit code 1\n 2018-01-17 01:46:23.039 CST [41662] LOG: aborting startup due to startup process failure\n 2018-01-17 01:46:23.078 CST [41662] LOG: database system is shut down\n\nThe config settings I thought were relevant are these ones (but I'm\nalso attaching the entire postgresql.conf if there are other ones that\nI missed):\n\n wal_level=replica\n archive_command='exit 0;'\n min_wal_size=2GB\n max_wal_size=500MB\n checkpoint_completion_target = 0.7\n wal_keep_segments = 8\n\nSo currently the pg_xlog is growing a lot, and there doesn't seem to\nbe any way to stop it.\n\nThere are some formulas I came across that allow one to compute the\nmaximum number of WAL allowed in pg_xlog as a function of the\nPostgreSQL config parameters.\n\n1.1) Method from 2012 found in [2]\n\nThe formula for the upper bound for WAL files in pg_xlog is \n\n(2 + checkpoint_completion_target) * checkpoint_segments + 1\nwhich is \n( (2 + 0.7) * (2048/16 * 1/3 ) ) + 1 ~ 116 WAL files\n\nI used the 1/3 because of [6] the shift from checkpoint_segments to\nmax_wal_size in 9.5 , the relevant quote from the release notes being:\n\n If you previously adjusted checkpoint_segments, the following formula\n will give you an approximately equivalent setting:\n max_wal_size = (3 * checkpoint_segments) * 16MB\n\nAnother way of computing it, also according to [2] is the following\n2 * checkpoint_segments + wal_keep_segments + 1\nwhich is (2048/16) + 8 + 1 = 137 WAL files\n\nSo far we have two answers, in practice none of them check out, since\npg_xlog grows indefinitely.\n\n1.2) Method from the PostgreSQL internals book \n\nThe book [4] says the following:\n\n it could temporarily become up to \"3 * checkpoint_segments + 1\"\n\nOk, let's compute this too, it's 3 * (128/3) + 1 = 129 WAL files\n\nThis doesn't check out either.\n\n1.3) On the mailing list [3] , I found similar formulas that were seen\npreviously.\n\n1.4) The post at [5] says max_wal_size is as soft limit and also sets\nwal_keep_segments = 0 in order to enforce keeping as little WAL as\npossible around. Would this work?\n\nDoes wal_keep_segments = 0 turn off WAL recycling? Frankly, I would\nrather have WAL not be recycled/reused, and just deleted to keep\npg_xlog below expected size.\n\nAnother question is, does wal_level = replica affect the size of\npg_xlog in any way? We have an archive_command that just exits with\nexit code 0, so I don't see any reason for the pg_xlog files to not be\ncleaned up.\n\n2) Cleaning up pg_xlog using a watchdog script\n\nTo get the import done I wrote a script that's actually inspired from\na blog post where the pg_xlog out of disk space problem is\naddressed [1]. 
It periodically reads the last checkpoint's REDO WAL\nfile, and deletes all WAL in pg_xlog before that one. \n\nThe intended usage is for this script to run alongside the imports\nin order for pg_xlog to be cleaned up gradually and prevent the disk\nfrom filling up.\n\nUnlike the blog post and probably slightly wrong is that I used\nlexicographic ordering and not ordering by date. But I guess it worked\nbecause the checks were frequent enough that no WAL ever got\nrecycled. In retrospect I should've used the date ordering.\n\nDoes this script have the same effect as checkpoint_completion_target=0 ?\n\nAt the end of the day, this script seems to have allowed the import we needed\nto get done, but I acknowledge it was a stop-gap measure and not a long-term\nsolution, hence me posting on the mailing list to find a better solution.\n\n3) Mailing list survey of related bugs\n\nOn the mailing lists, in the past, there have been bugs around pg_xlog\ngrowing out of control:\n\nBUG 7902 [7] - Discusses a situation where WAL are produced faster than\ncheckpoints can be completed(written to disk), and therefore the WALs\nin pg_xlog cannot be recycled/deleted. The status of this bug report\nis unclear. I have a feeling it's still open. Is that the case?\n\nBUG 14340 [9] - A user(Sonu Gupta) is reporting pg_xlog unbounded growth\nand is asked to do some checks and then directed to the pgsql-general mailing list\nwhere he did not follow up.\nI quote the checks that were suggested\n\n Check that your archive_command is functioning correctly, and that you\n don't have any inactive replication slots (select * from\n pg_replication_slots where not active). Also check the server logs if\n both those things are okay.\n\nI have done these checks, and the archive_command we have is returning zero always.\nAnd we do not have inactive replication slots.\n\nBUG 10013 [12] - A user reports initdb to fill up the disk once he changes\nBLCKSZ and/or XLOG_BLCKSZ to non-standard values. The bug seems to be\nopen.\n\nBUG 11989 [8] - A user reports a pg_xlog unbounded growth that concludes\nin a disk outage. No further replies after the bug report.\n\nBUG 2104 [10] - A user reports a PostgreSQL not recycling pg_xlog\nfiles. It's suggested that this might have happened because\ncheckpoints were failing so WAL segments could not be recycled.\n\nBUG 7801 [11] - This is a bit offtopic for our problem(since we don't have\nreplication set up yet for the server with unbound pg_xlog growth),\nbut still an interesting read.\n\nA slave falls too far behind a master which leads to increase of\npg_xlog on the slave. The user says making\ncheckpoint_completion_target=0 or, manually running CHECKPOINT on the\nslave is immediately freeing up space on the slave's pg_xlog.\n\nI also learned here that a CHECKPOINT occurs approximately every\ncheckpoint_completion_target * checkpoint_timeout. Is this correct?\n\nShould I set checkpoint_completion_target=0? \n\n4) Thoughts\n\nIn the logs, there are lines like the following one:\n\n 28 2018-01-17 02:34:39.407 CST [59922] HINT: Consider increasing the configuration parameter \"max_wal_size\".\n 29 2018-01-17 02:35:02.513 CST [59922] LOG: checkpoints are occurring too frequently (23 seconds apart)\n\nThis looks very similar to BUG 7902 [7]. Is there any rule of thumb,\nguideline or technique that can be used when checkpoints cannot be\ncompleted fast enough ?\n\nI'm not sure if this is a misconfiguration problem or a bug. 
Which one\nwould be more appropriate?\n\nThanks,\nStefan\n\n[1] https://www.endpoint.com/blog/2014/09/25/pgxlog-disk-space-problem-on-postgres\n[2] http://chirupgadmin.blogspot.ro/2012/02/wal-growth-calculation-pgxlog-directory.html\n[3] https://www.postgresql.org/message-id/[email protected]\n[4] http://www.interdb.jp/blog/pgsql/pg95walsegments/\n[5] http://liufuyang.github.io/2017/09/26/postgres-cannot-auto-clean-up-folder-pg_xlog.html\n[6] https://www.postgresql.org/docs/9.5/static/release-9-5.html#AEN128150\n[7] https://www.postgresql.org/message-id/flat/E1U91WW-0006rq-82%40wrigleys.postgresql.org\n[8] https://www.postgresql.org/message-id/[email protected]\n[9] https://www.postgresql.org/message-id/flat/8a3a6780-18f6-d23a-2350-ac7ad335c9e7%402ndquadrant.fr\n[10] https://www.postgresql.org/message-id/flat/20051209134337.94B0BF0BAB%40svr2.postgresql.org\n[11] https://www.postgresql.org/message-id/flat/E1TsemH-0004dK-KN%40wrigleys.postgresql.org\n[12] https://www.postgresql.org/message-id/flat/20140414014442.15385.74268%40wrigleys.postgresql.org\n\nStefan Petrea\nSystem Engineer, Network Engineering\n\n\[email protected] \ntangoe.com\n\nThis e-mail message, including any attachments, is for the sole use of the intended recipient of this message, and may contain information that is confidential or legally protected. If you are not the intended recipient or have received this message in error, you are not authorized to copy, distribute, or otherwise use this message or its attachments. Please notify the sender immediately by return e-mail and permanently delete this message and any attachments. Tangoe makes no warranty that this e-mail or its attachments are error or virus free.", "msg_date": "Wed, 24 Jan 2018 11:48:17 +0000", "msg_from": "Stefan Petrea <[email protected]>", "msg_from_op": true, "msg_subject": "pg_xlog unbounded growth" }, { "msg_contents": "Stefan Petrea wrote:\n> During some database imports(using pg_restore), we're noticing fast\n> and unbounded growth of pg_xlog up to the point where the\n> partition(280G in size for us) that stores it fills up and PostgreSQL\n> shuts down.\n\nWhat do you see in pg_stat_archiver?\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Thu, 25 Jan 2018 17:57:44 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog unbounded growth" }, { "msg_contents": "\n\nOn 01/24/2018 12:48 PM, Stefan Petrea wrote:\n> Hello,\n> \n> This email is structured in sections as follows:\n> \n> 1 - Estimating the size of pg_xlog depending on postgresql.conf parameters\n> 2 - Cleaning up pg_xlog using a watchdog script\n> 3 - Mailing list survey of related bugs\n> 4 - Thoughts\n> \n> We're using PostgreSQL 9.6.6 on a Ubuntu 16.04.3 LTS.\n> During some database imports(using pg_restore), we're noticing fast\n> and unbounded growth of pg_xlog up to the point where the\n> partition(280G in size for us) that stores it fills up and PostgreSQL\n> shuts down. 
The error seen in the logs:\n> \n> 2018-01-17 01:46:23.035 CST [41671] LOG: database system was shut down at 2018-01-16 15:49:26 CST\n> 2018-01-17 01:46:23.038 CST [41671] FATAL: could not write to file \"pg_xlog/xlogtemp.41671\": No space left on device\n> 2018-01-17 01:46:23.039 CST [41662] LOG: startup process (PID 41671) exited with exit code 1\n> 2018-01-17 01:46:23.039 CST [41662] LOG: aborting startup due to startup process failure\n> 2018-01-17 01:46:23.078 CST [41662] LOG: database system is shut down\n> \n> The config settings I thought were relevant are these ones (but I'm\n> also attaching the entire postgresql.conf if there are other ones that\n> I missed):\n> \n> wal_level=replica\n> archive_command='exit 0;'\n> min_wal_size=2GB\n> max_wal_size=500MB\n> checkpoint_completion_target = 0.7\n> wal_keep_segments = 8\n> \n\nThose are values from the config file, right? What values are currently\nused by the processes? That is, when you do\n\n SELECT * FROM pg_settings\n\nwhat values does that show? Perhaps someone modified the config file and\nforgot to reload it / restart the server?\n\nBTW there's a mistake in the settings, it should be max_wal_size=2GB\n(it's just a typo in the message, it's set correctly in the config).\n\nAnother thought is that the log file you provided is full of warnings\nabout checkpoints happening less than 30 seconds apart. That means you\nneed to bump the max_wal_size value up - a lot. Perhaps to 16-32GB, to\nmake checkpoints less frequent. That is basic checkpoint tuning.\n\n> So currently the pg_xlog is growing a lot, and there doesn't seem to\n> be any way to stop it.\n> \n> There are some formulas I came across that allow one to compute the\n> maximum number of WAL allowed in pg_xlog as a function of the\n> PostgreSQL config parameters.\n> \n> 1.1) Method from 2012 found in [2]\n> \n> The formula for the upper bound for WAL files in pg_xlog is \n> \n> (2 + checkpoint_completion_target) * checkpoint_segments + 1\n> which is \n> ( (2 + 0.7) * (2048/16 * 1/3 ) ) + 1 ~ 116 WAL files\n> \n> I used the 1/3 because of [6] the shift from checkpoint_segments to\n> max_wal_size in 9.5 , the relevant quote from the release notes being:\n> \n> If you previously adjusted checkpoint_segments, the following formula\n> will give you an approximately equivalent setting:\n> max_wal_size = (3 * checkpoint_segments) * 16MB\n> \n> Another way of computing it, also according to [2] is the following\n> 2 * checkpoint_segments + wal_keep_segments + 1\n> which is (2048/16) + 8 + 1 = 137 WAL files\n>\n> So far we have two answers, in practice none of them check out, since\n> pg_xlog grows indefinitely.\n> \n> 1.2) Method from the PostgreSQL internals book \n> \n> The book [4] says the following:\n> \n> it could temporarily become up to \"3 * checkpoint_segments + 1\"\n> \n> Ok, let's compute this too, it's 3 * (128/3) + 1 = 129 WAL files\n> \n> This doesn't check out either.\nI don't quite understand the logic in the first formula - why you first\ndivide by 3 and then multiply by 2.7. But that does not really matter,\nthe amount of WAL segments kept in pg_xlog should be about 2GB, give or\ntake. 
If you got much more WAL than that, the segments are kept because\nof something preventing their removal.\n\nAnd if I understand it correctly, you have about ~200GB of them, right?\n\n> \n> 1.3) On the mailing list [3] , I found similar formulas that were seen\n> previously.\n> \n> 1.4) The post at [5] says max_wal_size is as soft limit and also sets\n> wal_keep_segments = 0 in order to enforce keeping as little WAL as\n> possible around. Would this work?\n> \n\nYes, max_wal_size is a soft limit, which means it can be temporarily\nexceeded. But 2GB vs. 200GB is helluwa difference, far beyond what would\nbe reasonable with max_wal_size=2GB.\n\nRegarding wal_keep_segments=0 - considering you currently have this set\nto 8, which is a whopping 128MB, I very much doubt setting it to 0 will\nmake any difference. The segments are kept around for some other reason.\n\nThere are cases where wal_keep_segments are set to high values (like\n5000 or so), to allow replicas to temporarily fall behind without having\nto setup a WAL archive. But this is not the case here. Honestly, I doubt\nsetting this to 8 makes practical sense ... That value seems so low it\ndoes not guarantee anything.\n\n> Does wal_keep_segments = 0 turn off WAL recycling? Frankly, I would\n> rather have WAL not be recycled/reused, and just deleted to keep\n> pg_xlog below expected size.\n> \n\nNo, it doesn't. Why would it disable that? It simply means the segments\nmay need to be keept around for longer before getting recycled.\n\n> Another question is, does wal_level = replica affect the size of\n> pg_xlog in any way? We have an archive_command that just exits with\n> exit code 0, so I don't see any reason for the pg_xlog files to not be\n> cleaned up.\n> \n\nNo, it shouldn't. The replica should stream the WAL (if it's close to\ncurrent positiion), or fetch the older WAL segments if it falls behind.\nBut if it falls behind too much, it may not be able to catch up as the\nWAL segments get removed.\n\nwal_keep_segments is a protection against that, but it won't keep\nsegments indefinitely and you set that just to 8. So this is not the\nroot cause.\n\nAnother option is that you created a replication slot. That is actually\nthe only I can think of that could cause this issue. Perhaps there is a\nreplication slot that is not used anymore and is preventing removal of\nWAL segments? Because that's the whole point of replication slots.\n\nWhat does\n\n SELECT * FROM pg_replication_slots;\n\nsay on the master node?\n\n> 2) Cleaning up pg_xlog using a watchdog script\n> \n> To get the import done I wrote a script that's actually inspired from\n> a blog post where the pg_xlog out of disk space problem is\n> addressed [1]. It periodically reads the last checkpoint's REDO WAL\n> file, and deletes all WAL in pg_xlog before that one. \n> \n> ...\n\nThis makes no sense. The database should be able to remove unnecessary\nWAL segments automatically. There's pretty small chance you'll get it\nright in an external script - not removing WAL segments that are still\nneeded, etc.\n\nFind the actual root cause, fix it. Don't invent custom scripts messing\nwith the critical part of the database.\n\n> \n> [email protected] \n> tangoe.com\n> \n> This e-mail message, including any attachments, is for the sole use \n> of the intended recipient of this message, and may contain\n> information that is confidential or legally protected. 
If you are not\n> the intended recipient or have received this message in error, you\n> are not authorized to copy, distribute, or otherwise use this message\n> or its attachments. Please notify the sender immediately by return\n> e-mail and permanently delete this message and any attachments.\n> Tangoe makes no warranty that this e-mail or its attachments are\n> error or virus free.\n>\n\nLOL\n\n\nkindd regards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 25 Jan 2018 22:36:00 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog unbounded growth" }, { "msg_contents": "Hi,\n\n\nAm 24.01.2018 um 12:48 schrieb Stefan Petrea:\n> We're using PostgreSQL 9.6.6 on a Ubuntu 16.04.3 LTS.\n> During some database imports(using pg_restore), we're noticing fast\n> and unbounded growth of pg_xlog up to the point where the\n> partition(280G in size for us) that stores it fills up and PostgreSQL\n> shuts down. The error seen in the logs:\n>\n> 2018-01-17 01:46:23.035 CST [41671] LOG: database system was shut down at 2018-01-16 15:49:26 CST\n> 2018-01-17 01:46:23.038 CST [41671] FATAL: could not write to file \"pg_xlog/xlogtemp.41671\": No space left on device\n> 2018-01-17 01:46:23.039 CST [41662] LOG: startup process (PID 41671) exited with exit code 1\n> 2018-01-17 01:46:23.039 CST [41662] LOG: aborting startup due to startup process failure\n> 2018-01-17 01:46:23.078 CST [41662] LOG: database system is shut down\n>\n> The config settings I thought were relevant are these ones (but I'm\n> also attaching the entire postgresql.conf if there are other ones that\n> I missed):\n>\n> wal_level=replica\n> archive_command='exit 0;'\n> min_wal_size=2GB\n> max_wal_size=500MB\n> checkpoint_completion_target = 0.7\n> wal_keep_segments = 8\n\njust to exclude some things out:\n\n* is that only happens during pg_restore, or also during normal work?\n* can you show us how pg_restore is invoked?\n* how did you create the dump (same pg-version, which format)?\n* can you change wal_level to minimal? (maybe that's not possible if it \nis in production und there are standbys)\n\nCan you change your archive_command to '/bin/true' ? I'm not sure if \nthat can be the reason for the your problem, but 'exit 0;' terminates \nthe process, but archive_command should return true or false, not \nterminate.\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Fri, 26 Jan 2018 09:18:38 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog unbounded growth" }, { "msg_contents": "Have you tried \narchive_command='/bin/true' \nas Andreas wrote?\n\n-----Original Message-----\nFrom: Stefan Petrea [mailto:[email protected]] \nSent: Wednesday, January 24, 2018 2:48 PM\nTo: [email protected]\nSubject: pg_xlog unbounded growth\n\nHello,\n\nThis email is structured in sections as follows:\n\n1 - Estimating the size of pg_xlog depending on postgresql.conf parameters\n2 - Cleaning up pg_xlog using a watchdog script\n3 - Mailing list survey of related bugs\n4 - Thoughts\n\nWe're using PostgreSQL 9.6.6 on a Ubuntu 16.04.3 LTS.\nDuring some database imports(using pg_restore), we're noticing fast and\nunbounded growth of pg_xlog up to the point where the partition(280G in size\nfor us) that stores it fills up and PostgreSQL shuts down. 
The error seen in\nthe logs:\n\n 2018-01-17 01:46:23.035 CST [41671] LOG: database system was shut down\nat 2018-01-16 15:49:26 CST\n 2018-01-17 01:46:23.038 CST [41671] FATAL: could not write to file\n\"pg_xlog/xlogtemp.41671\": No space left on device\n 2018-01-17 01:46:23.039 CST [41662] LOG: startup process (PID 41671)\nexited with exit code 1\n 2018-01-17 01:46:23.039 CST [41662] LOG: aborting startup due to\nstartup process failure\n 2018-01-17 01:46:23.078 CST [41662] LOG: database system is shut down\n\nThe config settings I thought were relevant are these ones (but I'm also\nattaching the entire postgresql.conf if there are other ones that I missed):\n\n wal_level=replica\n archive_command='exit 0;'\n min_wal_size=2GB\n max_wal_size=500MB\n checkpoint_completion_target = 0.7\n wal_keep_segments = 8\n\nSo currently the pg_xlog is growing a lot, and there doesn't seem to be any\nway to stop it.\n\nThere are some formulas I came across that allow one to compute the maximum\nnumber of WAL allowed in pg_xlog as a function of the PostgreSQL config\nparameters.\n\n1.1) Method from 2012 found in [2]\n\nThe formula for the upper bound for WAL files in pg_xlog is \n\n(2 + checkpoint_completion_target) * checkpoint_segments + 1 which is ( (2 +\n0.7) * (2048/16 * 1/3 ) ) + 1 ~ 116 WAL files\n\nI used the 1/3 because of [6] the shift from checkpoint_segments to\nmax_wal_size in 9.5 , the relevant quote from the release notes being:\n\n If you previously adjusted checkpoint_segments, the following formula\n will give you an approximately equivalent setting:\n max_wal_size = (3 * checkpoint_segments) * 16MB\n\nAnother way of computing it, also according to [2] is the following\n2 * checkpoint_segments + wal_keep_segments + 1 which is (2048/16) + 8 + 1 =\n137 WAL files\n\nSo far we have two answers, in practice none of them check out, since\npg_xlog grows indefinitely.\n\n1.2) Method from the PostgreSQL internals book \n\nThe book [4] says the following:\n\n it could temporarily become up to \"3 * checkpoint_segments + 1\"\n\nOk, let's compute this too, it's 3 * (128/3) + 1 = 129 WAL files\n\nThis doesn't check out either.\n\n1.3) On the mailing list [3] , I found similar formulas that were seen\npreviously.\n\n1.4) The post at [5] says max_wal_size is as soft limit and also sets\nwal_keep_segments = 0 in order to enforce keeping as little WAL as possible\naround. Would this work?\n\nDoes wal_keep_segments = 0 turn off WAL recycling? Frankly, I would rather\nhave WAL not be recycled/reused, and just deleted to keep pg_xlog below\nexpected size.\n\nAnother question is, does wal_level = replica affect the size of pg_xlog in\nany way? We have an archive_command that just exits with exit code 0, so I\ndon't see any reason for the pg_xlog files to not be cleaned up.\n\n2) Cleaning up pg_xlog using a watchdog script\n\nTo get the import done I wrote a script that's actually inspired from a blog\npost where the pg_xlog out of disk space problem is addressed [1]. It\nperiodically reads the last checkpoint's REDO WAL file, and deletes all WAL\nin pg_xlog before that one. \n\nThe intended usage is for this script to run alongside the imports in order\nfor pg_xlog to be cleaned up gradually and prevent the disk from filling up.\n\nUnlike the blog post and probably slightly wrong is that I used\nlexicographic ordering and not ordering by date. But I guess it worked\nbecause the checks were frequent enough that no WAL ever got recycled. 
In\nretrospect I should've used the date ordering.\n\nDoes this script have the same effect as checkpoint_completion_target=0 ?\n\nAt the end of the day, this script seems to have allowed the import we\nneeded to get done, but I acknowledge it was a stop-gap measure and not a\nlong-term solution, hence me posting on the mailing list to find a better\nsolution.\n\n3) Mailing list survey of related bugs\n\nOn the mailing lists, in the past, there have been bugs around pg_xlog\ngrowing out of control:\n\nBUG 7902 [7] - Discusses a situation where WAL are produced faster than\ncheckpoints can be completed(written to disk), and therefore the WALs in\npg_xlog cannot be recycled/deleted. The status of this bug report is\nunclear. I have a feeling it's still open. Is that the case?\n\nBUG 14340 [9] - A user(Sonu Gupta) is reporting pg_xlog unbounded growth and\nis asked to do some checks and then directed to the pgsql-general mailing\nlist where he did not follow up.\nI quote the checks that were suggested\n\n Check that your archive_command is functioning correctly, and that you\n don't have any inactive replication slots (select * from\n pg_replication_slots where not active). Also check the server logs if\n both those things are okay.\n\nI have done these checks, and the archive_command we have is returning zero\nalways.\nAnd we do not have inactive replication slots.\n\nBUG 10013 [12] - A user reports initdb to fill up the disk once he changes\nBLCKSZ and/or XLOG_BLCKSZ to non-standard values. The bug seems to be open.\n\nBUG 11989 [8] - A user reports a pg_xlog unbounded growth that concludes in\na disk outage. No further replies after the bug report.\n\nBUG 2104 [10] - A user reports a PostgreSQL not recycling pg_xlog files.\nIt's suggested that this might have happened because checkpoints were\nfailing so WAL segments could not be recycled.\n\nBUG 7801 [11] - This is a bit offtopic for our problem(since we don't have\nreplication set up yet for the server with unbound pg_xlog growth), but\nstill an interesting read.\n\nA slave falls too far behind a master which leads to increase of pg_xlog on\nthe slave. The user says making\ncheckpoint_completion_target=0 or, manually running CHECKPOINT on the slave\nis immediately freeing up space on the slave's pg_xlog.\n\nI also learned here that a CHECKPOINT occurs approximately every\ncheckpoint_completion_target * checkpoint_timeout. Is this correct?\n\nShould I set checkpoint_completion_target=0? \n\n4) Thoughts\n\nIn the logs, there are lines like the following one:\n\n 28 2018-01-17 02:34:39.407 CST [59922] HINT: Consider increasing the\nconfiguration parameter \"max_wal_size\".\n 29 2018-01-17 02:35:02.513 CST [59922] LOG: checkpoints are occurring\ntoo frequently (23 seconds apart)\n\nThis looks very similar to BUG 7902 [7]. Is there any rule of thumb,\nguideline or technique that can be used when checkpoints cannot be completed\nfast enough ?\n\nI'm not sure if this is a misconfiguration problem or a bug. 
Which one would\nbe more appropriate?\n\nThanks,\nStefan\n\n[1]\nhttps://www.endpoint.com/blog/2014/09/25/pgxlog-disk-space-problem-on-postgr\nes\n[2]\nhttp://chirupgadmin.blogspot.ro/2012/02/wal-growth-calculation-pgxlog-direct\nory.html\n[3]\nhttps://www.postgresql.org/message-id/AANLkTi=e=oR54OuxAw88=dtV4wt0e5edMiGae\[email protected]\n[4] http://www.interdb.jp/blog/pgsql/pg95walsegments/\n[5]\nhttp://liufuyang.github.io/2017/09/26/postgres-cannot-auto-clean-up-folder-p\ng_xlog.html\n[6] https://www.postgresql.org/docs/9.5/static/release-9-5.html#AEN128150\n[7]\nhttps://www.postgresql.org/message-id/flat/E1U91WW-0006rq-82%40wrigleys.post\ngresql.org\n[8]\nhttps://www.postgresql.org/message-id/[email protected]\ngresql.org\n[9]\nhttps://www.postgresql.org/message-id/flat/8a3a6780-18f6-d23a-2350-ac7ad335c\n9e7%402ndquadrant.fr\n[10]\nhttps://www.postgresql.org/message-id/flat/20051209134337.94B0BF0BAB%40svr2.\npostgresql.org\n[11]\nhttps://www.postgresql.org/message-id/flat/E1TsemH-0004dK-KN%40wrigleys.post\ngresql.org\n[12]\nhttps://www.postgresql.org/message-id/flat/20140414014442.15385.74268%40wrig\nleys.postgresql.org\n\nStefan Petrea\nSystem Engineer, Network Engineering\n\n\[email protected]\ntangoe.com\n\nThis e-mail message, including any attachments, is for the sole use of the\nintended recipient of this message, and may contain information that is\nconfidential or legally protected. If you are not the intended recipient or\nhave received this message in error, you are not authorized to copy,\ndistribute, or otherwise use this message or its attachments. Please notify\nthe sender immediately by return e-mail and permanently delete this message\nand any attachments. Tangoe makes no warranty that this e-mail or its\nattachments are error or virus free.\n\n\n\n", "msg_date": "Wed, 7 Feb 2018 20:11:57 +0300", "msg_from": "\"Alex Ignatov\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_xlog unbounded growth" } ]
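The checks suggested in this thread (archiver state, replication slots, and the settings the running server is actually using) can be run together as one diagnostic pass. A minimal sketch, assuming PostgreSQL 9.6 catalog columns; it only reports status and changes nothing:

    -- Is WAL archiving failing and therefore holding segments in pg_xlog?
    SELECT archived_count, last_archived_wal,
           failed_count, last_failed_wal, last_failed_time
    FROM pg_stat_archiver;

    -- Is any replication slot (active or not) pinning old WAL?
    SELECT slot_name, slot_type, active, restart_lsn
    FROM pg_replication_slots;

    -- Which checkpoint/WAL settings is the server actually using?
    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('max_wal_size', 'min_wal_size', 'wal_keep_segments',
                   'checkpoint_timeout', 'archive_mode', 'archive_command');

If the archiver shows failures or a slot's restart_lsn lags far behind, that is what keeps old segments from being recycled; otherwise the checkpoint spacing (max_wal_size, checkpoint_timeout) is the first thing to revisit, as noted in the replies above.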
[ { "msg_contents": "Since upgrading to PG 10 a few weeks ago I've been experimenting with hash\nindexes. One thing I've noticed is that they seem to take a _lot_ longer\nto create than btree indexes, particularly on large tables.\n\nI've got a moderately sized table of about 38M rows and the create index\nusing hash for an integer column (with about 300 unique values) has been\nrunning for 12 hours now and still hasn't finished. I have not\nsuccessfully installed a hash index on a larger table (of which I have\nmany) yet because the create index never seems to finish.\n\nThe create index thread will consume an entire CPU while doing this. It\ndoes not seem to be I/O bound. It just crunches away burning cpu cycles\nwith no apparent end.\n\nIs expected?\n\nSince upgrading to PG 10 a few weeks ago I've been experimenting with hash indexes.  One thing I've noticed is that they seem to take a _lot_ longer to create than btree indexes, particularly on large tables.I've got a moderately sized table of about 38M rows and the create index using hash for an integer column (with about 300 unique values) has been running for 12 hours now and still hasn't finished.  I have not successfully installed a hash index on a larger table (of which I have many) yet because the create index never seems to finish.The create index thread will consume an entire CPU while doing this.  It does not seem to be I/O bound.  It just crunches away burning cpu cycles with no apparent end.Is expected?", "msg_date": "Fri, 26 Jan 2018 05:58:18 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "PG 10 hash index create times" } ]
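For anyone reproducing the comparison described above, a minimal sketch; the table and column names are placeholders, and the maintenance_work_mem bump is a general index-build lever rather than something this thread confirms for hash builds in particular:

    \timing on
    SET maintenance_work_mem = '1GB';   -- illustrative value only
    CREATE INDEX t_col_btree_idx ON t USING btree (col);
    CREATE INDEX t_col_hash_idx  ON t USING hash  (col);
    RESET maintenance_work_mem;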
[ { "msg_contents": "Hi,\nI configured a master table that is called \"year_2018\" :\ncreate table year_2018(a int,b int c date);\n\nThe master table has a unique constraint on those 3 columns so that I wont\nhave any duplicated rows. Moreover, I configured a before insert trigger on\nthat table that creates a child table for each day in the year. The child\nshould include all the data related to that specific day.\n\nNow, every day I got a csv file that I need to load to the year table. I\nmust load the data as fast as possible but I have 2 problems :\n1)I must load the data as a bulk - via the copy command. However, the copy\ncommand fails because sometimes I have duplicated rows.\n2)I tried to use the pgloader extension but it fails because I have a\ntrigger before each insert.\n\n-I cant load all the data into a temp table and then run insert into\nyear_2018 select * from temp because it takes too much time.\n\nAny idea ?\n\nHi,I configured a master table that is called \"year_2018\" : create table year_2018(a int,b int c date); The master table has a unique constraint on those 3 columns so that I wont have any duplicated rows. Moreover, I configured a before insert trigger on that table that creates a child table for each day in the year. The child should include all the data related to that specific day.Now, every day I got a csv file that I need to load to the year table. I must load the data as fast as possible but I have 2 problems :1)I must load the data as a bulk - via the copy command. However, the copy command fails because sometimes I have duplicated rows.2)I tried to use the pgloader extension but it fails because I have a trigger before each insert.-I cant load all the data into a temp table and then run insert into year_2018 select * from temp because it takes too much time.Any idea ?", "msg_date": "Sun, 28 Jan 2018 11:11:33 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "copy csv into partitioned table with unique index" }, { "msg_contents": "Did you try to transform your temp table into a table partition using the\nALTER TABLE ATTACH syntax\nhttps://www.postgresql.org/docs/10/static/ddl-partitioning.html\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 28 Jan 2018 03:55:38 -0700 (MST)", "msg_from": "legrand legrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy csv into partitioned table with unique index" }, { "msg_contents": "Hi\n\n\nWe had the same problems with performance when testing with more than 100 billion weather observations. We now have a solution where we can push between 300 and 400 million weather observations pr. Second into the database.\n\n\nWe download date from NetCDF files. The date are joined based on time and geolocation, so data from many different NetCDF should end up in the same row and table.\n\n\nDoing this based on database join and table update was taking a long time as you also have noticed.\n\n\nTo get high performance we ended up this solution\n\n- Uses os commands like awk,sed,join,cut,.. 
to prepare CSV file for the database.\n\n- Use multithreads.\n\n- Insert data directly into child tables.\n\n- No triggers, constraints and indexes on working table.\n\n- Don't update rows.\n\n- Unlogged tables.\n\n\nWe first download NetCDF and make CSV files that fits in perfect for the copy a command and with complete files for each child tables it's created for, this is a time consuming operation.\n\n\nSo before the copy in into database we just do a truncate on the selected table. We are then able to insert between 300 and 400 mill. weather observations pr. Second. We have 11 observations pr row so it means around 35 mill rows pr second. We have one child table for each year and month.\n\n\nThe database we working on have 16 dual core CPU's and SSD discs. When testing I was running 11 threads in parallel.\n\n\nIndexes and constraints are added later based on needs.\n\n\nHow can you take on chance om using something like unlogged tables?\n\n\nLinux system are quite stable and if we keep the a copy of the CVS files it does not take long time to insert data after crash.\n\n\nYou can also change your table to logged later if you need to secure your data in the database.\n\n\nLars\n\n\n\n________________________________\nFra: Mariel Cherkassky <[email protected]>\nSendt: 28. januar 2018 10:11\nTil: PostgreSQL mailing lists\nEmne: copy csv into partitioned table with unique index\n\nHi,\nI configured a master table that is called \"year_2018\" :\ncreate table year_2018(a int,b int c date);\n\nThe master table has a unique constraint on those 3 columns so that I wont have any duplicated rows. Moreover, I configured a before insert trigger on that table that creates a child table for each day in the year. The child should include all the data related to that specific day.\n\nNow, every day I got a csv file that I need to load to the year table. I must load the data as fast as possible but I have 2 problems :\n1)I must load the data as a bulk - via the copy command. However, the copy command fails because sometimes I have duplicated rows.\n2)I tried to use the pgloader extension but it fails because I have a trigger before each insert.\n\n-I cant load all the data into a temp table and then run insert into year_2018 select * from temp because it takes too much time.\n\nAny idea ?\n\n\n\n\n\n\n\n\n\n\nHi\n\n\nWe had the same problems with performance when testing with more than 100 billion weather observations. We now have a solution where we can push between 300 and 400 million weather observations pr. Second into\n the database. \n\n\nWe download date from NetCDF files. The date are joined based on time and geolocation, so data from many different NetCDF should end up in the same row and table.\n\n\nDoing this based on database join and table update was taking a long time as you also have noticed.\n\n\nTo get high performance we ended up this solution\n\n- Uses os commands like awk,sed,join,cut,.. to prepare CSV file for the database.\n- Use multithreads.\n- Insert data directly into child tables.\n- No triggers, constraints and indexes on working table.\n- Don’t update rows.\n- Unlogged tables.\n\n\nWe first download NetCDF and make CSV files that fits in perfect for the copy a command and with complete files for each child tables it’s created for, this is a time consuming operation.\n\n\nSo before the copy in into database we just do a truncate on the selected table. We are then able to insert between 300 and 400 mill. weather observations pr. Second. 
We have 11 observations pr row so it means\n around 35 mill rows pr second. We have one child table for each year and month.\n\n\nThe database we working on have 16 dual core CPU’s and SSD discs. When testing I was running 11 threads in parallel.\n\n\n\nIndexes and constraints are added later based on needs.\n\n\nHow can you take on chance om using something like unlogged tables?\n\n\nLinux system are quite stable and if we keep the a copy of the CVS files it does not take long time to insert data after crash.\n\n\n\nYou can also change your table to logged later if you need to secure your data in the database.\n\n\nLars\n\n\n\n\n\n\n\nFra: Mariel Cherkassky <[email protected]>\nSendt: 28. januar 2018 10:11\nTil: PostgreSQL mailing lists\nEmne: copy csv into partitioned table with unique index\n \n\n\n\nHi,\nI configured a master table that is called \"year_2018\" : \ncreate table year_2018(a int,b int c date); \n\n\nThe master table has a unique constraint on those 3 columns so that I wont have any duplicated rows. Moreover, I configured a before insert trigger on that table that creates a child table for each day in the year. The child should include all\n the data related to that specific day.\n\n\nNow, every day I got a csv file that I need to load to the year table. I must load the data as fast as possible but I have 2 problems :\n\n1)I must load the data as a bulk - via the copy command. However, the copy command fails because sometimes I have duplicated rows.\n2)I tried to use the pgloader extension but it fails because I have a trigger before each insert.\n\n\n-I cant load all the data into a temp table and then run insert into year_2018 select * from temp because it takes too much time.\n\n\nAny idea ?", "msg_date": "Sun, 28 Jan 2018 11:57:53 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": false, "msg_subject": "SV: copy csv into partitioned table with unique index" }, { "msg_contents": "Hei\n\n\nSorry it's was a zero to much, it should 30-40 million weather observations pr second.\n\n\nLars\n\n\n________________________________\nFra: Lars Aksel Opsahl <[email protected]>\nSendt: 28. januar 2018 12:57\nTil: Mariel Cherkassky; PostgreSQL mailing lists\nEmne: SV: copy csv into partitioned table with unique index\n\n\nHi\n\n\nWe had the same problems with performance when testing with more than 100 billion weather observations. We now have a solution where we can push between 300 and 400 million weather observations pr. Second into the database.\n\n\nWe download date from NetCDF files. The date are joined based on time and geolocation, so data from many different NetCDF should end up in the same row and table.\n\n\nDoing this based on database join and table update was taking a long time as you also have noticed.\n\n\nTo get high performance we ended up this solution\n\n- Uses os commands like awk,sed,join,cut,.. to prepare CSV file for the database.\n\n- Use multithreads.\n\n- Insert data directly into child tables.\n\n- No triggers, constraints and indexes on working table.\n\n- Don't update rows.\n\n- Unlogged tables.\n\n\nWe first download NetCDF and make CSV files that fits in perfect for the copy a command and with complete files for each child tables it's created for, this is a time consuming operation.\n\n\nSo before the copy in into database we just do a truncate on the selected table. We are then able to insert between 300 and 400 mill. weather observations pr. Second. We have 11 observations pr row so it means around 35 mill rows pr second. 
We have one child table for each year and month.\n\n\nThe database we working on have 16 dual core CPU's and SSD discs. When testing I was running 11 threads in parallel.\n\n\nIndexes and constraints are added later based on needs.\n\n\nHow can you take on chance om using something like unlogged tables?\n\n\nLinux system are quite stable and if we keep the a copy of the CVS files it does not take long time to insert data after crash.\n\n\nYou can also change your table to logged later if you need to secure your data in the database.\n\n\nLars\n\n\n\n________________________________\nFra: Mariel Cherkassky <[email protected]>\nSendt: 28. januar 2018 10:11\nTil: PostgreSQL mailing lists\nEmne: copy csv into partitioned table with unique index\n\nHi,\nI configured a master table that is called \"year_2018\" :\ncreate table year_2018(a int,b int c date);\n\nThe master table has a unique constraint on those 3 columns so that I wont have any duplicated rows. Moreover, I configured a before insert trigger on that table that creates a child table for each day in the year. The child should include all the data related to that specific day.\n\nNow, every day I got a csv file that I need to load to the year table. I must load the data as fast as possible but I have 2 problems :\n1)I must load the data as a bulk - via the copy command. However, the copy command fails because sometimes I have duplicated rows.\n2)I tried to use the pgloader extension but it fails because I have a trigger before each insert.\n\n-I cant load all the data into a temp table and then run insert into year_2018 select * from temp because it takes too much time.\n\nAny idea ?\n\n\n\n\n\n\n\n\n\n\nHei\n\n\nSorry it's was a zero to much, it should 30-40 million weather observations pr second.\n\n\nLars\n\n\n\n\nFra: Lars Aksel Opsahl <[email protected]>\nSendt: 28. januar 2018 12:57\nTil: Mariel Cherkassky; PostgreSQL mailing lists\nEmne: SV: copy csv into partitioned table with unique index\n \n\n\n\nHi\n\n\nWe had the same problems with performance when testing with more than 100 billion weather observations. We now have a solution where we can push between 300 and 400 million weather observations pr. Second into\n the database. \n\n\nWe download date from NetCDF files. The date are joined based on time and geolocation, so data from many different NetCDF should end up in the same row and table.\n\n\nDoing this based on database join and table update was taking a long time as you also have noticed.\n\n\nTo get high performance we ended up this solution\n\n- Uses os commands like awk,sed,join,cut,.. to prepare CSV file for the database.\n- Use multithreads.\n- Insert data directly into child tables.\n- No triggers, constraints and indexes on working table.\n- Don’t update rows.\n- Unlogged tables.\n\n\nWe first download NetCDF and make CSV files that fits in perfect for the copy a command and with complete files for each child tables it’s created for, this is a time consuming operation.\n\n\nSo before the copy in into database we just do a truncate on the selected table. We are then able to insert between 300 and 400 mill. weather observations pr. Second. We have 11 observations pr row so it means\n around 35 mill rows pr second. We have one child table for each year and month.\n\n\nThe database we working on have 16 dual core CPU’s and SSD discs. 
When testing I was running 11 threads in parallel.\n\n\n\nIndexes and constraints are added later based on needs.\n\n\nHow can you take on chance om using something like unlogged tables?\n\n\nLinux system are quite stable and if we keep the a copy of the CVS files it does not take long time to insert data after crash.\n\n\n\nYou can also change your table to logged later if you need to secure your data in the database.\n\n\nLars\n\n\n\n\n\n\n\nFra: Mariel Cherkassky <[email protected]>\nSendt: 28. januar 2018 10:11\nTil: PostgreSQL mailing lists\nEmne: copy csv into partitioned table with unique index\n \n\n\n\nHi,\nI configured a master table that is called \"year_2018\" : \ncreate table year_2018(a int,b int c date); \n\n\nThe master table has a unique constraint on those 3 columns so that I wont have any duplicated rows. Moreover, I configured a before insert trigger on that table that creates a child table for each day in the year. The child should include all\n the data related to that specific day.\n\n\nNow, every day I got a csv file that I need to load to the year table. I must load the data as fast as possible but I have 2 problems :\n\n1)I must load the data as a bulk - via the copy command. However, the copy command fails because sometimes I have duplicated rows.\n2)I tried to use the pgloader extension but it fails because I have a trigger before each insert.\n\n\n-I cant load all the data into a temp table and then run insert into year_2018 select * from temp because it takes too much time.\n\n\nAny idea ?", "msg_date": "Sun, 28 Jan 2018 12:04:35 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": false, "msg_subject": "SV: copy csv into partitioned table with unique index" } ]
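A sketch of the staging-table variant discussed above, loading straight into the day's child table so the parent's routing trigger (the slow path the original poster describes) is never hit. All object names are placeholders; ON CONFLICT is an addition not mentioned in the thread and requires a unique index on (a, b, c) on that child, since PostgreSQL 10 cannot enforce a single unique constraint across inheritance children:

    CREATE UNLOGGED TABLE staging_day (LIKE year_2018 INCLUDING DEFAULTS);

    COPY staging_day FROM '/path/to/day.csv' WITH (FORMAT csv);

    INSERT INTO year_2018_d2018_01_28 (a, b, c)
    SELECT DISTINCT a, b, c
    FROM staging_day
    ON CONFLICT (a, b, c) DO NOTHING;   -- skips rows already loaded earlier

    DROP TABLE staging_day;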
[ { "msg_contents": "Hello,\n\nI would like to report a strange behaviour on postgresql 9.4.4.\n\nThe following query run in just 9 ms:\n\nSELECT SUM(\"distrib_report_items\".\"qty\") AS sum_id\nFROM\n \"distrib_report_items\" INNER JOIN\n \"retailers\" ON \"retailers\".\"id\" = \"distrib_report_items\".\"retailer_id\"\nINNER JOIN\n \"distrib_reports\" ON \"distrib_reports\".\"id\" =\n\"distrib_report_items\".\"distrib_report_id\" INNER JOIN\n \"distrib_report_groups\" ON \"distrib_report_groups\".\"id\" =\n\"distrib_reports\".\"distrib_report_group_id\"\nWHERE\n \"retailers\".\"sub_district_id\" = 'f4bff929-f911-4ab8-b1b2-aaa50e0ccb39' AND\n \"distrib_report_items\".\"product_id\" =\n'05167ad0-d2fa-4a4a-bd13-be8f89ce34a2' AND\n \"distrib_reports\".\"month\" = 1 AND\n \"distrib_reports\".\"year\" = 2017 AND\n \"distrib_reports\".\"state\" = 'SUBMITTED' AND\n \"distrib_report_groups\".\"distrib_report_group_type_id\" =\n'559a5fdc-418d-4494-aebf-80ecf8743d35'\n\nBut changing just one parameter (the year) from 2017 to 2018, the \"exactly\nsame query\", become incredebly slow, at 8 seconds. This is the full query\nafter changing the year:\n\nSELECT SUM(\"distrib_report_items\".\"qty\") AS sum_id\nFROM\n \"distrib_report_items\" INNER JOIN\n \"retailers\" ON \"retailers\".\"id\" = \"distrib_report_items\".\"retailer_id\"\nINNER JOIN\n \"distrib_reports\" ON \"distrib_reports\".\"id\" =\n\"distrib_report_items\".\"distrib_report_id\" INNER JOIN\n \"distrib_report_groups\" ON \"distrib_report_groups\".\"id\" =\n\"distrib_reports\".\"distrib_report_group_id\"\nWHERE\n \"retailers\".\"sub_district_id\" = 'f4bff929-f911-4ab8-b1b2-aaa50e0ccb39' AND\n \"distrib_report_items\".\"product_id\" =\n'05167ad0-d2fa-4a4a-bd13-be8f89ce34a2' AND\n \"distrib_reports\".\"month\" = 1 AND\n \"distrib_reports\".\"year\" = 2018 AND\n \"distrib_reports\".\"state\" = 'SUBMITTED' AND\n \"distrib_report_groups\".\"distrib_report_group_type_id\" =\n'559a5fdc-418d-4494-aebf-80ecf8743d35'\n\nThe explain analyze of the 2 queries are resulting on really different\nquery plan, here are the links to depesz:\n2017 --> explain result on postgres-9: https://explain.depesz.com/s/qJF1\n2018 --> explain result on postgres-9: https://explain.depesz.com/s/pT0y\n\nThe table growth itself are normal. distrib_report_items table are growing\nfrom 1.9++ millions row on december 2017 to 2.3++ million rows on january\n2018. Not a really significant growth.\nThe distrib_reports table (on which the year is filtered) has even less\nrows on 2018 (10k rows) compared to 400.000++ rows on 2017, which is very\nobvious.\n\nThe question is, why the query planner choose such very different path just\nby changing one parameter?\n\nThe table structures are below:\nhttps://pastebin.com/T6AmtQ3z\n\nThis behaviour is *not-reproducable* on postgres-10. 
On postgres-10, the\nquery plan are consistent, and both have very acceptable time:\n2017 --> explain result on postgres-10: https://explain.depesz.com/s/N9r5\n2018 --> --> explain result on postgres-10:\nhttps://explain.depesz.com/s/Tf5K\n\nIs this a bug on postgres-9.4.4 ?\n\nWe are considering upgrade to postgres-10 but since this is a very critical\nsystem, it takes a lot of test and approval :)\n\nThank you very much.\n\nHello,I would like to report a strange behaviour on postgresql 9.4.4.The following query run in just 9 ms:SELECT SUM(\"distrib_report_items\".\"qty\") AS sum_idFROM \"distrib_report_items\" INNER JOIN \"retailers\" ON \"retailers\".\"id\" = \"distrib_report_items\".\"retailer_id\" INNER JOIN \"distrib_reports\" ON \"distrib_reports\".\"id\" = \"distrib_report_items\".\"distrib_report_id\" INNER JOIN \"distrib_report_groups\" ON \"distrib_report_groups\".\"id\" = \"distrib_reports\".\"distrib_report_group_id\"WHERE \"retailers\".\"sub_district_id\" = 'f4bff929-f911-4ab8-b1b2-aaa50e0ccb39' AND \"distrib_report_items\".\"product_id\" = '05167ad0-d2fa-4a4a-bd13-be8f89ce34a2' AND \"distrib_reports\".\"month\" = 1 AND \"distrib_reports\".\"year\" = 2017 AND \"distrib_reports\".\"state\" = 'SUBMITTED' AND \"distrib_report_groups\".\"distrib_report_group_type_id\" = '559a5fdc-418d-4494-aebf-80ecf8743d35'But changing just one parameter (the year) from 2017 to 2018, the \"exactly same query\", become incredebly slow, at 8 seconds. This is the full query after changing the year:SELECT SUM(\"distrib_report_items\".\"qty\") AS sum_idFROM \"distrib_report_items\" INNER JOIN \"retailers\" ON \"retailers\".\"id\" = \"distrib_report_items\".\"retailer_id\" INNER JOIN \"distrib_reports\" ON \"distrib_reports\".\"id\" = \"distrib_report_items\".\"distrib_report_id\" INNER JOIN \"distrib_report_groups\" ON \"distrib_report_groups\".\"id\" = \"distrib_reports\".\"distrib_report_group_id\"WHERE \"retailers\".\"sub_district_id\" = 'f4bff929-f911-4ab8-b1b2-aaa50e0ccb39' AND \"distrib_report_items\".\"product_id\" = '05167ad0-d2fa-4a4a-bd13-be8f89ce34a2' AND \"distrib_reports\".\"month\" = 1 AND \"distrib_reports\".\"year\" = 2018 AND \"distrib_reports\".\"state\" = 'SUBMITTED' AND \"distrib_report_groups\".\"distrib_report_group_type_id\" = '559a5fdc-418d-4494-aebf-80ecf8743d35'The explain analyze of the 2 queries are resulting on really different query plan, here are the links to depesz:2017 --> explain result on postgres-9: https://explain.depesz.com/s/qJF12018 --> explain result on postgres-9: https://explain.depesz.com/s/pT0yThe table growth itself are normal. distrib_report_items table are growing from 1.9++ millions row on december 2017 to 2.3++ million rows on january 2018. Not a really significant growth. The distrib_reports table (on which the year is filtered) has even less rows on 2018 (10k rows) compared to 400.000++ rows on 2017, which is very obvious.The question is, why the query planner choose such very different path just by changing one parameter?The table structures are below:https://pastebin.com/T6AmtQ3zThis behaviour is *not-reproducable* on postgres-10. On postgres-10, the query plan are consistent, and both have very acceptable time:2017 --> explain result on postgres-10: https://explain.depesz.com/s/N9r52018 --> --> explain result on postgres-10: https://explain.depesz.com/s/Tf5KIs this a bug on postgres-9.4.4 ? 
We are considering upgrade to postgres-10 but since this is a very critical system, it takes a lot of test and approval :)Thank you very much.", "msg_date": "Mon, 29 Jan 2018 00:32:59 +0700", "msg_from": "Nur Agus <[email protected]>", "msg_from_op": true, "msg_subject": "Query Slow After 2018" }, { "msg_contents": "On Mon, Jan 29, 2018 at 12:32:59AM +0700, Nur Agus wrote:\n> The following query run in just 9 ms:\n\n> \"distrib_reports\".\"month\" = 1 AND\n> \"distrib_reports\".\"year\" = 2017 AND\n> \"distrib_reports\".\"state\" = 'SUBMITTED' AND\n> \"distrib_report_groups\".\"distrib_report_group_type_id\" =\n> '559a5fdc-418d-4494-aebf-80ecf8743d35'\n\n> The explain analyze of the 2 queries are resulting on really different\n> query plan, here are the links to depesz:\n> 2017 --> explain result on postgres-9: https://explain.depesz.com/s/qJF1\n> 2018 --> explain result on postgres-9: https://explain.depesz.com/s/pT0y\n\n> The question is, why the query planner choose such very different path just\n> by changing one parameter?\n\nLooks like this badly underestimates its rowcount:\n\nIndex Scan using index_distrib_reports_on_year on distrib_reports (cost=0.42..40.62 rows=8 width=32) (actual time=0.034..50.452 rows=17,055 loops=1)\n Index Cond: (year = 2018)\n Filter: ((month = 1) AND ((state)::text = 'SUBMITTED'::text))\n Rows Removed by Filter: 1049\n\nMaybe because \"if year==2018\" then, month=1 does essentialy nothing ..\n..but postgres thinks it'll filters out some 90% of the rows.\n\nAnd possibly the same for SUBMITTED (?)\nYou should probably use timestamp column rather than integer year+month.\n\nOn PG10, you could probably work around it using \"CREATE STATISTICS\".\n\n> This behaviour is *not-reproducable* on postgres-10. On postgres-10, the\n> query plan are consistent, and both have very acceptable time:\n> 2017 --> explain result on postgres-10: https://explain.depesz.com/s/N9r5\n> 2018 --> --> explain result on postgres-10:\n> https://explain.depesz.com/s/Tf5K\n..I think default max_parallel_workers_per_gather=3 by chance causes the plan\nto be the same.\n\nI think there's still a underestimate rowcount with PG10 (without CREATE\nSTATISTICS), but it's masked by \"rows=1 actual rows=0\" roundoff error with\nhigh loop count.\n\nJustin\n\n", "msg_date": "Sun, 28 Jan 2018 11:51:10 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Slow After 2018" }, { "msg_contents": "On Sunday, January 28, 2018, Nur Agus <[email protected]> wrote:\n\n>\n> Is this a bug on postgres-9.4.4 ?\n>\n> We are considering upgrade to postgres-10 but since this is a very\n> critical system, it takes a lot of test and approval :)\n>\n>\n\nUpgrade to 9.4.15. Asking if 9.4.4 might have a bug is a question most\npeople here aren't going to be inclined to answer given the 11 bug fixes\nthat version has received since.\n\nDavid J.\n\nOn Sunday, January 28, 2018, Nur Agus <[email protected]> wrote:Is this a bug on postgres-9.4.4 ? We are considering upgrade to postgres-10 but since this is a very critical system, it takes a lot of test and approval :) Upgrade to 9.4.15.  Asking if 9.4.4 might have a bug is a question most people here aren't going to be inclined to answer given the 11 bug fixes that version has received since.David J.", "msg_date": "Sun, 28 Jan 2018 14:40:44 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Slow After 2018" } ]
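A sketch of the CREATE STATISTICS workaround mentioned above for PostgreSQL 10, assuming the misestimate really does come from the correlation between year, month and state; the statistics name is a placeholder, and functional-dependency statistics only help equality clauses such as the ones in this query:

    CREATE STATISTICS distrib_reports_ym_state (dependencies)
        ON year, month, state
        FROM distrib_reports;

    ANALYZE distrib_reports;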
[ { "msg_contents": "Hi,\nI'm currently migrating an oracle schema to postgresql. In the oracle`s\nschema there is a table partition that has partitions by range(date - for\nevery day) and each partition has a sub partition by list(some values..).\nMoreover, the data is loaded from a csv in a bulk. One important thing is\nthat some data might be imported twice therefore there must but a unique\nindex on the table.\n\nOn PostgreSQL 10.1 I created the main table partitioned by range(date) and\nI created all the sub partitions. I have 2 problems :\n\n1)In the oracle main table there are global indexes for selects that\ninvolve columns that arent part of the range or list partitions. According\nto the documentation I need to create the indexes on each leaf. I have\npartition for every day in the year so I'll have about 6(num of global\nindexes in oracle)*365(days of year)*7(number of sub partitions) = 15330\nindexes created every year. I guess that the performance that I will have\nwhen I select columns that arent part of the partitions order will be\npretty bad. Any idea ?\n\n2)Regarding the uniqueness, the only solution is to create a unique index\nfor every subpartition ?\n\n3)Any suggestions how to improve queries that involve columns that arent\npart of the paritions order ?\n\nThanks , Mariel.\n\nHi,I'm currently migrating an oracle schema to postgresql. In the oracle`s schema there is a table partition that has partitions by range(date - for every day) and each partition has a sub partition by list(some values..). Moreover, the data is loaded from a csv in a bulk. One important thing is that some data might be imported twice therefore there must but a unique index on the table.On PostgreSQL 10.1 I created the main table partitioned by range(date) and I created all the sub partitions. I have 2 problems : 1)In the oracle main table there are global indexes for selects that involve columns that arent part of the range or list partitions. According to the documentation I need to create the indexes on each leaf. I have partition for every day in the year so I'll have about 6(num of global indexes in oracle)*365(days of year)*7(number of sub partitions) = 15330 indexes created every year. I guess that the performance that I will have when I select columns that arent part of the partitions order will be pretty bad. Any idea ?2)Regarding the uniqueness, the only solution is to create a unique index for every subpartition ?3)Any suggestions how to improve queries that involve columns that arent part of the paritions order ? Thanks , Mariel.", "msg_date": "Mon, 29 Jan 2018 12:30:13 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 10.1 partitions and indexes" } ]
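Since PostgreSQL 10 has no index definitions on a partitioned parent, the per-leaf indexes (including the per-subpartition unique indexes asked about above) are usually generated rather than written by hand. A minimal sketch of one way to do that; the parent table name, column name and index-name pattern are all placeholders:

    DO $$
    DECLARE
        r record;
    BEGIN
        FOR r IN
            WITH RECURSIVE tree AS (
                SELECT inhrelid AS relid
                FROM pg_inherits
                WHERE inhparent = 'measurements'::regclass
                UNION ALL
                SELECT i.inhrelid
                FROM pg_inherits i
                JOIN tree t ON i.inhparent = t.relid
            )
            SELECT c.oid::regclass AS tbl, c.relname
            FROM tree
            JOIN pg_class c ON c.oid = tree.relid
            WHERE c.relkind = 'r'               -- leaf partitions only
        LOOP
            EXECUTE format('CREATE INDEX IF NOT EXISTS %I ON %s (some_column)',
                           r.relname || '_some_column_idx', r.tbl);
        END LOOP;
    END
    $$;

The same loop works for CREATE UNIQUE INDEX on the deduplication columns, with the caveat from the question itself: each such index only guarantees uniqueness within its own leaf, not across the whole partition tree.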
[ { "msg_contents": "I'm wondering if there is anything I can tune in my PG 10.1 database to\navoid these errors:\n\n$ psql -f failing_query.sql\npsql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free pages\nCONTEXT: parallel worker\n\nI tried throttling back the number of parallel workers to just 2, that\ndidn't help.\n\nThe query is joining two views that each have 50 or so underlying queries,\nunioned, in them. Unfortunately due to an invalid index, it is sequence\nscanning some of the tables. I can't fix the indexes until a few create\nmaterialized view commands that are currently running (and have been\nrunning for 6 days) finish or I kill them, because they are holding a lock\nthat is blocking any attempt to reindex.\n\nSo that leaves me looking for some tunable (hopefully one that doesn't\nrequire a restart) which will fix this by adding sufficient resources to\nthe system to allow the dsa_allocate() to find enough (contiguous?) pages.\nMy system seems to have plenty of extra capacity.\n\nThere was a thread on pghackers in December where someone else was seeing a\nsimilar error, but couldn't reproduce it consistently. I've run the above\nquery hundreds of times over the last 24 hours, but just the one fails when\nI select just the right parameters - and fails every time I run it with\nthose parameters.\n\nIn that thread someone speculated it had to do with running many parallel\nbitmap heap scans in one query. I count 98 in the query plan.\n\nI'm hoping there is a \"magic X tunable\" which I just need to bump up a\nlittle to let queries like this run without the fatal failure.\n\nI'm wondering if there is anything I can tune in my PG 10.1 database to avoid these errors:$  psql -f failing_query.sqlpsql:failing_query.sql:46: ERROR:  dsa_allocate could not find 7 free pagesCONTEXT:  parallel workerI tried throttling back the number of parallel workers to just 2, that didn't help.The query is joining two views that each have 50 or so underlying queries, unioned, in them.  Unfortunately due to an invalid index, it is sequence scanning some of the tables.   I can't fix the indexes until a few create materialized view commands that are currently running (and have been running for 6 days) finish or I kill them, because they are holding a lock that is blocking any attempt to reindex.So that leaves me looking for some tunable (hopefully one that doesn't require a restart) which will fix this by adding sufficient resources to the system to allow the dsa_allocate() to find enough (contiguous?) pages.  My system seems to have plenty of extra capacity.There was a thread on pghackers in December where someone else was seeing a similar error, but couldn't reproduce it consistently.   I've run the above query hundreds of times over the last 24 hours, but just the one fails when I select just the right parameters - and fails every time I run it with those parameters.In that thread someone speculated it had to do with running many parallel bitmap heap scans in one query.  
I count 98 in the query plan.I'm hoping there is a \"magic X tunable\" which I just need to bump up a little to let queries like this run without the fatal failure.", "msg_date": "Mon, 29 Jan 2018 11:19:43 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "dsa_allocate() faliure" }, { "msg_contents": "Rick Otten <[email protected]> writes:\n> I'm wondering if there is anything I can tune in my PG 10.1 database to\n> avoid these errors:\n\n> $ psql -f failing_query.sql\n> psql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free pages\n> CONTEXT: parallel worker\n\nHmm. There's only one place in the source code that emits that message\ntext:\n\n /*\n * Ask the free page manager for a run of pages. This should always\n * succeed, since both get_best_segment and make_new_segment should\n * only return a non-NULL pointer if it actually contains enough\n * contiguous freespace. If it does fail, something in our backend\n * private state is out of whack, so use FATAL to kill the process.\n */\n if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n elog(FATAL,\n \"dsa_allocate could not find %zu free pages\", npages);\n\nNow maybe that comment is being unreasonably optimistic, but it sure\nappears that this is supposed to be a can't-happen case, in which case\nyou've found a bug.\n\ncc'ing the DSA authors for comment.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 29 Jan 2018 11:37:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Tue, Jan 30, 2018 at 5:37 AM, Tom Lane <[email protected]> wrote:\n> Rick Otten <[email protected]> writes:\n>> I'm wondering if there is anything I can tune in my PG 10.1 database to\n>> avoid these errors:\n>\n>> $ psql -f failing_query.sql\n>> psql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free pages\n>> CONTEXT: parallel worker\n>\n> Hmm. There's only one place in the source code that emits that message\n> text:\n>\n> /*\n> * Ask the free page manager for a run of pages. This should always\n> * succeed, since both get_best_segment and make_new_segment should\n> * only return a non-NULL pointer if it actually contains enough\n> * contiguous freespace. If it does fail, something in our backend\n> * private state is out of whack, so use FATAL to kill the process.\n> */\n> if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n> elog(FATAL,\n> \"dsa_allocate could not find %zu free pages\", npages);\n>\n> Now maybe that comment is being unreasonably optimistic, but it sure\n> appears that this is supposed to be a can't-happen case, in which case\n> you've found a bug.\n\nThis is probably the bug fixed here:\n\nhttps://www.postgresql.org/message-id/E1eQzIl-0004wM-K3%40gemulon.postgresql.org\n\nThat was back patched, so 10.2 will contain the fix. The bug was not\nin dsa.c itself, but in the parallel query code that mixed up DSA\nareas, corrupting them. The problem comes up when the query plan has\nmultiple Gather nodes (and a particular execution pattern) -- is that\nthe case here, in the EXPLAIN output? That seems plausible given the\ndescription of a 50-branch UNION. 
The only workaround until 10.2\nwould be to reduce max_parallel_workers_per_gather to 0 to prevent\nparallelism completely for this query.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Tue, 30 Jan 2018 09:52:43 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "If I do a \"set max_parallel_workers_per_gather=0;\" before I run the query\nin that session, it runs just fine.\nIf I set it to 2, the query dies with the dsa_allocate error.\n\nI'll use that as a work around until 10.2 comes out. Thanks! I have\nsomething that will help.\n\n\nOn Mon, Jan 29, 2018 at 3:52 PM, Thomas Munro <[email protected]\n> wrote:\n\n> On Tue, Jan 30, 2018 at 5:37 AM, Tom Lane <[email protected]> wrote:\n> > Rick Otten <[email protected]> writes:\n> >> I'm wondering if there is anything I can tune in my PG 10.1 database to\n> >> avoid these errors:\n> >\n> >> $ psql -f failing_query.sql\n> >> psql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free\n> pages\n> >> CONTEXT: parallel worker\n> >\n> > Hmm. There's only one place in the source code that emits that message\n> > text:\n> >\n> > /*\n> > * Ask the free page manager for a run of pages. This should\n> always\n> > * succeed, since both get_best_segment and make_new_segment\n> should\n> > * only return a non-NULL pointer if it actually contains enough\n> > * contiguous freespace. If it does fail, something in our\n> backend\n> > * private state is out of whack, so use FATAL to kill the\n> process.\n> > */\n> > if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n> > elog(FATAL,\n> > \"dsa_allocate could not find %zu free pages\", npages);\n> >\n> > Now maybe that comment is being unreasonably optimistic, but it sure\n> > appears that this is supposed to be a can't-happen case, in which case\n> > you've found a bug.\n>\n> This is probably the bug fixed here:\n>\n> https://www.postgresql.org/message-id/E1eQzIl-0004wM-K3%\n> 40gemulon.postgresql.org\n>\n> That was back patched, so 10.2 will contain the fix. The bug was not\n> in dsa.c itself, but in the parallel query code that mixed up DSA\n> areas, corrupting them. The problem comes up when the query plan has\n> multiple Gather nodes (and a particular execution pattern) -- is that\n> the case here, in the EXPLAIN output? That seems plausible given the\n> description of a 50-branch UNION. The only workaround until 10.2\n> would be to reduce max_parallel_workers_per_gather to 0 to prevent\n> parallelism completely for this query.\n>\n> --\n> Thomas Munro\n> http://www.enterprisedb.com\n>\n\nIf I do a \"set max_parallel_workers_per_gather=0;\" before I run the query in that session, it runs just fine.If I set it to 2, the query dies with the dsa_allocate error.I'll use that as a work around until 10.2 comes out.  Thanks!  I have something that will help.On Mon, Jan 29, 2018 at 3:52 PM, Thomas Munro <[email protected]> wrote:On Tue, Jan 30, 2018 at 5:37 AM, Tom Lane <[email protected]> wrote:\n> Rick Otten <[email protected]> writes:\n>> I'm wondering if there is anything I can tune in my PG 10.1 database to\n>> avoid these errors:\n>\n>> $  psql -f failing_query.sql\n>> psql:failing_query.sql:46: ERROR:  dsa_allocate could not find 7 free pages\n>> CONTEXT:  parallel worker\n>\n> Hmm.  There's only one place in the source code that emits that message\n> text:\n>\n>         /*\n>          * Ask the free page manager for a run of pages.  
This should always\n>          * succeed, since both get_best_segment and make_new_segment should\n>          * only return a non-NULL pointer if it actually contains enough\n>          * contiguous freespace.  If it does fail, something in our backend\n>          * private state is out of whack, so use FATAL to kill the process.\n>          */\n>         if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n>             elog(FATAL,\n>                  \"dsa_allocate could not find %zu free pages\", npages);\n>\n> Now maybe that comment is being unreasonably optimistic, but it sure\n> appears that this is supposed to be a can't-happen case, in which case\n> you've found a bug.\n\nThis is probably the bug fixed here:\n\nhttps://www.postgresql.org/message-id/E1eQzIl-0004wM-K3%40gemulon.postgresql.org\n\nThat was back patched, so 10.2 will contain the fix.  The bug was not\nin dsa.c itself, but in the parallel query code that mixed up DSA\nareas, corrupting them.  The problem comes up when the query plan has\nmultiple Gather nodes (and a particular execution pattern) -- is that\nthe case here, in the EXPLAIN output?  That seems plausible given the\ndescription of a 50-branch UNION.  The only workaround until 10.2\nwould be to reduce max_parallel_workers_per_gather to 0 to prevent\nparallelism completely for this query.\n\n--\nThomas Munro\nhttp://www.enterprisedb.com", "msg_date": "Mon, 29 Jan 2018 16:35:53 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": ">>dsa_allocate could not find 7 free pages\nI just this error message again on all of my worker nodes (I am using\nCitus 7.4 rel). The PG core is my own build of release_10_stable\n(10.4) out of GitHub on Ubuntu.\n\nWhat's the best way to debug this? I am running pre-production tests\nfor the next few days, so I could gather info. if necessary (I cannot\npinpoint a query to repro this yet, as we have 10K queries running\nconcurrently).\n\n\n\n\nOn Mon, Jan 29, 2018 at 1:35 PM, Rick Otten <[email protected]> wrote:\n> If I do a \"set max_parallel_workers_per_gather=0;\" before I run the query in\n> that session, it runs just fine.\n> If I set it to 2, the query dies with the dsa_allocate error.\n>\n> I'll use that as a work around until 10.2 comes out. Thanks! I have\n> something that will help.\n>\n>\n> On Mon, Jan 29, 2018 at 3:52 PM, Thomas Munro\n> <[email protected]> wrote:\n>>\n>> On Tue, Jan 30, 2018 at 5:37 AM, Tom Lane <[email protected]> wrote:\n>> > Rick Otten <[email protected]> writes:\n>> >> I'm wondering if there is anything I can tune in my PG 10.1 database to\n>> >> avoid these errors:\n>> >\n>> >> $ psql -f failing_query.sql\n>> >> psql:failing_query.sql:46: ERROR: dsa_allocate could not find 7 free\n>> >> pages\n>> >> CONTEXT: parallel worker\n>> >\n>> > Hmm. There's only one place in the source code that emits that message\n>> > text:\n>> >\n>> > /*\n>> > * Ask the free page manager for a run of pages. This should\n>> > always\n>> > * succeed, since both get_best_segment and make_new_segment\n>> > should\n>> > * only return a non-NULL pointer if it actually contains enough\n>> > * contiguous freespace. 
If it does fail, something in our\n>> > backend\n>> > * private state is out of whack, so use FATAL to kill the\n>> > process.\n>> > */\n>> > if (!FreePageManagerGet(segment_map->fpm, npages, &first_page))\n>> > elog(FATAL,\n>> > \"dsa_allocate could not find %zu free pages\", npages);\n>> >\n>> > Now maybe that comment is being unreasonably optimistic, but it sure\n>> > appears that this is supposed to be a can't-happen case, in which case\n>> > you've found a bug.\n>>\n>> This is probably the bug fixed here:\n>>\n>>\n>> https://www.postgresql.org/message-id/E1eQzIl-0004wM-K3%40gemulon.postgresql.org\n>>\n>> That was back patched, so 10.2 will contain the fix. The bug was not\n>> in dsa.c itself, but in the parallel query code that mixed up DSA\n>> areas, corrupting them. The problem comes up when the query plan has\n>> multiple Gather nodes (and a particular execution pattern) -- is that\n>> the case here, in the EXPLAIN output? That seems plausible given the\n>> description of a 50-branch UNION. The only workaround until 10.2\n>> would be to reduce max_parallel_workers_per_gather to 0 to prevent\n>> parallelism completely for this query.\n>>\n>> --\n>> Thomas Munro\n>> http://www.enterprisedb.com\n>\n>\n\n", "msg_date": "Tue, 22 May 2018 21:10:02 -0700", "msg_from": "Sand Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Wed, May 23, 2018 at 4:10 PM, Sand Stone <[email protected]> wrote:\n>>>dsa_allocate could not find 7 free pages\n> I just this error message again on all of my worker nodes (I am using\n> Citus 7.4 rel). The PG core is my own build of release_10_stable\n> (10.4) out of GitHub on Ubuntu.\n\nAt which commit ID?\n\nAll of your worker nodes... so this happened at the same time or at\ndifferent times? I don't know much about Citus -- do you mean that\nthese were separate PostgreSQL clusters, and they were all running the\nsame query and they all crashed like this?\n\n> What's the best way to debug this? I am running pre-production tests\n> for the next few days, so I could gather info. if necessary (I cannot\n> pinpoint a query to repro this yet, as we have 10K queries running\n> concurrently).\n\nAny chance of an EXPLAIN plan for the query that crashed like this?\nDo you know if it's using multiple Gather[Merge] nodes and parallel\nbitmap heap scans? Was it a regular backend process or a parallel\nworker process (or a Citus worker process, if that is a thing?) that\nraised the error?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Wed, 23 May 2018 16:44:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": ">> At which commit ID?\n83fcc615020647268bb129cbf86f7661feee6412 (5/6)\n\n>>do you mean that these were separate PostgreSQL clusters, and they were all running the same query and they all crashed like this?\nA few worker nodes, a table is hash partitioned by \"aTable.did\" by\nCitus, and further partitioned by PG10 by time range on field \"ts\". As\nfar as I could tell, Citus just does a query rewrite, and execute the\nsame type of queries to all nodes.\n\n>>so this happened at the same time or at different times?\nAt the same time. 
The queries are simple count and sum queries, here\nis the relevant part from one of the worker nodes:\n2018-05-23 01:24:01.492 UTC [130536] ERROR: dsa_allocate could not\nfind 7 free pages\n2018-05-23 01:24:01.492 UTC [130536] CONTEXT: parallel worker\nSTATEMENT: COPY (SELECT count(1) AS count, sum(worker_column_1) AS\nsum FROM (SELECT subquery.avg AS worker_column_1 FROM (SELECT\naTable.did, avg((aTable.sum OPERATOR(pg_catalog./)\n(aTable.count)::double precision)) AS avg FROM public.aTable_102117\naTable WHERE ((aTable.ts OPERATOR(pg_catalog.>=) '2018-04-25\n00:00:00+00'::timestamp with time zone) AND (aTable.ts\nOPERATOR(pg_catalog.<=) '2018-04-30 00:00:00+00'::timestamp with time\nzone) AND (aTable.v OPERATOR(pg_catalog.=) 12345)) GROUP BY\naTable.did) subquery) worker_subquery) TO STDOUT WITH (FORMAT binary)\n\n\n>> a parallel worker process\nI think this is more of PG10 parallel bg worker issue. I don't think\nCitus just lets each worker PG server do its own planning.\n\nI will try to do more experiments about this, and see if there is any\nspecific query to cause the parallel query execution to fail. As far\nas I can tell, the level of concurrency triggered this issue. That is\nexecuting 10s of queries as shown on the worker nodes, depending on\nthe stats, the PG10 core may or may not spawn more bg workers.\n\nThanks for your time!\n\n\n\n\n\nOn Tue, May 22, 2018 at 9:44 PM, Thomas Munro\n<[email protected]> wrote:\n> On Wed, May 23, 2018 at 4:10 PM, Sand Stone <[email protected]> wrote:\n>>>>dsa_allocate could not find 7 free pages\n>> I just this error message again on all of my worker nodes (I am using\n>> Citus 7.4 rel). The PG core is my own build of release_10_stable\n>> (10.4) out of GitHub on Ubuntu.\n>\n> At which commit ID?\n>\n> All of your worker nodes... so this happened at the same time or at\n> different times? I don't know much about Citus -- do you mean that\n> these were separate PostgreSQL clusters, and they were all running the\n> same query and they all crashed like this?\n>\n>> What's the best way to debug this? I am running pre-production tests\n>> for the next few days, so I could gather info. if necessary (I cannot\n>> pinpoint a query to repro this yet, as we have 10K queries running\n>> concurrently).\n>\n> Any chance of an EXPLAIN plan for the query that crashed like this?\n> Do you know if it's using multiple Gather[Merge] nodes and parallel\n> bitmap heap scans? Was it a regular backend process or a parallel\n> worker process (or a Citus worker process, if that is a thing?) that\n> raised the error?\n>\n> --\n> Thomas Munro\n> http://www.enterprisedb.com\n\n", "msg_date": "Wed, 23 May 2018 07:06:41 -0700", "msg_from": "Sand Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Just as a follow up. I tried the parallel execution again (in a stress\ntest environment). Now the crash seems gone. I will keep an eye on\nthis for the next few weeks.\n\nMy theory is that the Citus cluster created and shut down a lot of TCP\nconnections between coordinator and workers. 
If running on untuned\nLinux machines, the TCP ports might run out.\n\nOf course, I am using \"newer\" PG10 bits and Citus7.5 this time.\nOn Wed, May 23, 2018 at 7:06 AM Sand Stone <[email protected]> wrote:\n>\n> >> At which commit ID?\n> 83fcc615020647268bb129cbf86f7661feee6412 (5/6)\n>\n> >>do you mean that these were separate PostgreSQL clusters, and they were all running the same query and they all crashed like this?\n> A few worker nodes, a table is hash partitioned by \"aTable.did\" by\n> Citus, and further partitioned by PG10 by time range on field \"ts\". As\n> far as I could tell, Citus just does a query rewrite, and execute the\n> same type of queries to all nodes.\n>\n> >>so this happened at the same time or at different times?\n> At the same time. The queries are simple count and sum queries, here\n> is the relevant part from one of the worker nodes:\n> 2018-05-23 01:24:01.492 UTC [130536] ERROR: dsa_allocate could not\n> find 7 free pages\n> 2018-05-23 01:24:01.492 UTC [130536] CONTEXT: parallel worker\n> STATEMENT: COPY (SELECT count(1) AS count, sum(worker_column_1) AS\n> sum FROM (SELECT subquery.avg AS worker_column_1 FROM (SELECT\n> aTable.did, avg((aTable.sum OPERATOR(pg_catalog./)\n> (aTable.count)::double precision)) AS avg FROM public.aTable_102117\n> aTable WHERE ((aTable.ts OPERATOR(pg_catalog.>=) '2018-04-25\n> 00:00:00+00'::timestamp with time zone) AND (aTable.ts\n> OPERATOR(pg_catalog.<=) '2018-04-30 00:00:00+00'::timestamp with time\n> zone) AND (aTable.v OPERATOR(pg_catalog.=) 12345)) GROUP BY\n> aTable.did) subquery) worker_subquery) TO STDOUT WITH (FORMAT binary)\n>\n>\n> >> a parallel worker process\n> I think this is more of PG10 parallel bg worker issue. I don't think\n> Citus just lets each worker PG server do its own planning.\n>\n> I will try to do more experiments about this, and see if there is any\n> specific query to cause the parallel query execution to fail. As far\n> as I can tell, the level of concurrency triggered this issue. That is\n> executing 10s of queries as shown on the worker nodes, depending on\n> the stats, the PG10 core may or may not spawn more bg workers.\n>\n> Thanks for your time!\n>\n>\n>\n>\n>\n> On Tue, May 22, 2018 at 9:44 PM, Thomas Munro\n> <[email protected]> wrote:\n> > On Wed, May 23, 2018 at 4:10 PM, Sand Stone <[email protected]> wrote:\n> >>>>dsa_allocate could not find 7 free pages\n> >> I just this error message again on all of my worker nodes (I am using\n> >> Citus 7.4 rel). The PG core is my own build of release_10_stable\n> >> (10.4) out of GitHub on Ubuntu.\n> >\n> > At which commit ID?\n> >\n> > All of your worker nodes... so this happened at the same time or at\n> > different times? I don't know much about Citus -- do you mean that\n> > these were separate PostgreSQL clusters, and they were all running the\n> > same query and they all crashed like this?\n> >\n> >> What's the best way to debug this? I am running pre-production tests\n> >> for the next few days, so I could gather info. if necessary (I cannot\n> >> pinpoint a query to repro this yet, as we have 10K queries running\n> >> concurrently).\n> >\n> > Any chance of an EXPLAIN plan for the query that crashed like this?\n> > Do you know if it's using multiple Gather[Merge] nodes and parallel\n> > bitmap heap scans? Was it a regular backend process or a parallel\n> > worker process (or a Citus worker process, if that is a thing?) 
that\n> > raised the error?\n> >\n> > --\n> > Thomas Munro\n> > http://www.enterprisedb.com\n\n", "msg_date": "Wed, 15 Aug 2018 13:32:45 -0700", "msg_from": "Sand Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Thu, Aug 16, 2018 at 8:32 AM, Sand Stone <[email protected]> wrote:\n> Just as a follow up. I tried the parallel execution again (in a stress\n> test environment). Now the crash seems gone. I will keep an eye on\n> this for the next few weeks.\n\nThanks for the report. That's great news, but it'd be good to\nunderstand why it was happening.\n\n> My theory is that the Citus cluster created and shut down a lot of TCP\n> connections between coordinator and workers. If running on untuned\n> Linux machines, the TCP ports might run out.\n\nI'm not sure how that's relevant, unless perhaps it causes executor\nnodes to be invoked in a strange sequence that commit fd7c0fa7 didn't\nfix? I wonder if there could be something different about the control\nflow with custom scans, or something about the way Citus worker nodes\ninvoke plan fragments, or some error path that I failed to consider...\nIt's a clue that all of your worker nodes reliably crashed at the same\ntime on the same/similar queries (presumably distributed query\nfragments for different shards), making it seem more like a\ncommon-or-garden bug rather than some kind of timing-based heisenbug.\nIf you ever manage to reproduce it, an explain plan and a back trace\nwould be very useful.\n\n> Of course, I am using \"newer\" PG10 bits and Citus7.5 this time.\n\nHmm. There weren't any relevant commits to REL_10_STABLE that I can\nthink of. And (with the proviso that I know next to nothing about\nCitus) I just cloned https://github.com/citusdata/citus.git and\nskimmed through \"git diff origin/release-7.4..origin/release-7.5\", and\nnothing is jumping out at me. Can you still see the problem with\nCitus 7.4?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Thu, 16 Aug 2018 10:42:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": ">Can you still see the problem with Citus 7.4?\nHi, Thomas. I actually went back to the cluster with Citus7.4 and\nPG10.4. And modified the parallel param. So far, I haven't seen any\nserver crash.\n\nThe main difference between crashes observed and no crash, is the set\nof Linux TCP time out parameters (to release the ports faster).\nUnfortunately, I cannot \"undo\" the Linux params and run the stress\ntests anymore, as this is a multi-million $ cluster and people are\ndoing more useful things on it. I will keep an eye on any parallel\nexecution issue.\n\n\nOn Wed, Aug 15, 2018 at 3:43 PM Thomas Munro\n<[email protected]> wrote:\n>\n> On Thu, Aug 16, 2018 at 8:32 AM, Sand Stone <[email protected]> wrote:\n> > Just as a follow up. I tried the parallel execution again (in a stress\n> > test environment). Now the crash seems gone. I will keep an eye on\n> > this for the next few weeks.\n>\n> Thanks for the report. That's great news, but it'd be good to\n> understand why it was happening.\n>\n> > My theory is that the Citus cluster created and shut down a lot of TCP\n> > connections between coordinator and workers. 
If running on untuned\n> > Linux machines, the TCP ports might run out.\n>\n> I'm not sure how that's relevant, unless perhaps it causes executor\n> nodes to be invoked in a strange sequence that commit fd7c0fa7 didn't\n> fix? I wonder if there could be something different about the control\n> flow with custom scans, or something about the way Citus worker nodes\n> invoke plan fragments, or some error path that I failed to consider...\n> It's a clue that all of your worker nodes reliably crashed at the same\n> time on the same/similar queries (presumably distributed query\n> fragments for different shards), making it seem more like a\n> common-or-garden bug rather than some kind of timing-based heisenbug.\n> If you ever manage to reproduce it, an explain plan and a back trace\n> would be very useful.\n>\n> > Of course, I am using \"newer\" PG10 bits and Citus7.5 this time.\n>\n> Hmm. There weren't any relevant commits to REL_10_STABLE that I can\n> think of. And (with the proviso that I know next to nothing about\n> Citus) I just cloned https://github.com/citusdata/citus.git and\n> skimmed through \"git diff origin/release-7.4..origin/release-7.5\", and\n> nothing is jumping out at me. Can you still see the problem with\n> Citus 7.4?\n>\n> --\n> Thomas Munro\n> http://www.enterprisedb.com\n\n", "msg_date": "Sat, 25 Aug 2018 07:46:32 -0700", "msg_from": "Sand Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "I attached a query (and its query plan) that caused the crash:\n\"dsa_allocate could not find 13 free pages\" on one of the worker nodes. I\nanonymised the query text a bit. Interestingly, this time only one (same\none) of the nodes is crashing. Since this is a production environment, I\ncannot get the stack trace. Once turned off parallel execution for this\nnode. The whole query finished just fine. So the parallel query plan is\nfrom one of the nodes not crashed, hopefully the same plan would have been\nexecuted on the crashed node. In theory, every worker node has the same\nbits, and very similar data.\n\n===\npsql (10.4)\n\\dx\n List of installed extensions\n Name | Version | Schema | Description\n----------------+---------+------------+-----------------------------------\n citus | 7.4-3 | pg_catalog | Citus distributed database\n hll | 2.10 | public | type for storing hyperloglog data\nplpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language\n\n\nOn Sat, Aug 25, 2018 at 7:46 AM Sand Stone <[email protected]> wrote:\n\n> >Can you still see the problem with Citus 7.4?\n> Hi, Thomas. I actually went back to the cluster with Citus7.4 and\n> PG10.4. And modified the parallel param. So far, I haven't seen any\n> server crash.\n>\n> The main difference between crashes observed and no crash, is the set\n> of Linux TCP time out parameters (to release the ports faster).\n> Unfortunately, I cannot \"undo\" the Linux params and run the stress\n> tests anymore, as this is a multi-million $ cluster and people are\n> doing more useful things on it. I will keep an eye on any parallel\n> execution issue.\n>\n>\n> On Wed, Aug 15, 2018 at 3:43 PM Thomas Munro\n> <[email protected]> wrote:\n> >\n> > On Thu, Aug 16, 2018 at 8:32 AM, Sand Stone <[email protected]>\n> wrote:\n> > > Just as a follow up. I tried the parallel execution again (in a stress\n> > > test environment). Now the crash seems gone. I will keep an eye on\n> > > this for the next few weeks.\n> >\n> > Thanks for the report. 
That's great news, but it'd be good to\n> > understand why it was happening.\n> >\n> > > My theory is that the Citus cluster created and shut down a lot of TCP\n> > > connections between coordinator and workers. If running on untuned\n> > > Linux machines, the TCP ports might run out.\n> >\n> > I'm not sure how that's relevant, unless perhaps it causes executor\n> > nodes to be invoked in a strange sequence that commit fd7c0fa7 didn't\n> > fix? I wonder if there could be something different about the control\n> > flow with custom scans, or something about the way Citus worker nodes\n> > invoke plan fragments, or some error path that I failed to consider...\n> > It's a clue that all of your worker nodes reliably crashed at the same\n> > time on the same/similar queries (presumably distributed query\n> > fragments for different shards), making it seem more like a\n> > common-or-garden bug rather than some kind of timing-based heisenbug.\n> > If you ever manage to reproduce it, an explain plan and a back trace\n> > would be very useful.\n> >\n> > > Of course, I am using \"newer\" PG10 bits and Citus7.5 this time.\n> >\n> > Hmm. There weren't any relevant commits to REL_10_STABLE that I can\n> > think of. And (with the proviso that I know next to nothing about\n> > Citus) I just cloned https://github.com/citusdata/citus.git and\n> > skimmed through \"git diff origin/release-7.4..origin/release-7.5\", and\n> > nothing is jumping out at me. Can you still see the problem with\n> > Citus 7.4?\n> >\n> > --\n> > Thomas Munro\n> > http://www.enterprisedb.com\n>", "msg_date": "Tue, 28 Aug 2018 21:44:07 -0700", "msg_from": "Sand Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Wed, Aug 29, 2018 at 5:48 PM Sand Stone <[email protected]> wrote:\n> I attached a query (and its query plan) that caused the crash: \"dsa_allocate could not find 13 free pages\" on one of the worker nodes. I anonymised the query text a bit. Interestingly, this time only one (same one) of the nodes is crashing. Since this is a production environment, I cannot get the stack trace. Once turned off parallel execution for this node. The whole query finished just fine. So the parallel query plan is from one of the nodes not crashed, hopefully the same plan would have been executed on the crashed node. In theory, every worker node has the same bits, and very similar data.\n\nI wonder if this was a different symptom of the problem fixed here:\n\nhttps://www.postgresql.org/message-id/flat/194c0706-c65b-7d81-ab32-2c248c3e2344%402ndquadrant.com\n\nCan you still reproduce it on current master, REL_11_STABLE or REL_10_STABLE?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Fri, 5 Oct 2018 15:16:41 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hello,\r\n\r\nI'm not sure whether this is connected at all, but I'm facing the same error with a generated query on postgres 10.6.\r\nIt works with parallel query disabled and gives \"dsa_allocate could not find 7 free pages\" otherwise.\r\n\r\nI've attached query and strace. The table is partitioned on (o, date). It's not depended on the precise lists I'm using, while it obviously does depend on the fact that the optimizer chooses a parallel query. 
\r\n\r\nRegards\r\nArne Roland\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <[email protected]> \r\nSent: Friday, October 5, 2018 4:17 AM\r\nTo: Sand Stone <[email protected]>\r\nCc: Rick Otten <[email protected]>; Tom Lane <[email protected]>; [email protected]; Robert Haas <[email protected]>\r\nSubject: Re: dsa_allocate() faliure\r\n\r\nOn Wed, Aug 29, 2018 at 5:48 PM Sand Stone <[email protected]> wrote:\r\n> I attached a query (and its query plan) that caused the crash: \"dsa_allocate could not find 13 free pages\" on one of the worker nodes. I anonymised the query text a bit. Interestingly, this time only one (same one) of the nodes is crashing. Since this is a production environment, I cannot get the stack trace. Once turned off parallel execution for this node. The whole query finished just fine. So the parallel query plan is from one of the nodes not crashed, hopefully the same plan would have been executed on the crashed node. In theory, every worker node has the same bits, and very similar data.\r\n\r\nI wonder if this was a different symptom of the problem fixed here:\r\n\r\nhttps://www.postgresql.org/message-id/flat/194c0706-c65b-7d81-ab32-2c248c3e2344%402ndquadrant.com\r\n\r\nCan you still reproduce it on current master, REL_11_STABLE or REL_10_STABLE?\r\n\r\n-- \r\nThomas Munro\r\nhttp://www.enterprisedb.com", "msg_date": "Thu, 24 Jan 2019 14:44:41 +0000", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": false, "msg_subject": "RE: dsa_allocate() faliure" }, { "msg_contents": "Hello,\r\n\r\ndoes anybody have any idea what goes wrong here? Is there some additional information that could be helpful?\r\n\r\nAll the best\r\nArne Roland\r\n", "msg_date": "Mon, 28 Jan 2019 13:50:50 +0000", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": false, "msg_subject": "RE: dsa_allocate() faliure" }, { "msg_contents": "On Tue, Jan 29, 2019 at 2:50 AM Arne Roland <[email protected]> wrote:\n> does anybody have any idea what goes wrong here? Is there some additional information that could be helpful?\n\nHi Arne,\n\nThis seems to be a bug; that error should not be reached. I wonder if\nit is a different manifestation of the bug reported as #15585 (ie some\ntype of rare corruption). Are you able to reproduce this\nconsistently? Can you please show the query plan?\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Tue, 29 Jan 2019 07:56:01 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hello,\n we are facing a similar issue on a Production system using a Postgresql 10.6:\n\norg.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ; ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\nrun_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7 free pages\n\n\nThe query reads remotely (via pl/proxy) tables containing a lot of data (up to millions or rows for each table/node) after a remote “group by\" returns to the caller “master” node only a few hundreds of rows from each “slave” node.\nThe tables are partitioned using the INHERITANCE method that we are using since years with no issue. All tables have the same columns structure and number, about 300 columns. 
In the query there are no join, only a variable set of partitions depending on the date range.\nThe “REMOTE FATAL” refers to the pl/proxy that runs on 2 different slaves, [a0] and [a1], nodes with identical configuration and database structure, but it seems to fail only on node [a1].\nWhen we get the error if we reduce the date range and therefore the quantity of data read, the error disappears, the same if we set max_parallel_workers_per_gather = 0.\nObviously we cannot force the user to use short periods of time to avoid the error and so we have disabled the parallel query feature for the time being.\nIt is difficult to reproduce the issue because not always the user gets the error, furthermore re-running the same query in different moments/days it usually works. It is a kind of weird.\nWe would like not to stop the Production system and upgrade it to PG11. And even though would this guarantee a permanent fix? \nAny suggestion? \n\n\nRegards,\nFabio Isabettini\nVoipfuture (Germany)\n\n\n\nThe failing node [a1] configuration:\n\nOS: Centos 7 kernerl 3.10.0-862.11.6.el7.x86_64\nPostgres: postgres-10.5-862.11.6.1\nRAM: 256 GB (The main server containing the master node and [a0] node, the slave that has no issue, has 384 GB of RAM)\nCPU cores: 32\n\nshared_buffers = 64GB\nmax_worker_processes = 32\nmax_parallel_workers_per_gather = 8\nmax_parallel_workers = 32\n\n\n> On 28. Jan 2019, at 19:56:01, Thomas Munro <[email protected]> wrote:\n> \n> On Tue, Jan 29, 2019 at 2:50 AM Arne Roland <[email protected]> wrote:\n>> does anybody have any idea what goes wrong here? Is there some additional information that could be helpful?\n> \n> Hi Arne,\n> \n> This seems to be a bug; that error should not be reached. I wonder if\n> it is a different manifestation of the bug reported as #15585 (ie some\n> type of rare corruption). Are you able to reproduce this\n> consistently? Can you please show the query plan?\n> \n> -- \n> Thomas Munro\n> http://www.enterprisedb.com\n> \n\n\nHello, we are facing a similar issue on a Production system using a Postgresql 10.6:org.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ; ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\nrun_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7 free pagesThe query reads remotely (via pl/proxy) tables containing a lot of data (up to millions or rows for each table/node) after a remote “group by\" returns to the caller “master” node only a few hundreds of rows from each “slave” node.The tables are partitioned using the INHERITANCE method that we are using since years with no issue. All tables have the same columns structure and number, about 300 columns. In the query there are no join, only a variable set of partitions depending on the date range.The “REMOTE FATAL” refers to the pl/proxy that runs on 2 different slaves, [a0] and [a1], nodes with identical configuration and database structure, but it seems to fail only on node [a1].When we get the error if we reduce the date range and therefore the quantity of data read, the error disappears, the same if we set max_parallel_workers_per_gather = 0.Obviously we cannot force the user to use short periods of time to avoid the error and so we have disabled the parallel query feature for the time being.It is difficult to reproduce the issue because not always the user gets the error, furthermore re-running the same query in different moments/days it usually works. 
It is a kind of weird.We would like not to stop the Production system and upgrade it to PG11. And even though would this guarantee a permanent fix? Any suggestion? Regards,Fabio IsabettiniVoipfuture (Germany)The failing node [a1] configuration:OS: Centos 7 kernerl 3.10.0-862.11.6.el7.x86_64Postgres: postgres-10.5-862.11.6.1RAM: 256 GB (The main server containing the master node and [a0] node, the slave that has no issue, has 384 GB of RAM)CPU cores: 32shared_buffers = 64GBmax_worker_processes = 32max_parallel_workers_per_gather = 8max_parallel_workers = 32On 28. Jan 2019, at 19:56:01, Thomas Munro <[email protected]> wrote:On Tue, Jan 29, 2019 at 2:50 AM Arne Roland <[email protected]> wrote:does anybody have any idea what goes wrong here? Is there some additional information that could be helpful?Hi Arne,This seems to be a bug; that error should not be reached.  I wonder ifit is a different manifestation of the bug reported as #15585 (ie sometype of rare corruption).  Are you able to reproduce thisconsistently?  Can you please show the query plan?-- Thomas Munrohttp://www.enterprisedb.com", "msg_date": "Tue, 29 Jan 2019 12:32:47 +0100", "msg_from": "Fabio Isabettini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Tue, Jan 29, 2019 at 10:32 PM Fabio Isabettini\n<[email protected]> wrote:\n> we are facing a similar issue on a Production system using a Postgresql 10.6:\n>\n> org.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ; ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\n> run_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7 free pages\n\n> We would like not to stop the Production system and upgrade it to PG11. And even though would this guarantee a permanent fix?\n> Any suggestion?\n\nHi Fabio,\n\nThanks for your report. Could you please also show the query plan\nthat runs on the \"remote\" node (where the error occurred)?\n\nThere is no indication that upgrading to PG11 would help here. It\nseems we have an undiagnosed bug (in 10 and 11), and so far no one has\nbeen able to reproduce it at will. I personally have chewed a lot of\nCPU time on several machines trying various plan shapes and not seen\nthis or the possibly related symptom from bug #15585 even once. But\nwe have about three reports of each of the two symptoms. One reporter\nwrote to me off-list to say that they'd seen #15585 twice, the second\ntime by running the same query in a tight loop for 8 hours, and then\nnot seen it again in the past 3 weeks. Clearly there is issue needing\na fix here, but I don't yet know what it is.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Wed, 30 Jan 2019 14:13:14 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hi Thomas,\nit is a Production system and we don’t have permanent access to it.\nAlso to have an auto_explain feature always on, is not an option in production.\nI will ask the customer to give us notice asap the error present itself to connect immediately and try to get a query plan.\n\nRegards\n\nFabio Isabettini\nwww.voipfuture.com \n\n> On 30. 
Jan 2019, at 04:13:14, Thomas Munro <[email protected]> wrote:\n> \n> On Tue, Jan 29, 2019 at 10:32 PM Fabio Isabettini\n> <[email protected]> wrote:\n>> we are facing a similar issue on a Production system using a Postgresql 10.6:\n>> \n>> org.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ; ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\n>> run_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7 free pages\n> \n>> We would like not to stop the Production system and upgrade it to PG11. And even though would this guarantee a permanent fix?\n>> Any suggestion?\n> \n> Hi Fabio,\n> \n> Thanks for your report. Could you please also show the query plan\n> that runs on the \"remote\" node (where the error occurred)?\n> \n> There is no indication that upgrading to PG11 would help here. It\n> seems we have an undiagnosed bug (in 10 and 11), and so far no one has\n> been able to reproduce it at will. I personally have chewed a lot of\n> CPU time on several machines trying various plan shapes and not seen\n> this or the possibly related symptom from bug #15585 even once. But\n> we have about three reports of each of the two symptoms. One reporter\n> wrote to me off-list to say that they'd seen #15585 twice, the second\n> time by running the same query in a tight loop for 8 hours, and then\n> not seen it again in the past 3 weeks. Clearly there is issue needing\n> a fix here, but I don't yet know what it is.\n> \n> -- \n> Thomas Munro\n> http://www.enterprisedb.com\n> \n\n\n\n", "msg_date": "Wed, 30 Jan 2019 10:53:24 +0100", "msg_from": "Fabio Isabettini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hi Thomas,\r\n\r\nthis is reproducible, while it's highly sensitive to the change of plans (i.e. the precise querys that do break change with every new analyze). Disabling parallel query seems to solve the problem (as expected).\r\nAt some point even the simple query\r\nselect count(*) from test_tab where (o = '0' and date >= '30.01.2019'::date-interval '14 days' or o = '1' and date >= '30.01.2019'::date-interval '14 days') and coalesce(fid,fallback) >=6 and coalesce(fid,fallback) <=6\r\nwas reported to fail (with the same error) at the live database, but I wasn't able to obtain a plan, since it works again with the current live data (maybe autoanalyze is at fault here). \r\nThe table test_tab has roughly 70 children that inherit from it. The children and the corresponding indexes should be named like '%part%'.\r\n\r\nI attached a query with a plan that fails on my test database.\r\n\r\nI don't want to rule out the possibility that it could be related to #15585; at least both issues seem to be related to Parallel Bitmap and inheritance/partitioned tables, but the error occurs relatively quickly in my case and every one of my processes (the children and the master) are failing with 'FATAL: dsa_allocate could not find 7 free pages'.\r\n\r\nRegards\r\nArne", "msg_date": "Thu, 31 Jan 2019 18:19:54 +0000", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": false, "msg_subject": "RE: dsa_allocate() faliure" }, { "msg_contents": "On Thu, Jan 31, 2019 at 06:19:54PM +0000, Arne Roland wrote:\n> this is reproducible, while it's highly sensitive to the change of plans (i.e. the precise querys that do break change with every new analyze). 
Disabling parallel query seems to solve the problem (as expected).\n> At some point even the simple query\n> select count(*) from test_tab where (o = '0' and date >= '30.01.2019'::date-interval '14 days' or o = '1' and date >= '30.01.2019'::date-interval '14 days') and coalesce(fid,fallback) >=6 and coalesce(fid,fallback) <=6\n> was reported to fail (with the same error) at the live database, but I wasn't able to obtain a plan, since it works again with the current live data (maybe autoanalyze is at fault here). \n> The table test_tab has roughly 70 children that inherit from it. The children and the corresponding indexes should be named like '%part%'.\n> \n> I attached a query with a plan that fails on my test database.\n\nThanks - note that previously Thomas said:\n\nOn Mon, Dec 03, 2018 at 11:45:00AM +1300, Thomas Munro wrote:\n> On Sat, Dec 1, 2018 at 9:46 AM Justin Pryzby <[email protected]> wrote:\n> > elog(FATAL,\n> > \"dsa_allocate could not find %zu free pages\", npages);\n> > + abort()\n> \n> If anyone can reproduce this problem with a debugger, it'd be\n> interesting to see the output of dsa_dump(area), and\n> FreePageManagerDump(segment_map->fpm). This error condition means\n\nAre you able to cause the error in a test/devel/non-production environment to\nrun under a debugger, or could you compile with \"abort();\" after that elog() to\nsave a corefile ?\n\nJustin\n\n", "msg_date": "Fri, 1 Feb 2019 12:08:11 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hi Thomas,\nI was one of the reporter in the early Dec last year.\nI somehow dropped the ball and forgot about the issue.\nAnyhow I upgraded the clusters to pg11.1 and nothing changed. I also have a\nrule to coredump but a segfault does not happen while this is occurring.\nI see the error showing up every night on 2 different servers. But it's a\nbit of a heisenbug because If I go there now it won't be reproducible.\nIt was suggested by Justin Pryzby that I recompile pg src with his patch\nthat would cause a coredump.\nBut I don't feel comfortable doing this especially if I would have to run\nthis with prod data.\nMy question is. Can I do anything like increasing logging level or enable\nsome additional options?\nIt's a production server but I'm willing to sacrifice a bit of it's\nperformance if that would help.\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Wed, Jan 30, 2019 at 4:13 AM Thomas Munro <[email protected]>\nwrote:\n\n> On Tue, Jan 29, 2019 at 10:32 PM Fabio Isabettini\n> <[email protected]> wrote:\n> > we are facing a similar issue on a Production system using a Postgresql\n> 10.6:\n> >\n> > org.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ;\n> ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\n> > run_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7\n> free pages\n>\n> > We would like not to stop the Production system and upgrade it to PG11.\n> And even though would this guarantee a permanent fix?\n> > Any suggestion?\n>\n> Hi Fabio,\n>\n> Thanks for your report. Could you please also show the query plan\n> that runs on the \"remote\" node (where the error occurred)?\n>\n> There is no indication that upgrading to PG11 would help here. It\n> seems we have an undiagnosed bug (in 10 and 11), and so far no one has\n> been able to reproduce it at will. 
I personally have chewed a lot of\n> CPU time on several machines trying various plan shapes and not seen\n> this or the possibly related symptom from bug #15585 even once. But\n> we have about three reports of each of the two symptoms. One reporter\n> wrote to me off-list to say that they'd seen #15585 twice, the second\n> time by running the same query in a tight loop for 8 hours, and then\n> not seen it again in the past 3 weeks. Clearly there is issue needing\n> a fix here, but I don't yet know what it is.\n>\n> --\n> Thomas Munro\n> http://www.enterprisedb.com\n>\n>\n\nHi Thomas, I was one of the reporter in the early Dec last year. I somehow dropped the ball and forgot about the issue. Anyhow I upgraded the clusters to pg11.1 and nothing changed. I also have a rule to coredump but a segfault does not happen while this is occurring.I see the error showing up every night on 2 different servers. But it's a bit of a heisenbug because If I go there now it won't be reproducible.It was suggested by Justin Pryzby that I recompile pg src with his patch that would cause a coredump. But I don't feel comfortable doing this especially if I would have to run this with prod data.My question is. Can I do anything like increasing logging level or enable some additional options? It's a production server but I'm willing to sacrifice a bit of it's performance if that would help.--regards,pozdrawiam,Jakub GlapaOn Wed, Jan 30, 2019 at 4:13 AM Thomas Munro <[email protected]> wrote:On Tue, Jan 29, 2019 at 10:32 PM Fabio Isabettini\n<[email protected]> wrote:\n>  we are facing a similar issue on a Production system using a Postgresql 10.6:\n>\n> org.postgresql.util.PSQLException: ERROR: EXCEPTION on getstatistics ; ID: EXCEPTION on getstatistics_media ; ID: uidatareader.\n> run_query_media(2): [a1] REMOTE FATAL: dsa_allocate could not find 7 free pages\n\n> We would like not to stop the Production system and upgrade it to PG11. And even though would this guarantee a permanent fix?\n> Any suggestion?\n\nHi Fabio,\n\nThanks for your report.  Could you please also show the query plan\nthat runs on the \"remote\" node (where the error occurred)?\n\nThere is no indication that upgrading to PG11 would help here.  It\nseems we have an undiagnosed bug (in 10 and 11), and so far no one has\nbeen able to reproduce it at will.  I personally have chewed a lot of\nCPU time on several machines trying various plan shapes and not seen\nthis or the possibly related symptom from bug #15585 even once.  But\nwe have about three reports of each of the two symptoms.  One reporter\nwrote to me off-list to say that they'd seen #15585 twice, the second\ntime by running the same query in a tight loop for 8 hours, and then\nnot seen it again in the past 3 weeks.  Clearly there is issue needing\na fix here, but I don't yet know what it is.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com", "msg_date": "Mon, 4 Feb 2019 08:52:17 +0100", "msg_from": "Jakub Glapa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 4, 2019 at 6:52 PM Jakub Glapa <[email protected]> wrote:\n> I see the error showing up every night on 2 different servers. But it's a bit of a heisenbug because If I go there now it won't be reproducible.\n\nHuh. Ok well that's a lot more frequent that I thought. Is it always\nthe same query? Any chance you can get the plan? 
Are there more\nthings going on on the server, like perhaps concurrent parallel\nqueries?\n\n> It was suggested by Justin Pryzby that I recompile pg src with his patch that would cause a coredump.\n\nSmall correction to Justin's suggestion: don't abort() after\nelog(ERROR, ...), it'll never be reached.\n\n> But I don't feel comfortable doing this especially if I would have to run this with prod data.\n> My question is. Can I do anything like increasing logging level or enable some additional options?\n> It's a production server but I'm willing to sacrifice a bit of it's performance if that would help.\n\nIf you're able to run a throwaway copy of your production database on\nanother system that you don't have to worry about crashing, you could\njust replace ERROR with PANIC and run a high-speed loop of the query\nthat crashed in product, or something. This might at least tell us\nwhether it's reach that condition via something dereferencing a\ndsa_pointer or something manipulating the segment lists while\nallocating/freeing.\n\nIn my own 100% unsuccessful attempts to reproduce this I was mostly\nrunning the same query (based on my guess at what ingredients are\nneeded), but perhaps it requires a particular allocation pattern that\nwill require more randomness to reach... hmm.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Mon, 4 Feb 2019 19:22:28 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "It's definitely a quite a relatively complex pattern. The query I set you last time was minimal with respect to predicates (so removing any single one of the predicates converted that one into a working query).\r\n> Huh. Ok well that's a lot more frequent that I thought. Is it always the same query? Any chance you can get the plan? Are there more things going on on the server, like perhaps concurrent parallel queries?\r\nI had this bug occurring while I was the only one working on the server. I checked there was just one transaction with a snapshot at all and it was a autovacuum busy with a totally unrelated relation my colleague was working on.\r\n\r\nThe bug is indeed behaving like a ghost.\r\nOne child relation needed a few new rows to test a particular application a colleague of mine was working on. The insert triggered an autoanalyze and the explain changed slightly:\r\nBesides row and cost estimates the change is that the line\r\nRecheck Cond: (((COALESCE((fid)::bigint, fallback) ) >= 1) AND ((COALESCE((fid)::bigint, fallback) ) <= 1) AND (gid && '{853078,853080,853082}'::integer[]))\r\nis now \r\nRecheck Cond: ((gid && '{853078,853080,853082}'::integer[]) AND ((COALESCE((fid)::bigint, fallback) ) >= 1) AND ((COALESCE((fid)::bigint, fallback) ) <= 1))\r\nand the error vanished.\r\n\r\nI could try to hunt down another query by assembling seemingly random queries. I don't see a very clear pattern from the queries aborting with this error on our production servers. I'm not surprised that bug is had to chase on production servers. They usually are quite lively.\r\n\r\n>If you're able to run a throwaway copy of your production database on another system that you don't have to worry about crashing, you could just replace ERROR with PANIC and run a high-speed loop of the query that crashed in product, or something. 
This might at least tell us whether it's reach that condition via something dereferencing a dsa_pointer or something manipulating the segment lists while allocating/freeing.\r\n\r\nI could take a backup and restore the relevant tables on a throwaway system. You are just suggesting to replace line 728\r\nelog(FATAL,\r\n \"dsa_allocate could not find %zu free pages\", npages);\r\nby\r\nelog(PANIC,\r\n \"dsa_allocate could not find %zu free pages\", npages);\r\ncorrect? Just for my understanding: why would the shutdown of the whole instance create more helpful logging?\r\n\r\nAll the best\r\nArne\r\n", "msg_date": "Mon, 4 Feb 2019 20:31:47 +0000", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": false, "msg_subject": "RE: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 04, 2019 at 08:31:47PM +0000, Arne Roland wrote:\n> I could take a backup and restore the relevant tables on a throwaway system. You are just suggesting to replace line 728\n> elog(FATAL,\n> \"dsa_allocate could not find %zu free pages\", npages);\n> by\n> elog(PANIC,\n> \"dsa_allocate could not find %zu free pages\", npages);\n> correct? Just for my understanding: why would the shutdown of the whole instance create more helpful logging?\n\nYou'd also start with pg_ctl -c, which would allow it to dump core, which could\nbe inspected with GDB to show a backtrace and other internals, which up to now\nnobody (including myself) has been able to provide.\n\nJustin\n\n", "msg_date": "Mon, 4 Feb 2019 15:47:08 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Moving to -hackers, hopefully it doesn't confuse the list scripts too much.\n\nOn Mon, Feb 04, 2019 at 08:52:17AM +0100, Jakub Glapa wrote:\n> I see the error showing up every night on 2 different servers. But it's a\n> bit of a heisenbug because If I go there now it won't be reproducible.\n\nDo you have query logging enabled ? If not, could you consider it on at least\none of those servers ? I'm interested to know what ELSE is running at the time\nthat query failed. \n\nPerhaps you could enable query logging JUST for the interval of time that the\nserver usually errors ? The CSV logs can be imported to postgres for analysis.\nYou might do something like SELECT left(message,99),COUNT(1),max(session_id) FROM postgres_log WHERE log_time BETWEEN .. AND .. GROUP BY 1 ORDER BY 2;\nAnd just maybe there'd be a query there that only runs once per day which would\nallow reproducing the error at will. 
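If it helps, the import side is just the standard csvlog recipe from the documentation (a sketch; the column list below is the documented v10/v11 csvlog layout, so double-check it against your server version, and the file path is only an example):

create table postgres_log (
    log_time timestamp(3) with time zone,
    user_name text,
    database_name text,
    process_id integer,
    connection_from text,
    session_id text,
    session_line_num bigint,
    command_tag text,
    session_start_time timestamp with time zone,
    virtual_transaction_id text,
    transaction_id bigint,
    error_severity text,
    sql_state_code text,
    message text,
    detail text,
    hint text,
    internal_query text,
    internal_query_pos integer,
    context text,
    query text,
    query_pos integer,
    location text,
    application_name text,
    primary key (session_id, session_line_num)
);

-- COPY reads the file on the server; use \copy from psql instead if you aren't superuser
copy postgres_log from '/var/log/postgresql/postgresql-2019-02-07_000000.csv' with csv;

-- then look at the window around the failure, e.g.:
select left(message,99), count(1), max(session_id)
  from postgres_log
 where log_time between '2019-02-07 02:00' and '2019-02-07 03:00'
 group by 1 order by 2;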
Or utility command like vacuum..\n\nI think ideally you'd set:\n\nlog_statement = all\nlog_min_messages = info\nlog_destination = 'stderr,csvlog'\n# stderr isn't important for this purpose, but I keep it set to capture crash messages, too\n\nYou should set these to something that works well at your site:\n\nlog_rotation_age = '2min'\nlog_rotation_size = '32MB'\n\nI would normally set these, and I don't see any reason why you wouldn't set\nthem too:\n\nlog_checkpoints = on\nlog_lock_waits = on\nlog_temp_files = on\nlog_min_error_statement = notice\nlog_temp_files = 0\nlog_min_duration_statement = '9sec'\nlog_autovacuum_min_duration = '999sec'\n\nAnd I would set these too but maybe you'd prefer to do something else:\n\nlog_directory = /var/log/postgresql\nlog_file_mode = 0640\nlog_filename = postgresql-%Y-%m-%d_%H%M%S.log\n\nJustin\n\n", "msg_date": "Wed, 6 Feb 2019 17:21:11 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "> Do you have query logging enabled ? If not, could you consider it on at\nleast\none of those servers ? I'm interested to know what ELSE is running at the\ntime\nthat query failed.\n\nOk, I have configured that and will enable in the time window when the\nerrors usually occur. I'll report as soon as I have something.\n\n\n--\nregards,\npozdrawiam,\nJakub Glapa\n\n\nOn Thu, Feb 7, 2019 at 12:21 AM Justin Pryzby <[email protected]> wrote:\n\n> Moving to -hackers, hopefully it doesn't confuse the list scripts too much.\n>\n> On Mon, Feb 04, 2019 at 08:52:17AM +0100, Jakub Glapa wrote:\n> > I see the error showing up every night on 2 different servers. But it's a\n> > bit of a heisenbug because If I go there now it won't be reproducible.\n>\n> Do you have query logging enabled ? If not, could you consider it on at\n> least\n> one of those servers ? I'm interested to know what ELSE is running at the\n> time\n> that query failed.\n>\n> Perhaps you could enable query logging JUST for the interval of time that\n> the\n> server usually errors ? The CSV logs can be imported to postgres for\n> analysis.\n> You might do something like SELECT\n> left(message,99),COUNT(1),max(session_id) FROM postgres_log WHERE log_time\n> BETWEEN .. AND .. GROUP BY 1 ORDER BY 2;\n> And just maybe there'd be a query there that only runs once per day which\n> would\n> allow reproducing the error at will. Or utility command like vacuum..\n>\n> I think ideally you'd set:\n>\n> log_statement = all\n> log_min_messages = info\n> log_destination = 'stderr,csvlog'\n> # stderr isn't important for this purpose, but I keep it set to capture\n> crash messages, too\n>\n> You should set these to something that works well at your site:\n>\n> log_rotation_age = '2min'\n> log_rotation_size = '32MB'\n>\n> I would normally set these, and I don't see any reason why you wouldn't set\n> them too:\n>\n> log_checkpoints = on\n> log_lock_waits = on\n> log_temp_files = on\n> log_min_error_statement = notice\n> log_temp_files = 0\n> log_min_duration_statement = '9sec'\n> log_autovacuum_min_duration = '999sec'\n>\n> And I would set these too but maybe you'd prefer to do something else:\n>\n> log_directory = /var/log/postgresql\n> log_file_mode = 0640\n> log_filename = postgresql-%Y-%m-%d_%H%M%S.log\n>\n> Justin\n>\n\n> Do you have query logging enabled ?  If not, could you consider it on at least\none of those servers ?  I'm interested to know what ELSE is running at the time\nthat query failed.  
Ok, I have configured that and will enable in the time window when the errors usually occur. I'll report as soon as I have something.--regards,pozdrawiam,Jakub GlapaOn Thu, Feb 7, 2019 at 12:21 AM Justin Pryzby <[email protected]> wrote:Moving to -hackers, hopefully it doesn't confuse the list scripts too much.\n\nOn Mon, Feb 04, 2019 at 08:52:17AM +0100, Jakub Glapa wrote:\n> I see the error showing up every night on 2 different servers. But it's a\n> bit of a heisenbug because If I go there now it won't be reproducible.\n\nDo you have query logging enabled ?  If not, could you consider it on at least\none of those servers ?  I'm interested to know what ELSE is running at the time\nthat query failed.  \n\nPerhaps you could enable query logging JUST for the interval of time that the\nserver usually errors ?  The CSV logs can be imported to postgres for analysis.\nYou might do something like SELECT left(message,99),COUNT(1),max(session_id) FROM postgres_log WHERE log_time BETWEEN .. AND .. GROUP BY 1 ORDER BY 2;\nAnd just maybe there'd be a query there that only runs once per day which would\nallow reproducing the error at will.  Or utility command like vacuum..\n\nI think ideally you'd set:\n\nlog_statement                = all\nlog_min_messages             = info\nlog_destination              = 'stderr,csvlog'\n# stderr isn't important for this purpose, but I keep it set to capture crash messages, too\n\nYou should set these to something that works well at your site:\n\nlog_rotation_age            = '2min'\nlog_rotation_size           = '32MB'\n\nI would normally set these, and I don't see any reason why you wouldn't set\nthem too:\n\nlog_checkpoints             = on\nlog_lock_waits              = on\nlog_temp_files              = on\nlog_min_error_statement     = notice\nlog_temp_files              = 0\nlog_min_duration_statement  = '9sec'\nlog_autovacuum_min_duration = '999sec'\n\nAnd I would set these too but maybe you'd prefer to do something else:\n\nlog_directory               = /var/log/postgresql\nlog_file_mode               = 0640\nlog_filename                = postgresql-%Y-%m-%d_%H%M%S.log\n\nJustin", "msg_date": "Thu, 7 Feb 2019 11:10:44 +0100", "msg_from": "Jakub Glapa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Thu, Feb 7, 2019 at 9:10 PM Jakub Glapa <[email protected]> wrote:\n> > Do you have query logging enabled ? If not, could you consider it on at least\n> one of those servers ? I'm interested to know what ELSE is running at the time\n> that query failed.\n>\n> Ok, I have configured that and will enable in the time window when the errors usually occur. I'll report as soon as I have something.\n\nI don't have the answer yet but I have some progress: I finally\nreproduced the \"could not find %d free pages\" error by running lots of\nconcurrent parallel queries. 
Will investigate.\n\nSet up:\n\ncreate table foo (p int, a int, b int) partition by list (p);\ncreate table foo_1 partition of foo for values in (1);\ncreate table foo_2 partition of foo for values in (2);\ncreate table foo_3 partition of foo for values in (3);\nalter table foo_1 set (parallel_workers = 4);\nalter table foo_2 set (parallel_workers = 4);\nalter table foo_3 set (parallel_workers = 4);\ninsert into foo\nselect generate_series(1, 10000000)::int % 3 + 1,\n generate_series(1, 10000000)::int % 50,\n generate_series(1, 10000000)::int % 50;\ncreate index on foo_1(a);\ncreate index on foo_2(a);\ncreate index on foo_3(a);\ncreate index on foo_1(b);\ncreate index on foo_2(b);\ncreate index on foo_3(b);\nanalyze;\n\nThen I ran three copies of :\n\n#!/bin/sh\n(\n echo \"set max_parallel_workers_per_gather = 4;\"\n for I in `seq 1 100000`; do\n echo \"explain analyze select count(*) from foo where a between 5\nand 6 or b between 5 and 6;\"\n done\n) | psql postgres\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Fri, 8 Feb 2019 04:49:05 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Fri, Feb 8, 2019 at 4:49 AM Thomas Munro\n<[email protected]> wrote:\n> I don't have the answer yet but I have some progress: I finally\n> reproduced the \"could not find %d free pages\" error by running lots of\n> concurrent parallel queries. Will investigate.\n\nSometimes FreeManagerPutInternal() returns a\nnumber-of-contiguous-pages-created-by-this-insertion that is too large\nby one. If this happens to be a new max-number-of-contiguous-pages,\nit causes trouble some arbitrary time later because the max is wrong\nand this FPM cannot satisfy a request that large, and it may not be\nrecomputed for some time because the incorrect value prevents\nrecomputation. Not sure yet if this is due to the lazy computation\nlogic or a plain old fence-post error in the btree consolidation code\nor something else.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Fri, 8 Feb 2019 13:29:27 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Fri, Feb 8, 2019 at 8:00 AM Thomas Munro\n<[email protected]> wrote:\n> Sometimes FreeManagerPutInternal() returns a\n> number-of-contiguous-pages-created-by-this-insertion that is too large\n> by one. If this happens to be a new max-number-of-contiguous-pages,\n> it causes trouble some arbitrary time later because the max is wrong\n> and this FPM cannot satisfy a request that large, and it may not be\n> recomputed for some time because the incorrect value prevents\n> recomputation. Not sure yet if this is due to the lazy computation\n> logic or a plain old fence-post error in the btree consolidation code\n> or something else.\n\nI spent a long time thinking about this and starting at code this\nafternoon, but I didn't really come up with much of anything useful.\nIt seems like a strange failure mode, because\nFreePageManagerPutInternal() normally just returns its third argument\nunmodified. The only cases where anything else happens are the ones\nwhere we're able to consolidate the returned span with a preceding or\nfollowing span, and I'm scratching my head as to how that logic could\nbe wrong, especially since it also has some Assert() statements that\nseem like they would detect the kinds of inconsistencies that would\nlead to trouble. 
For example, if we somehow ended up with two spans\nthat (improperly) overlapped, we'd trip an Assert(). And if that\ndidn't happen -- because we're not in an Assert-enabled build -- the\ncode is written so that it only relies on the npages value of the last\nof the consolidated scans, so an error in the npages value of one of\nthe earlier spans would just get fixed up.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Sat, 9 Feb 2019 15:51:12 +0530", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sat, Feb 9, 2019 at 9:21 PM Robert Haas <[email protected]> wrote:\n> On Fri, Feb 8, 2019 at 8:00 AM Thomas Munro\n> <[email protected]> wrote:\n> > Sometimes FreeManagerPutInternal() returns a\n> > number-of-contiguous-pages-created-by-this-insertion that is too large\n> > by one. [...]\n>\n> I spent a long time thinking about this and starting at code this\n> afternoon, but I didn't really come up with much of anything useful.\n> It seems like a strange failure mode, because\n> FreePageManagerPutInternal() normally just returns its third argument\n> unmodified. [...]\n\nBleugh. Yeah. What I said before wasn't quite right. The value\nreturned by FreePageManagerPutInternal() is actually correct at the\nmoment it is returned, but it ceases to be correct immediately\nafterwards if the following call to FreePageBtreeCleanup() happens to\nreduce the size of that particular span. The problem is that we\nclobber fpm->contiguous_pages with the earlier (and by now incorrect)\nvalue that we were holding in a local variable.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Sun, 10 Feb 2019 07:24:53 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 7:24 AM Thomas Munro\n<[email protected]> wrote:\n> On Sat, Feb 9, 2019 at 9:21 PM Robert Haas <[email protected]> wrote:\n> > On Fri, Feb 8, 2019 at 8:00 AM Thomas Munro\n> > <[email protected]> wrote:\n> > > Sometimes FreeManagerPutInternal() returns a\n> > > number-of-contiguous-pages-created-by-this-insertion that is too large\n> > > by one. [...]\n> >\n> > I spent a long time thinking about this and starting at code this\n> > afternoon, but I didn't really come up with much of anything useful.\n> > It seems like a strange failure mode, because\n> > FreePageManagerPutInternal() normally just returns its third argument\n> > unmodified. [...]\n>\n> Bleugh. Yeah. What I said before wasn't quite right. The value\n> returned by FreePageManagerPutInternal() is actually correct at the\n> moment it is returned, but it ceases to be correct immediately\n> afterwards if the following call to FreePageBtreeCleanup() happens to\n> reduce the size of that particular span.\n\n... but why would it do that? I can reproduce cases where (for\nexample) FreePageManagerPutInternal() returns 179, and then\nFreePageManagerLargestContiguous() returns 179, but then after\nFreePageBtreeCleanup() it returns 178. 
At that point FreePageDump()\nsays:\n\n btree depth 1:\n 77@0 l: 27(1) 78(178)\n freelists:\n 1: 27\n 129: 78(178)\n\nBut at first glance it shouldn't be allocating pages, because it just\ndoes consolidation to try to convert to singleton format, and then it\ndoes recycle list cleanup using soft=true so that no allocation of\nbtree pages should occur.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Sun, 10 Feb 2019 08:06:30 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 1:55 AM Thomas Munro\n<[email protected]> wrote:\n> Bleugh. Yeah. What I said before wasn't quite right. The value\n> returned by FreePageManagerPutInternal() is actually correct at the\n> moment it is returned, but it ceases to be correct immediately\n> afterwards if the following call to FreePageBtreeCleanup() happens to\n> reduce the size of that particular span. The problem is that we\n> clobber fpm->contiguous_pages with the earlier (and by now incorrect)\n> value that we were holding in a local variable.\n\nYeah, I had similar bugs to that during the initial development work I\ndid on freepage.c, and that's why I got rid of some lazy recomputation\nthing that I had tried at some point. The version that got committed\nbrought that back again, but possibly it's got the same kind of\nproblem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n", "msg_date": "Sun, 10 Feb 2019 11:56:14 +0530", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 2:37 AM Thomas Munro\n<[email protected]> wrote:\n> ... but why would it do that? I can reproduce cases where (for\n> example) FreePageManagerPutInternal() returns 179, and then\n> FreePageManagerLargestContiguous() returns 179, but then after\n> FreePageBtreeCleanup() it returns 178. At that point FreePageDump()\n> says:\n>\n> btree depth 1:\n> 77@0 l: 27(1) 78(178)\n> freelists:\n> 1: 27\n> 129: 78(178)\n>\n> But at first glance it shouldn't be allocating pages, because it just\n> does consolidation to try to convert to singleton format, and then it\n> does recycle list cleanup using soft=true so that no allocation of\n> btree pages should occur.\n\nI think I see what's happening. At the moment the problem occurs,\nthere is no btree - there is only a singleton range. So\nFreePageManagerInternal() takes the fpm->btree_depth == 0 branch and\nthen ends up in the section with the comment /* Not contiguous; we\nneed to initialize the btree. */. And that section, sadly, does not\nrespect the 'soft' flag, so kaboom. Something like the attached might\nfix it.\n\nBoy, I love FreePageManagerDump!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 10 Feb 2019 12:10:52 +0530", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 12:10:52PM +0530, Robert Haas wrote:\n> I think I see what's happening. At the moment the problem occurs,\n> there is no btree - there is only a singleton range. So\n> FreePageManagerInternal() takes the fpm->btree_depth == 0 branch and\n> then ends up in the section with the comment /* Not contiguous; we\n> need to initialize the btree. */. 
And that section, sadly, does not\n> respect the 'soft' flag, so kaboom. Something like the attached might\n> fix it.\n\nI ran overnight with this patch, but all parallel processes ended up stuck in\nthe style of bug#15585. So that's either not the root cause, or there's a 2nd\nissue.\n\nhttps://www.postgresql.org/message-id/flat/15585-324ff6a93a18da46%40postgresql.org\n\nJustin\n\n", "msg_date": "Sun, 10 Feb 2019 10:00:35 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Hi\n\n> I ran overnight with this patch, but all parallel processes ended up stuck in\n> the style of bug#15585. So that's either not the root cause, or there's a 2nd\n> issue.\n\nMaybe i missed something in this discussion, but you can reproduce bug#15585? How? With this testcase: https://www.postgresql.org/message-id/CAEepm%3D1MvOE-Sfv1chudx5KEmw_qHYrj8F9Og_WmdBRhXSQ%2B%2Bw%40mail.gmail.com ?\n\nregards, Sergei\n\n", "msg_date": "Sun, 10 Feb 2019 19:11:22 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 07:11:22PM +0300, Sergei Kornilov wrote:\n> > I ran overnight with this patch, but all parallel processes ended up stuck in\n> > the style of bug#15585. So that's either not the root cause, or there's a 2nd\n> > issue.\n> \n> Maybe i missed something in this discussion, but you can reproduce bug#15585? How? With this testcase: https://www.postgresql.org/message-id/CAEepm%3D1MvOE-Sfv1chudx5KEmw_qHYrj8F9Og_WmdBRhXSQ%2B%2Bw%40mail.gmail.com ?\n\nBy running the queued_alters query multiple times in a loop:\nhttps://www.postgresql.org/message-id/20181231221734.GB25379%40telsasoft.com\n\nI'm able to trigger dsa \"ERROR\"s with that query on a newly initdb cluster with\nonly that table. But I think some servers are more likely to hit it than\nothers.\n\nI've only tripped on 15585 twice, and only while trying to trigger other DSA\nbugs (the working hypothesis is that bug is 2ndary issue which happens after\nhitting some other bug). And not consistently or quickly.\n\nJustin\n\n", "msg_date": "Sun, 10 Feb 2019 10:50:07 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Sun, Feb 10, 2019 at 5:41 PM Robert Haas <[email protected]> wrote:\n> On Sun, Feb 10, 2019 at 2:37 AM Thomas Munro\n> <[email protected]> wrote:\n> > But at first glance it shouldn't be allocating pages, because it just\n> > does consolidation to try to convert to singleton format, and then it\n> > does recycle list cleanup using soft=true so that no allocation of\n> > btree pages should occur.\n>\n> I think I see what's happening. At the moment the problem occurs,\n> there is no btree - there is only a singleton range. So\n> FreePageManagerInternal() takes the fpm->btree_depth == 0 branch and\n> then ends up in the section with the comment /* Not contiguous; we\n> need to initialize the btree. */. And that section, sadly, does not\n> respect the 'soft' flag, so kaboom. Something like the attached might\n> fix it.\n\nOuch. Yeah, that'd do it and matches the evidence. With this change,\nI couldn't reproduce the problem after 90 minutes with a test case\nthat otherwise hits it within a couple of minutes.\n\nHere's a patch with a commit message explaining the change.\n\nIt also removes an obsolete comment, which is in fact related. 
The\ncomment refers to an output parameter internal_pages_used, which must\nhave been used to report this exact phenomenon in an earlier\ndevelopment version. But there is no such parameter in the committed\nversion, and instead there is the soft flag to prevent internal\nallocation. I have no view on which approach is best, but yeah, if\nwe're using a soft flag, it has to work reliably.\n\nThis brings us to a difficult choice: we're about to cut a new\nrelease, and this could in theory be included. Even though the fix is\nquite convincing, it doesn't seem wise to change such complicated code\nat the last minute, and I know from an off-list chat that that is also\nRobert's view. So I'll wait until after the release, and we'll have\nto live with the bug for another 3 months.\n\nNote that this patch addresses the error \"dsa_allocate could not find\n%zu free pages\". (The error \"dsa_area could not attach to segment\" is\nsomething else and apparently rarer.)\n\n> Boy, I love FreePageManagerDump!\n\nYeah. And I love reproducible bugs.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com", "msg_date": "Mon, 11 Feb 2019 09:45:07 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> This brings us to a difficult choice: we're about to cut a new\n> release, and this could in theory be included. Even though the fix is\n> quite convincing, it doesn't seem wise to change such complicated code\n> at the last minute, and I know from an off-list chat that that is also\n> Robert's view.\n\nYeah ... at this point we're just too close to the release deadline,\nI'm afraid, even though the fix *looks* pretty safe. Not worth the risk\ngiven that this seems to be a low-probability bug.\n\nI observe from\n\nhttps://coverage.postgresql.org/src/backend/utils/mmgr/freepage.c.gcov.html\n\nthat the edge cases in this function aren't too well exercised by\nour regression tests, meaning that the buildfarm might not prove\nmuch either way about the correctness of this patch. That is one\nfactor pushing me to think we shouldn't risk it. But, taking a\nlonger view, is that something that's practical to improve?\n\n> So I'll wait until after the release, and we'll have\n> to live with the bug for another 3 months.\n\nCheck. Please hold off committing until you see the release tags\nappear, probably late Tuesday my time / Wednesday noonish yours.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 10 Feb 2019 18:33:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 11, 2019 at 09:45:07AM +1100, Thomas Munro wrote:\n> Ouch. Yeah, that'd do it and matches the evidence. With this change,\n> I couldn't reproduce the problem after 90 minutes with a test case\n> that otherwise hits it within a couple of minutes.\n...\n> Note that this patch addresses the error \"dsa_allocate could not find\n> %zu free pages\". (The error \"dsa_area could not attach to segment\" is\n> something else and apparently rarer.)\n\n\"could not attach\" is the error reported early this morning while\nstress-testing this patch with queued_alters queries in loops, so that's\nconsistent with your understanding. 
And I guess it preceded getting stuck on\nlock; although I don't how long between the first happened and the second, I'm\nguess not long and perhaps immedidately; since the rest of the processes were\nall stuck as in bug#15585 rather than ERRORing once every few minutes.\n\nI mentioned that \"could not attach to segment\" occurs in leader either/or\nparallel worker. And most of the time causes an ERROR only, and doesn't wedge\nall future parallel workers. Maybe bug#15585 \"wedged\" state maybe only occurs\nafter some pattern of leader+worker failures (?) I've just triggered bug#15585\nagain, but if there's a pattern, I don't see it.\n\nPlease let me know whether you're able to reproduce the \"not attach\" bug using\nsimultaneous loops around the queued_alters query; it's easy here.\n\nJustin\n\n", "msg_date": "Sun, 10 Feb 2019 18:02:15 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 11, 2019 at 11:02 AM Justin Pryzby <[email protected]> wrote:\n> On Mon, Feb 11, 2019 at 09:45:07AM +1100, Thomas Munro wrote:\n> > Ouch. Yeah, that'd do it and matches the evidence. With this change,\n> > I couldn't reproduce the problem after 90 minutes with a test case\n> > that otherwise hits it within a couple of minutes.\n> ...\n> > Note that this patch addresses the error \"dsa_allocate could not find\n> > %zu free pages\". (The error \"dsa_area could not attach to segment\" is\n> > something else and apparently rarer.)\n>\n> \"could not attach\" is the error reported early this morning while\n> stress-testing this patch with queued_alters queries in loops, so that's\n> consistent with your understanding. And I guess it preceded getting stuck on\n> lock; although I don't how long between the first happened and the second, I'm\n> guess not long and perhaps immedidately; since the rest of the processes were\n> all stuck as in bug#15585 rather than ERRORing once every few minutes.\n>\n> I mentioned that \"could not attach to segment\" occurs in leader either/or\n> parallel worker. And most of the time causes an ERROR only, and doesn't wedge\n> all future parallel workers. Maybe bug#15585 \"wedged\" state maybe only occurs\n> after some pattern of leader+worker failures (?) I've just triggered bug#15585\n> again, but if there's a pattern, I don't see it.\n>\n> Please let me know whether you're able to reproduce the \"not attach\" bug using\n> simultaneous loops around the queued_alters query; it's easy here.\n\nI haven't ever managed to reproduce that one yet. It's great you have\na reliable repro... Let's discuss it on the #15585 thread.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Mon, 11 Feb 2019 11:11:32 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 11, 2019 at 10:33 AM Tom Lane <[email protected]> wrote:\n> I observe from\n>\n> https://coverage.postgresql.org/src/backend/utils/mmgr/freepage.c.gcov.html\n>\n> that the edge cases in this function aren't too well exercised by\n> our regression tests, meaning that the buildfarm might not prove\n> much either way about the correctness of this patch. That is one\n> factor pushing me to think we shouldn't risk it. But, taking a\n> longer view, is that something that's practical to improve?\n\nYeah. This is a nice example of code that really deserves unit tests\nwritten in C. 
Could be good motivation to built the infrastructure I\nmentioned here:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D2heu%2B5zwB65jWap3XY-UP6PpJZiKLQRSV2UQH9BmVRXQ%40mail.gmail.com\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Mon, 11 Feb 2019 11:24:38 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Mon, Feb 11, 2019 at 10:33 AM Tom Lane <[email protected]> wrote:\n>> I observe from\n>> https://coverage.postgresql.org/src/backend/utils/mmgr/freepage.c.gcov.html\n>> that the edge cases in this function aren't too well exercised by\n>> our regression tests, meaning that the buildfarm might not prove\n>> much either way about the correctness of this patch. That is one\n>> factor pushing me to think we shouldn't risk it. But, taking a\n>> longer view, is that something that's practical to improve?\n\n> Yeah. This is a nice example of code that really deserves unit tests\n> written in C. Could be good motivation to built the infrastructure I\n> mentioned here:\n> https://www.postgresql.org/message-id/flat/CAEepm%3D2heu%2B5zwB65jWap3XY-UP6PpJZiKLQRSV2UQH9BmVRXQ%40mail.gmail.com\n\nMeh. I think if you hold out for that, you're going to be waiting a\nlong time. I was thinking more along the lines of making a test API\nin src/test/modules/, akin to what we've got for predtest or rbtree.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 10 Feb 2019 20:22:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" }, { "msg_contents": "On Mon, Feb 11, 2019 at 10:33 AM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > So I'll wait until after the release, and we'll have\n> > to live with the bug for another 3 months.\n>\n> Check. Please hold off committing until you see the release tags\n> appear, probably late Tuesday my time / Wednesday noonish yours.\n\nPushed.\n\n-- \nThomas Munro\nhttp://www.enterprisedb.com\n\n", "msg_date": "Wed, 13 Feb 2019 11:52:45 +1100", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dsa_allocate() faliure" } ]
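The failure mechanism described in this thread — a "largest contiguous run" figure that was correct when FreePageManagerPutInternal() returned it, but is written back into the cache after FreePageBtreeCleanup() has already shrunk the run it described — can be shown with a toy model. The sketch below is a deliberately simplified, hypothetical allocator written for illustration only; it is not the PostgreSQL freepage.c code and none of its names exist there. It only demonstrates why handing out pages on the strength of a stale cached maximum ends in a "could not find N free pages" style failure.

/* stale_max.c — toy model of the stale cached-maximum hazard (hypothetical,
 * not freepage.c).  Compile with: cc stale_max.c && ./a.out */
#include <stdio.h>

#define NRUNS 4

static int run_len[NRUNS] = {27, 0, 0, 0};  /* lengths of free page runs */
static int cached_max = 27;                 /* lazily maintained maximum */

/* Record a freed run; the returned length is correct at this moment. */
static int put_run(int slot, int len)
{
    run_len[slot] = len;
    return len;
}

/* Housekeeping that may steal a page from a run for its own bookkeeping,
 * shrinking it after the fact (the analogue of the cleanup step that turned
 * the 179-page span in the report into a 178-page span). */
static void cleanup(int slot)
{
    if (run_len[slot] > 0)
        run_len[slot]--;
}

/* Hand out n contiguous pages, trusting the cached maximum. */
static int get_run(int n)
{
    int i;

    if (n > cached_max)
        return -1;              /* cache says it cannot be satisfied */
    for (i = 0; i < NRUNS; i++)
    {
        if (run_len[i] >= n)
        {
            run_len[i] -= n;    /* (cache maintenance omitted in this toy) */
            return i;
        }
    }
    printf("could not find %d free pages (cached_max=%d is stale)\n",
           n, cached_max);
    return -1;
}

int main(void)
{
    int largest = put_run(1, 179);  /* correct when returned: a 179-page run */
    cleanup(1);                     /* ...but the run is now only 178 pages  */
    cached_max = largest;           /* clobbered with the stale local value  */
    get_run(179);                   /* fails: the advertised run no longer exists */
    return 0;
}

The point of the toy is only the ordering: the cached value must not be overwritten with a figure computed before a step that can shrink spans, which is why the fix discussed above makes the btree-initialisation path honour the 'soft' flag so that cleanup cannot allocate pages behind the caller's back.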
[ { "msg_contents": "Can somebody help me avoid nested loops in below query:\n--\nap_poc_db=# explain (analyze,buffers)\nap_poc_db-# select site_id, account_id FROM ap.site_exposure se\nap_poc_db-# WHERE se.portfolio_id=-1191836\nap_poc_db-# AND EXISTS (select 1 from ap.catevent_flood_sc_split sp where sp.migration_sourcename= 'KatRisk_SC_Flood_2015_v9' AND ST_Intersects(se.shape, sp.shape))\nap_poc_db-# group by site_id, account_id;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGroup (cost=23479854.04..23479880.06 rows=206 width=16) (actual time=1387.825..1389.134 rows=1532 loops=1)\n Group Key: se.site_id, se.account_id\n Buffers: shared hit=172041\n -> Gather Merge (cost=23479854.04..23479879.04 rows=205 width=16) (actual time=1387.823..1388.676 rows=1532 loops=1)\n Workers Planned: 5\n Workers Launched: 5\n Buffers: shared hit=172041\n -> Group (cost=23478853.96..23478854.27 rows=41 width=16) (actual time=1346.044..1346.176 rows=255 loops=6)\n Group Key: se.site_id, se.account_id\n Buffers: shared hit=864280\n -> Sort (cost=23478853.96..23478854.07 rows=41 width=16) (actual time=1346.041..1346.079 rows=255 loops=6)\n Sort Key: se.site_id, se.account_id\n Sort Method: quicksort Memory: 37kB\n Buffers: shared hit=864280\n -> Nested Loop Semi Join (cost=4.53..23478852.87 rows=41 width=16) (actual time=34.772..1345.489 rows=255 loops=6)\n Buffers: shared hit=864235\n -> Append (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.011..204.748 rows=102990 loops=6)\n Buffers: shared hit=154879\n -> Parallel Seq Scan on site_exposure_1191836 se (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.004..187.702 rows=102990 loops=6)\n Filter: (portfolio_id = '-1191836'::integer)\n Buffers: shared hit=154879\n -> Bitmap Heap Scan on catevent_flood_sc_split sp (cost=4.53..188.54 rows=15 width=492) (actual time=0.007..0.007 rows=0 loops=617937)\n Recheck Cond: (se.shape && shape)\n Filter: ((migration_sourcename = 'KatRisk_SC_Flood_2015_v9'::bpchar) AND _st_intersects(se.shape, shape))\n Rows Removed by Filter: 0\n Heap Blocks: exact=1060\n Buffers: shared hit=709356\n -> Bitmap Index Scan on catevent_flood_sc_split_shape_mig_src_gix (cost=0.00..4.52 rows=45 width=0) (actual time=0.005..0.005 rows=0 loops=617937)\n Index Cond: (se.shape && shape)\n Buffers: shared hit=691115\nPlanning time: 116.141 ms\nExecution time: 1391.785 ms\n(32 rows)\n\n\nap_poc_db=#\n\nThank you in advance!\n\n\nRegards,\nVirendra\n\n\n________________________________\n\nThis message is intended only for the use of the addressee and may contain\ninformation that is PRIVILEGED AND CONFIDENTIAL.\n\nIf you are not the intended recipient, you are hereby notified that any\ndissemination of this communication is strictly prohibited. If you have\nreceived this communication in error, please erase all copies of the message\nand its attachments and notify the sender immediately. 
Thank you.\n\n\n\n\n\n\n\n\n\nCan somebody help me avoid nested loops in below query:\n--\nap_poc_db=# explain (analyze,buffers)\nap_poc_db-# select site_id, account_id FROM ap.site_exposure se\nap_poc_db-#         WHERE se.portfolio_id=-1191836\nap_poc_db-#             AND EXISTS (select 1 from ap.catevent_flood_sc_split sp where sp.migration_sourcename= 'KatRisk_SC_Flood_2015_v9' AND ST_Intersects(se.shape, sp.shape))\nap_poc_db-#             group by site_id, account_id;\n                                                                                      QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGroup  (cost=23479854.04..23479880.06 rows=206 width=16) (actual time=1387.825..1389.134 rows=1532 loops=1)\n   Group Key: se.site_id, se.account_id\n   Buffers: shared hit=172041\n   ->  Gather Merge  (cost=23479854.04..23479879.04 rows=205 width=16) (actual time=1387.823..1388.676 rows=1532 loops=1)\n         Workers Planned: 5\n         Workers Launched: 5\n         Buffers: shared hit=172041\n         ->  Group  (cost=23478853.96..23478854.27 rows=41 width=16) (actual time=1346.044..1346.176 rows=255 loops=6)\n               Group Key: se.site_id, se.account_id\n               Buffers: shared hit=864280\n               ->  Sort  (cost=23478853.96..23478854.07 rows=41 width=16) (actual time=1346.041..1346.079 rows=255 loops=6)\n                     Sort Key: se.site_id, se.account_id\n                     Sort Method: quicksort  Memory: 37kB\n                     Buffers: shared hit=864280\n                     ->  Nested Loop Semi Join  (cost=4.53..23478852.87 rows=41 width=16) (actual time=34.772..1345.489 rows=255 loops=6)\n                           Buffers: shared hit=864235\n                           ->  Append  (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.011..204.748 rows=102990 loops=6)\n                                 Buffers: shared hit=154879\n                                 ->  Parallel Seq Scan on site_exposure_1191836 se  (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.004..187.702 rows=102990 loops=6)\n                                       Filter: (portfolio_id = '-1191836'::integer)\n                                       Buffers: shared hit=154879\n                           ->  Bitmap Heap Scan on catevent_flood_sc_split sp  (cost=4.53..188.54 rows=15 width=492) (actual time=0.007..0.007 rows=0 loops=617937)\n                                 Recheck Cond: (se.shape && shape)\n                                 Filter: ((migration_sourcename = 'KatRisk_SC_Flood_2015_v9'::bpchar) AND _st_intersects(se.shape, shape))\n                                 Rows Removed by Filter: 0\n                                 Heap Blocks: exact=1060\n                                 Buffers: shared hit=709356\n                                 ->  Bitmap Index Scan on catevent_flood_sc_split_shape_mig_src_gix  (cost=0.00..4.52 rows=45 width=0) (actual time=0.005..0.005 rows=0 loops=617937)\n                                       Index Cond: (se.shape && shape)\n                                       Buffers: shared hit=691115\nPlanning time: 116.141 ms\nExecution time: 1391.785 ms\n(32 rows)\n \n \nap_poc_db=#\n \nThank you in advance!\n \n \nRegards,\nVirendra\n \n\n\n\n\nThis message is intended only for the use of the addressee and may contain\ninformation that is PRIVILEGED AND CONFIDENTIAL.\n\nIf you are not 
the intended recipient, you are hereby notified that any\ndissemination of this communication is strictly prohibited. If you have\nreceived this communication in error, please erase all copies of the message\nand its attachments and notify the sender immediately. Thank you.", "msg_date": "Wed, 31 Jan 2018 06:37:07 +0000", "msg_from": "\"Kumar, Virendra\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loops" }, { "msg_contents": "Kumar, Virendra wrote:\n> Can somebody help me avoid nested loops in below query:\n> --\n> ap_poc_db=# explain (analyze,buffers)\n> ap_poc_db-# select site_id, account_id FROM ap.site_exposure se\n> ap_poc_db-# WHERE se.portfolio_id=-1191836\n> ap_poc_db-# AND EXISTS (select 1 from ap.catevent_flood_sc_split sp where sp.migration_sourcename= 'KatRisk_SC_Flood_2015_v9' AND ST_Intersects(se.shape, sp.shape))\n> ap_poc_db-# group by site_id, account_id;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n[...]\n> Buffers: shared hit=172041\n> -> Gather Merge (cost=23479854.04..23479879.04 rows=205 width=16) (actual time=1387.823..1388.676 rows=1532 loops=1)\n> Workers Planned: 5\n> Workers Launched: 5\n> Buffers: shared hit=172041\n[...]\n> -> Nested Loop Semi Join (cost=4.53..23478852.87 rows=41 width=16) (actual time=34.772..1345.489 rows=255 loops=6)\n> Buffers: shared hit=864235\n> -> Append (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.011..204.748 rows=102990 loops=6)\n> Buffers: shared hit=154879\n> -> Parallel Seq Scan on site_exposure_1191836 se (cost=0.00..156424.56 rows=123645 width=48) (actual time=1.004..187.702 rows=102990 loops=6)\n> Filter: (portfolio_id = '-1191836'::integer)\n> Buffers: shared hit=154879\n> -> Bitmap Heap Scan on catevent_flood_sc_split sp (cost=4.53..188.54 rows=15 width=492) (actual time=0.007..0.007 rows=0 loops=617937)\n> Recheck Cond: (se.shape && shape)\n> Filter: ((migration_sourcename = 'KatRisk_SC_Flood_2015_v9'::bpchar) AND _st_intersects(se.shape, shape))\n> Rows Removed by Filter: 0\n> Heap Blocks: exact=1060\n> Buffers: shared hit=709356\n> -> Bitmap Index Scan on catevent_flood_sc_split_shape_mig_src_gix (cost=0.00..4.52 rows=45 width=0) (actual time=0.005..0.005 rows=0 loops=617937)\n> Index Cond: (se.shape && shape)\n> Buffers: shared hit=691115\n> Planning time: 116.141 ms\n> Execution time: 1391.785 ms\n\nWith a join condition like that (using on a function result),\nonly a nested loop join is possible.\n\nI don't know how selective sp.migration_sourcename= 'KatRisk_SC_Flood_2015_v9'\nis; perhaps an index on the column can help a little.\n\nBut you won't get around the 617937 loops, which is the cause of the\nlong query duration. I don't think there is a lot of potential for optimization.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 31 Jan 2018 09:42:59 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loops" } ]
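Laurenz's reply leaves one concrete lever: the selectivity of migration_sourcename, since the nested loop itself cannot be avoided when the join condition is a function call like ST_Intersects(). The statements below are only a sketch of what "an index on the column" could look like, using the table and column names from the query above; whether either one changes anything depends on the data distribution and on what the existing catevent_flood_sc_split_shape_mig_src_gix index already covers, so treat this as an illustration of the suggestion rather than a recommendation.

-- Plain btree index on the filter column, as the suggestion reads literally.
CREATE INDEX ON ap.catevent_flood_sc_split (migration_sourcename);

-- Alternative sketch: a partial spatial index restricted to the one source
-- name, so each && probe only sees rows that can pass the filter.
-- (Assumes a PostGIS geometry column and that this literal is queried often.)
CREATE INDEX catevent_flood_sc_split_katrisk_shape_gix
    ON ap.catevent_flood_sc_split USING gist (shape)
    WHERE migration_sourcename = 'KatRisk_SC_Flood_2015_v9';

Either way the 617937 outer rows still drive one index probe each; an index can only make each probe cheaper, not remove the loop.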
[ { "msg_contents": "Hi,\n\nI've tried to run a benchmark, similar to this one:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\nCREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n\npgbench -i -s 1000 --tablespace=test pgbench\n\necho \"\" >test.txt\nfor i in 0 1 2 4 8 16 32 64 128 256 ; do\n   sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n   echo \"effective_io_concurrency=$i\" >>test.txt\n   psql pgbench -c \"set effective_io_concurrency=$i; set \nenable_indexscan=off; explain (analyze, buffers)  select * from \npgbench_accounts where aid between 1000 and 10000000 and abalance != 0;\" \n >>test.txt\ndone\n\nI get the following results:\n\neffective_io_concurrency=0\n  Execution time: 40262.781 ms\neffective_io_concurrency=1\n  Execution time: 98125.987 ms\neffective_io_concurrency=2\n  Execution time: 55343.776 ms\neffective_io_concurrency=4\n  Execution time: 52505.638 ms\neffective_io_concurrency=8\n  Execution time: 54954.024 ms\neffective_io_concurrency=16\n  Execution time: 54346.455 ms\neffective_io_concurrency=32\n  Execution time: 55196.626 ms\neffective_io_concurrency=64\n  Execution time: 55057.956 ms\neffective_io_concurrency=128\n  Execution time: 54963.510 ms\neffective_io_concurrency=256\n  Execution time: 54339.258 ms\n\nThe test was using 100 GB gp2 SSD EBS. More detailed query plans are \nattached.\n\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu \n5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nThe results look really confusing to me in two ways. The first one is \nthat I've seen recommendations to set effective_io_concurrency=256 (or \nmore) on EBS. The other one is that effective_io_concurrency=1 (the \nworst case) is actually the default for PostgreSQL on Linux.\n\nThoughts?\n\nRegards,\nVitaliy", "msg_date": "Wed, 31 Jan 2018 14:03:17 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "effective_io_concurrency on EBS/gp2" }, { "msg_contents": "We moved our stuff out of AWS a little over a year ago because the\nperformance was crazy inconsistent and unpredictable. I think they do a\nlot of oversubscribing so you get strange sawtooth performance patterns\ndepending on who else is sharing your infrastructure and what they are\ndoing at the time.\n\nThe same unit of work would take 20 minutes each for several hours, and\nthen take 2 1/2 hours each for a day, and then back to 20 minutes, and\nsometimes anywhere in between for hours or days at a stretch. I could\nnever tell the business when the processing would be done, which made it\nhard for them to set expectations with customers, promise deliverables, or\nmanage the business. Smaller nodes seemed to be worse than larger nodes, I\nonly have theories as to why. I never got good support from AWS to help me\nfigure out what was happening.\n\nMy first thought is to run the same test on different days of the week and\ndifferent times of day to see if the numbers change radically. Maybe spin\nup a node in another data center and availability zone and try the test\nthere too.\n\nMy real suggestion is to move to Google Cloud or Rackspace or Digital Ocean\nor somewhere other than AWS. (We moved to Google Cloud and have been very\nhappy there. 
The performance is much more consistent, the management UI is\nmore intuitive, AND the cost for equivalent infrastructure is lower too.)\n\n\nOn Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n> Hi,\n>\n> I've tried to run a benchmark, similar to this one:\n>\n> https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9\n> cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.\n> gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTd\n> [email protected]\n>\n> CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n>\n> pgbench -i -s 1000 --tablespace=test pgbench\n>\n> echo \"\" >test.txt\n> for i in 0 1 2 4 8 16 32 64 128 256 ; do\n> sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n> echo \"effective_io_concurrency=$i\" >>test.txt\n> psql pgbench -c \"set effective_io_concurrency=$i; set\n> enable_indexscan=off; explain (analyze, buffers) select * from\n> pgbench_accounts where aid between 1000 and 10000000 and abalance != 0;\"\n> >>test.txt\n> done\n>\n> I get the following results:\n>\n> effective_io_concurrency=0\n> Execution time: 40262.781 ms\n> effective_io_concurrency=1\n> Execution time: 98125.987 ms\n> effective_io_concurrency=2\n> Execution time: 55343.776 ms\n> effective_io_concurrency=4\n> Execution time: 52505.638 ms\n> effective_io_concurrency=8\n> Execution time: 54954.024 ms\n> effective_io_concurrency=16\n> Execution time: 54346.455 ms\n> effective_io_concurrency=32\n> Execution time: 55196.626 ms\n> effective_io_concurrency=64\n> Execution time: 55057.956 ms\n> effective_io_concurrency=128\n> Execution time: 54963.510 ms\n> effective_io_concurrency=256\n> Execution time: 54339.258 ms\n>\n> The test was using 100 GB gp2 SSD EBS. More detailed query plans are\n> attached.\n>\n> PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n>\n> The results look really confusing to me in two ways. The first one is that\n> I've seen recommendations to set effective_io_concurrency=256 (or more) on\n> EBS. The other one is that effective_io_concurrency=1 (the worst case) is\n> actually the default for PostgreSQL on Linux.\n>\n> Thoughts?\n>\n> Regards,\n> Vitaliy\n>\n>\n\nWe moved our stuff out of AWS a little over a year ago because the performance was crazy inconsistent and unpredictable.  I think they do a lot of oversubscribing so you get strange sawtooth performance patterns depending on who else is sharing your infrastructure and what they are doing at the time.The same unit of work would take 20 minutes each for several hours, and then take 2 1/2 hours each for a day, and then back to 20 minutes, and sometimes anywhere in between for hours or days at a stretch.  I could never tell the business when the processing would be done, which made it hard for them to set expectations with customers, promise deliverables, or manage the business.  Smaller nodes seemed to be worse than larger nodes, I only have theories as to why.  I never got good support from AWS to help me figure out what was happening.My first thought is to run the same test on different days of the week and different times of day to see if the numbers change radically.  Maybe spin up a node in another data center and availability zone and try the test there too.My real suggestion is to move to Google Cloud or Rackspace or Digital Ocean or somewhere other than AWS.   (We moved to Google Cloud and have been very happy there.  
The performance is much more consistent, the management UI is more intuitive, AND the cost for equivalent infrastructure is lower too.)On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich <[email protected]> wrote:Hi,\n\nI've tried to run a benchmark, similar to this one:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\nCREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n\npgbench -i -s 1000 --tablespace=test pgbench\n\necho \"\" >test.txt\nfor i in 0 1 2 4 8 16 32 64 128 256 ; do\n  sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n  echo \"effective_io_concurrency=$i\" >>test.txt\n  psql pgbench -c \"set effective_io_concurrency=$i; set enable_indexscan=off; explain (analyze, buffers)  select * from pgbench_accounts where aid between 1000 and 10000000 and abalance != 0;\" >>test.txt\ndone\n\nI get the following results:\n\neffective_io_concurrency=0\n Execution time: 40262.781 ms\neffective_io_concurrency=1\n Execution time: 98125.987 ms\neffective_io_concurrency=2\n Execution time: 55343.776 ms\neffective_io_concurrency=4\n Execution time: 52505.638 ms\neffective_io_concurrency=8\n Execution time: 54954.024 ms\neffective_io_concurrency=16\n Execution time: 54346.455 ms\neffective_io_concurrency=32\n Execution time: 55196.626 ms\neffective_io_concurrency=64\n Execution time: 55057.956 ms\neffective_io_concurrency=128\n Execution time: 54963.510 ms\neffective_io_concurrency=256\n Execution time: 54339.258 ms\n\nThe test was using 100 GB gp2 SSD EBS. More detailed query plans are attached.\n\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nThe results look really confusing to me in two ways. The first one is that I've seen recommendations to set effective_io_concurrency=256 (or more) on EBS. The other one is that effective_io_concurrency=1 (the worst case) is actually the default for PostgreSQL on Linux.\n\nThoughts?\n\nRegards,\nVitaliy", "msg_date": "Wed, 31 Jan 2018 08:01:27 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "I've tried to re-run the test for some specific values of \neffective_io_concurrency. The results were the same.\n\nThat's why I don't think the order of tests or variability in \"hardware\" \nperformance affected the results.\n\nRegards,\nVitaliy\n\nOn 31/01/2018 15:01, Rick Otten wrote:\n> We moved our stuff out of AWS a little over a year ago because the \n> performance was crazy inconsistent and unpredictable.  I think they do \n> a lot of oversubscribing so you get strange sawtooth performance \n> patterns depending on who else is sharing your infrastructure and what \n> they are doing at the time.\n>\n> The same unit of work would take 20 minutes each for several hours, \n> and then take 2 1/2 hours each for a day, and then back to 20 minutes, \n> and sometimes anywhere in between for hours or days at a stretch.  I \n> could never tell the business when the processing would be done, which \n> made it hard for them to set expectations with customers, promise \n> deliverables, or manage the business.  Smaller nodes seemed to be \n> worse than larger nodes, I only have theories as to why.  
I never got \n> good support from AWS to help me figure out what was happening.\n>\n> My first thought is to run the same test on different days of the week \n> and different times of day to see if the numbers change radically.  \n> Maybe spin up a node in another data center and availability zone and \n> try the test there too.\n>\n> My real suggestion is to move to Google Cloud or Rackspace or Digital \n> Ocean or somewhere other than AWS.   (We moved to Google Cloud and \n> have been very happy there.  The performance is much more consistent, \n> the management UI is more intuitive, AND the cost for equivalent \n> infrastructure is lower too.)\n>\n>\n> On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> I've tried to run a benchmark, similar to this one:\n>\n> https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n> <https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com>\n>\n> CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n>\n> pgbench -i -s 1000 --tablespace=test pgbench\n>\n> echo \"\" >test.txt\n> for i in 0 1 2 4 8 16 32 64 128 256 ; do\n>   sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n>   echo \"effective_io_concurrency=$i\" >>test.txt\n>   psql pgbench -c \"set effective_io_concurrency=$i; set\n> enable_indexscan=off; explain (analyze, buffers)  select * from\n> pgbench_accounts where aid between 1000 and 10000000 and abalance\n> != 0;\" >>test.txt\n> done\n>\n> I get the following results:\n>\n> effective_io_concurrency=0\n>  Execution time: 40262.781 ms\n> effective_io_concurrency=1\n>  Execution time: 98125.987 ms\n> effective_io_concurrency=2\n>  Execution time: 55343.776 ms\n> effective_io_concurrency=4\n>  Execution time: 52505.638 ms\n> effective_io_concurrency=8\n>  Execution time: 54954.024 ms\n> effective_io_concurrency=16\n>  Execution time: 54346.455 ms\n> effective_io_concurrency=32\n>  Execution time: 55196.626 ms\n> effective_io_concurrency=64\n>  Execution time: 55057.956 ms\n> effective_io_concurrency=128\n>  Execution time: 54963.510 ms\n> effective_io_concurrency=256\n>  Execution time: 54339.258 ms\n>\n> The test was using 100 GB gp2 SSD EBS. More detailed query plans\n> are attached.\n>\n> PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n>\n> The results look really confusing to me in two ways. The first one\n> is that I've seen recommendations to set\n> effective_io_concurrency=256 (or more) on EBS. The other one is\n> that effective_io_concurrency=1 (the worst case) is actually the\n> default for PostgreSQL on Linux.\n>\n> Thoughts?\n>\n> Regards,\n> Vitaliy\n>\n>\n\n\n\n\n\n\n\nI've tried to re-run the test for some\n specific values of effective_io_concurrency. The results were the\n same. \n\n That's why I don't think the order of tests or variability in\n \"hardware\" performance affected the results.\n\n Regards,\n Vitaliy\n\n On 31/01/2018 15:01, Rick Otten wrote:\n\n\nWe moved our stuff out of AWS a little over a year\n ago because the performance was crazy inconsistent and\n unpredictable.  
I think they do a lot of oversubscribing so you\n get strange sawtooth performance patterns depending on who else\n is sharing your infrastructure and what they are doing at the\n time.\n \n\nThe same unit of work would take 20 minutes each for\n several hours, and then take 2 1/2 hours each for a day, and\n then back to 20 minutes, and sometimes anywhere in between for\n hours or days at a stretch.  I could never tell the business\n when the processing would be done, which made it hard for them\n to set expectations with customers, promise deliverables, or\n manage the business.  Smaller nodes seemed to be worse than\n larger nodes, I only have theories as to why.  I never got\n good support from AWS to help me figure out what was\n happening.\n\n\nMy first thought is to run the same test on different\n days of the week and different times of day to see if the\n numbers change radically.  Maybe spin up a node in another\n data center and availability zone and try the test there\n too.\n\n\n\nMy real suggestion is to move to Google Cloud or Rackspace\n or Digital Ocean or somewhere other than AWS.   (We moved to\n Google Cloud and have been very happy there.  The performance\n is much more consistent, the management UI is more intuitive,\n AND the cost for equivalent infrastructure is lower too.)\n\n\n\n\nOn Wed, Jan 31, 2018 at 7:03 AM,\n Vitaliy Garnashevich <[email protected]>\n wrote:\nHi,\n\n I've tried to run a benchmark, similar to this one:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\n CREATE TABLESPACE test OWNER postgres LOCATION\n '/path/to/ebs';\n\n pgbench -i -s 1000 --tablespace=test pgbench\n\n echo \"\" >test.txt\n for i in 0 1 2 4 8 16 32 64 128 256 ; do\n   sync; echo 3 > /proc/sys/vm/drop_caches; service\n postgresql restart\n   echo \"effective_io_concurrency=$i\" >>test.txt\n   psql pgbench -c \"set effective_io_concurrency=$i; set\n enable_indexscan=off; explain (analyze, buffers)  select *\n from pgbench_accounts where aid between 1000 and 10000000\n and abalance != 0;\" >>test.txt\n done\n\n I get the following results:\n\n effective_io_concurrency=0\n  Execution time: 40262.781 ms\n effective_io_concurrency=1\n  Execution time: 98125.987 ms\n effective_io_concurrency=2\n  Execution time: 55343.776 ms\n effective_io_concurrency=4\n  Execution time: 52505.638 ms\n effective_io_concurrency=8\n  Execution time: 54954.024 ms\n effective_io_concurrency=16\n  Execution time: 54346.455 ms\n effective_io_concurrency=32\n  Execution time: 55196.626 ms\n effective_io_concurrency=64\n  Execution time: 55057.956 ms\n effective_io_concurrency=128\n  Execution time: 54963.510 ms\n effective_io_concurrency=256\n  Execution time: 54339.258 ms\n\n The test was using 100 GB gp2 SSD EBS. More detailed query\n plans are attached.\n\n PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc\n (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\n The results look really confusing to me in two ways. The\n first one is that I've seen recommendations to set\n effective_io_concurrency=256 (or more) on EBS. 
The other one\n is that effective_io_concurrency=1 (the worst case) is\n actually the default for PostgreSQL on Linux.\n\n Thoughts?\n\n Regards,\n Vitaliy", "msg_date": "Wed, 31 Jan 2018 15:15:30 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "2018-01-31 14:15 GMT+01:00 Vitaliy Garnashevich <[email protected]>:\n\n> I've tried to re-run the test for some specific values of\n> effective_io_concurrency. The results were the same.\n>\n> That's why I don't think the order of tests or variability in \"hardware\"\n> performance affected the results.\n>\n\nAWS uses some intelligent throttling, so it can be related to hardware.\n\n\n> Regards,\n> Vitaliy\n>\n>\n> On 31/01/2018 15:01, Rick Otten wrote:\n>\n> We moved our stuff out of AWS a little over a year ago because the\n> performance was crazy inconsistent and unpredictable. I think they do a\n> lot of oversubscribing so you get strange sawtooth performance patterns\n> depending on who else is sharing your infrastructure and what they are\n> doing at the time.\n>\n> The same unit of work would take 20 minutes each for several hours, and\n> then take 2 1/2 hours each for a day, and then back to 20 minutes, and\n> sometimes anywhere in between for hours or days at a stretch. I could\n> never tell the business when the processing would be done, which made it\n> hard for them to set expectations with customers, promise deliverables, or\n> manage the business. Smaller nodes seemed to be worse than larger nodes, I\n> only have theories as to why. I never got good support from AWS to help me\n> figure out what was happening.\n>\n> My first thought is to run the same test on different days of the week and\n> different times of day to see if the numbers change radically. Maybe spin\n> up a node in another data center and availability zone and try the test\n> there too.\n>\n> My real suggestion is to move to Google Cloud or Rackspace or Digital\n> Ocean or somewhere other than AWS. (We moved to Google Cloud and have\n> been very happy there. 
The performance is much more consistent, the\n> management UI is more intuitive, AND the cost for equivalent infrastructure\n> is lower too.)\n>\n>\n> On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>> I've tried to run a benchmark, similar to this one:\n>>\n>> https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9\n>> cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#\n>> CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n>>\n>> CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n>>\n>> pgbench -i -s 1000 --tablespace=test pgbench\n>>\n>> echo \"\" >test.txt\n>> for i in 0 1 2 4 8 16 32 64 128 256 ; do\n>> sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n>> echo \"effective_io_concurrency=$i\" >>test.txt\n>> psql pgbench -c \"set effective_io_concurrency=$i; set\n>> enable_indexscan=off; explain (analyze, buffers) select * from\n>> pgbench_accounts where aid between 1000 and 10000000 and abalance != 0;\"\n>> >>test.txt\n>> done\n>>\n>> I get the following results:\n>>\n>> effective_io_concurrency=0\n>> Execution time: 40262.781 ms\n>> effective_io_concurrency=1\n>> Execution time: 98125.987 ms\n>> effective_io_concurrency=2\n>> Execution time: 55343.776 ms\n>> effective_io_concurrency=4\n>> Execution time: 52505.638 ms\n>> effective_io_concurrency=8\n>> Execution time: 54954.024 ms\n>> effective_io_concurrency=16\n>> Execution time: 54346.455 ms\n>> effective_io_concurrency=32\n>> Execution time: 55196.626 ms\n>> effective_io_concurrency=64\n>> Execution time: 55057.956 ms\n>> effective_io_concurrency=128\n>> Execution time: 54963.510 ms\n>> effective_io_concurrency=256\n>> Execution time: 54339.258 ms\n>>\n>> The test was using 100 GB gp2 SSD EBS. More detailed query plans are\n>> attached.\n>>\n>> PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n>> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n>>\n>> The results look really confusing to me in two ways. The first one is\n>> that I've seen recommendations to set effective_io_concurrency=256 (or\n>> more) on EBS. The other one is that effective_io_concurrency=1 (the worst\n>> case) is actually the default for PostgreSQL on Linux.\n>>\n>> Thoughts?\n>>\n>> Regards,\n>> Vitaliy\n>>\n>>\n>\n>\n\n2018-01-31 14:15 GMT+01:00 Vitaliy Garnashevich <[email protected]>:\n\nI've tried to re-run the test for some\n specific values of effective_io_concurrency. The results were the\n same. \n\n That's why I don't think the order of tests or variability in\n \"hardware\" performance affected the results.AWS uses some intelligent throttling, so it can be related to hardware. \n\n Regards,\n Vitaliy\n\n On 31/01/2018 15:01, Rick Otten wrote:\n\n\nWe moved our stuff out of AWS a little over a year\n ago because the performance was crazy inconsistent and\n unpredictable.  I think they do a lot of oversubscribing so you\n get strange sawtooth performance patterns depending on who else\n is sharing your infrastructure and what they are doing at the\n time.\n \n\nThe same unit of work would take 20 minutes each for\n several hours, and then take 2 1/2 hours each for a day, and\n then back to 20 minutes, and sometimes anywhere in between for\n hours or days at a stretch.  I could never tell the business\n when the processing would be done, which made it hard for them\n to set expectations with customers, promise deliverables, or\n manage the business.  
Smaller nodes seemed to be worse than\n larger nodes, I only have theories as to why.  I never got\n good support from AWS to help me figure out what was\n happening.\n\n\nMy first thought is to run the same test on different\n days of the week and different times of day to see if the\n numbers change radically.  Maybe spin up a node in another\n data center and availability zone and try the test there\n too.\n\n\n\nMy real suggestion is to move to Google Cloud or Rackspace\n or Digital Ocean or somewhere other than AWS.   (We moved to\n Google Cloud and have been very happy there.  The performance\n is much more consistent, the management UI is more intuitive,\n AND the cost for equivalent infrastructure is lower too.)\n\n\n\n\nOn Wed, Jan 31, 2018 at 7:03 AM,\n Vitaliy Garnashevich <[email protected]>\n wrote:\nHi,\n\n I've tried to run a benchmark, similar to this one:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\n CREATE TABLESPACE test OWNER postgres LOCATION\n '/path/to/ebs';\n\n pgbench -i -s 1000 --tablespace=test pgbench\n\n echo \"\" >test.txt\n for i in 0 1 2 4 8 16 32 64 128 256 ; do\n   sync; echo 3 > /proc/sys/vm/drop_caches; service\n postgresql restart\n   echo \"effective_io_concurrency=$i\" >>test.txt\n   psql pgbench -c \"set effective_io_concurrency=$i; set\n enable_indexscan=off; explain (analyze, buffers)  select *\n from pgbench_accounts where aid between 1000 and 10000000\n and abalance != 0;\" >>test.txt\n done\n\n I get the following results:\n\n effective_io_concurrency=0\n  Execution time: 40262.781 ms\n effective_io_concurrency=1\n  Execution time: 98125.987 ms\n effective_io_concurrency=2\n  Execution time: 55343.776 ms\n effective_io_concurrency=4\n  Execution time: 52505.638 ms\n effective_io_concurrency=8\n  Execution time: 54954.024 ms\n effective_io_concurrency=16\n  Execution time: 54346.455 ms\n effective_io_concurrency=32\n  Execution time: 55196.626 ms\n effective_io_concurrency=64\n  Execution time: 55057.956 ms\n effective_io_concurrency=128\n  Execution time: 54963.510 ms\n effective_io_concurrency=256\n  Execution time: 54339.258 ms\n\n The test was using 100 GB gp2 SSD EBS. More detailed query\n plans are attached.\n\n PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc\n (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\n The results look really confusing to me in two ways. The\n first one is that I've seen recommendations to set\n effective_io_concurrency=256 (or more) on EBS. The other one\n is that effective_io_concurrency=1 (the worst case) is\n actually the default for PostgreSQL on Linux.\n\n Thoughts?\n\n Regards,\n Vitaliy", "msg_date": "Wed, 31 Jan 2018 14:24:23 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "> I've tried to re-run the test for some specific values of effective_io_concurrency. The results were the same. \n\n > That's why I don't think the order of tests or variability in \"hardware\" performance affected the results.\n\n\n\nWe run many MS SQL server VMs in AWS with more than adequate performance.\n\n \n\nAWS EBS performance is variable and depends on various factors, mainly the size of the volume and the size of the VM it is attached to. 
The bigger the VM, the more EBS “bandwidth” is available, especially if the VM is EBS Optimised.\n\n \n\nThe size of the disk determines the IOPS available, with smaller disks naturally getting less. However, even a small disk with (say) 300 IOPS is allowed to burst up to 3000 IOPS for a while and then gets clobbered. If you want predictable performance then get a bigger disk! If you really want maximum, predictable performance get an EBS Optimised VM and use Provisioned IOPS EBS volumes…. At a price!\n\n \n\nCheers,\n\nGary.\n\nOn 31/01/2018 15:01, Rick Otten wrote:\n\nWe moved our stuff out of AWS a little over a year ago because the performance was crazy inconsistent and unpredictable. I think they do a lot of oversubscribing so you get strange sawtooth performance patterns depending on who else is sharing your infrastructure and what they are doing at the time. \n\n \n\nThe same unit of work would take 20 minutes each for several hours, and then take 2 1/2 hours each for a day, and then back to 20 minutes, and sometimes anywhere in between for hours or days at a stretch. I could never tell the business when the processing would be done, which made it hard for them to set expectations with customers, promise deliverables, or manage the business. Smaller nodes seemed to be worse than larger nodes, I only have theories as to why. I never got good support from AWS to help me figure out what was happening.\n\n \n\nMy first thought is to run the same test on different days of the week and different times of day to see if the numbers change radically. Maybe spin up a node in another data center and availability zone and try the test there too.\n\n \n\nMy real suggestion is to move to Google Cloud or Rackspace or Digital Ocean or somewhere other than AWS. (We moved to Google Cloud and have been very happy there. The performance is much more consistent, the management UI is more intuitive, AND the cost for equivalent infrastructure is lower too.)\n\n \n\n \n\nOn Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich <[email protected] <mailto:[email protected]> > wrote:\n\nHi,\n\nI've tried to run a benchmark, similar to this one:\n\nhttps://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\nCREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n\npgbench -i -s 1000 --tablespace=test pgbench\n\necho \"\" >test.txt\nfor i in 0 1 2 4 8 16 32 64 128 256 ; do\n sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart\n echo \"effective_io_concurrency=$i\" >>test.txt\n psql pgbench -c \"set effective_io_concurrency=$i; set enable_indexscan=off; explain (analyze, buffers) select * from pgbench_accounts where aid between 1000 and 10000000 and abalance != 0;\" >>test.txt\ndone\n\nI get the following results:\n\neffective_io_concurrency=0\n Execution time: 40262.781 ms\neffective_io_concurrency=1\n Execution time: 98125.987 ms\neffective_io_concurrency=2\n Execution time: 55343.776 ms\neffective_io_concurrency=4\n Execution time: 52505.638 ms\neffective_io_concurrency=8\n Execution time: 54954.024 ms\neffective_io_concurrency=16\n Execution time: 54346.455 ms\neffective_io_concurrency=32\n Execution time: 55196.626 ms\neffective_io_concurrency=64\n Execution time: 55057.956 ms\neffective_io_concurrency=128\n Execution time: 54963.510 ms\neffective_io_concurrency=256\n Execution time: 54339.258 ms\n\nThe test was using 100 GB gp2 SSD EBS. 
More detailed query plans are attached.\n\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nThe results look really confusing to me in two ways. The first one is that I've seen recommendations to set effective_io_concurrency=256 (or more) on EBS. The other one is that effective_io_concurrency=1 (the worst case) is actually the default for PostgreSQL on Linux.\n\nThoughts?\n\nRegards,\nVitaliy\n", "msg_date": "Wed, 31 Jan 2018 13:46:00 -0000", "msg_from": "\"Gary Doades\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "More tests:\n\nio1, 100 GB:\n\neffective_io_concurrency=0\n  Execution time: 40333.626 ms\neffective_io_concurrency=1\n  Execution time: 163840.500 ms\neffective_io_concurrency=2\n  Execution time: 162606.330 ms\neffective_io_concurrency=4\n  Execution time: 163670.405 ms\neffective_io_concurrency=8\n  Execution time: 161800.478 ms\neffective_io_concurrency=16\n  Execution time: 161962.319 ms\neffective_io_concurrency=32\n  Execution time: 160451.435 ms\neffective_io_concurrency=64\n  Execution time: 161763.632 ms\neffective_io_concurrency=128\n  Execution time: 161687.398 ms\neffective_io_concurrency=256\n  Execution time: 160945.066 ms\n\neffective_io_concurrency=256\n  Execution time: 161226.440 ms\neffective_io_concurrency=128\n  Execution time: 161977.954 ms\neffective_io_concurrency=64\n  Execution time: 159122.006 ms\neffective_io_concurrency=32\n  Execution time: 154923.569 ms\neffective_io_concurrency=16\n  Execution time: 160922.819 ms\neffective_io_concurrency=8\n  Execution time: 160577.122 ms\neffective_io_concurrency=4\n  Execution time: 157509.481 ms\neffective_io_concurrency=2\n  Execution time: 161806.713 ms\neffective_io_concurrency=1\n  Execution time: 164026.708 ms\neffective_io_concurrency=0\n  Execution time: 40196.182 ms\n\n\nst1, 500 GB:\n\neffective_io_concurrency=0\n  Execution time: 40542.583 ms\neffective_io_concurrency=1\n  Execution time: 119996.892 ms\neffective_io_concurrency=2\n  Execution time: 51137.998 
ms\neffective_io_concurrency=4\n  Execution time: 42301.922 ms\neffective_io_concurrency=8\n  Execution time: 42081.877 ms\neffective_io_concurrency=16\n  Execution time: 42253.782 ms\neffective_io_concurrency=32\n  Execution time: 42087.216 ms\neffective_io_concurrency=64\n  Execution time: 42112.105 ms\neffective_io_concurrency=128\n  Execution time: 42271.850 ms\neffective_io_concurrency=256\n  Execution time: 42213.074 ms\n\neffective_io_concurrency=256\n  Execution time: 42255.568 ms\neffective_io_concurrency=128\n  Execution time: 42030.515 ms\neffective_io_concurrency=64\n  Execution time: 41713.753 ms\neffective_io_concurrency=32\n  Execution time: 42035.436 ms\neffective_io_concurrency=16\n  Execution time: 42221.581 ms\neffective_io_concurrency=8\n  Execution time: 42203.730 ms\neffective_io_concurrency=4\n  Execution time: 42236.082 ms\neffective_io_concurrency=2\n  Execution time: 49531.558 ms\neffective_io_concurrency=1\n  Execution time: 117160.222 ms\neffective_io_concurrency=0\n  Execution time: 40059.259 ms\n\nRegards,\nVitaliy\n\nOn 31/01/2018 15:46, Gary Doades wrote:\n>\n> > I've tried to re-run the test for some specific values of \n> effective_io_concurrency. The results were the same.\n>\n>  > That's why I don't think the order of tests or variability in \n> \"hardware\" performance affected the results.\n>\n> We run many MS SQL server VMs in AWS with more than adequate performance.\n>\n> AWS EBS performance is variable and depends on various factors, mainly \n> the size of the volume and the size of the VM it is attached to. The \n> bigger the VM, the more EBS “bandwidth” is available, especially if \n> the VM is EBS Optimised.\n>\n> The size of the disk determines the IOPS available, with smaller disks \n> naturally getting less. However, even a small disk with (say) 300 IOPS \n> is allowed to burst up to 3000 IOPS for a while and then gets \n> clobbered. If you want predictable performance then get a bigger disk! \n> If you really want maximum, predictable performance get an EBS \n> Optimised VM and use Provisioned IOPS EBS volumes…. At a price!\n>\n> Cheers,\n>\n> Gary.\n>\n> On 31/01/2018 15:01, Rick Otten wrote:\n>\n> We moved our stuff out of AWS a little over a year ago because the\n> performance was crazy inconsistent and unpredictable.  I think\n> they do a lot of oversubscribing so you get strange sawtooth\n> performance patterns depending on who else is sharing your\n> infrastructure and what they are doing at the time.\n>\n> The same unit of work would take 20 minutes each for several\n> hours, and then take 2 1/2 hours each for a day, and then back to\n> 20 minutes, and sometimes anywhere in between for hours or days at\n> a stretch.  I could never tell the business when the processing\n> would be done, which made it hard for them to set expectations\n> with customers, promise deliverables, or manage the business. \n> Smaller nodes seemed to be worse than larger nodes, I only have\n> theories as to why.  I never got good support from AWS to help me\n> figure out what was happening.\n>\n> My first thought is to run the same test on different days of the\n> week and different times of day to see if the numbers change\n> radically.  Maybe spin up a node in another data center and\n> availability zone and try the test there too.\n>\n> My real suggestion is to move to Google Cloud or Rackspace or\n> Digital Ocean or somewhere other than AWS.   (We moved to Google\n> Cloud and have been very happy there.  
The performance is much\n> more consistent, the management UI is more intuitive, AND the cost\n> for equivalent infrastructure is lower too.)\n>\n> On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> I've tried to run a benchmark, similar to this one:\n>\n> https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n>\n> CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';\n>\n> pgbench -i -s 1000 --tablespace=test pgbench\n>\n> echo \"\" >test.txt\n> for i in 0 1 2 4 8 16 32 64 128 256 ; do\n>   sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql\n> restart\n>   echo \"effective_io_concurrency=$i\" >>test.txt\n>   psql pgbench -c \"set effective_io_concurrency=$i; set\n> enable_indexscan=off; explain (analyze, buffers) select * from\n> pgbench_accounts where aid between 1000 and 10000000 and\n> abalance != 0;\" >>test.txt\n> done\n>\n> I get the following results:\n>\n> effective_io_concurrency=0\n>  Execution time: 40262.781 ms\n> effective_io_concurrency=1\n>  Execution time: 98125.987 ms\n> effective_io_concurrency=2\n>  Execution time: 55343.776 ms\n> effective_io_concurrency=4\n>  Execution time: 52505.638 ms\n> effective_io_concurrency=8\n>  Execution time: 54954.024 ms\n> effective_io_concurrency=16\n>  Execution time: 54346.455 ms\n> effective_io_concurrency=32\n>  Execution time: 55196.626 ms\n> effective_io_concurrency=64\n>  Execution time: 55057.956 ms\n> effective_io_concurrency=128\n>  Execution time: 54963.510 ms\n> effective_io_concurrency=256\n>  Execution time: 54339.258 ms\n>\n> The test was using 100 GB gp2 SSD EBS. More detailed query\n> plans are attached.\n>\n> PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc\n> (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n>\n> The results look really confusing to me in two ways. The first\n> one is that I've seen recommendations to set\n> effective_io_concurrency=256 (or more) on EBS. 
The other one\n> is that effective_io_concurrency=1 (the worst case) is\n> actually the default for PostgreSQL on Linux.\n>\n> Thoughts?\n>\n> Regards,\n> Vitaliy\n>\n", "msg_date": "Wed, 31 Jan 2018 18:57:14 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Wed, Jan 31, 2018 at 1:57 PM, Vitaliy Garnashevich\n<[email protected]> wrote:\n> More tests:\n>\n> io1, 100 GB:\n>\n> effective_io_concurrency=0\n> Execution time: 40333.626 ms\n> effective_io_concurrency=1\n> Execution time: 163840.500 ms\n\nIn my experience playing with prefetch, e_i_c>0 interferes with kernel\nread-ahead. What you've got there would make sense if what postgres\nthinks will be random I/O ends up being sequential. With e_i_c=0, the\nkernel will optimize the hell out of it, because it's a predictable\npattern. But with e_i_c=1, the kernel's optimization gets disabled but\npostgres isn't reading much ahead, so you get the worst possible case.\n\n", "msg_date": "Wed, 31 Jan 2018 16:34:18 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "I've done some more tests. Here they are all:\n\nio1, 100 GB SSD, 1000 IOPS\neffective_io_concurrency=0 Execution time: 40333.626 ms\neffective_io_concurrency=1 Execution time: 163840.500 ms\neffective_io_concurrency=2 Execution time: 162606.330 ms\neffective_io_concurrency=4 Execution time: 163670.405 ms\neffective_io_concurrency=8 Execution time: 161800.478 ms\neffective_io_concurrency=16 Execution time: 161962.319 ms\neffective_io_concurrency=32 Execution time: 160451.435 ms\neffective_io_concurrency=64 Execution time: 161763.632 ms\neffective_io_concurrency=128 Execution time: 161687.398 ms\neffective_io_concurrency=256 Execution time: 160945.066 ms\neffective_io_concurrency=256 Execution time: 161226.440 ms\neffective_io_concurrency=128 Execution time: 161977.954 ms\neffective_io_concurrency=64 Execution time: 159122.006 ms\neffective_io_concurrency=32 Execution time: 154923.569 ms\neffective_io_concurrency=16 Execution time: 160922.819 ms\neffective_io_concurrency=8 Execution time: 160577.122 ms\neffective_io_concurrency=4 Execution time: 157509.481 ms\neffective_io_concurrency=2 Execution time: 161806.713 ms\neffective_io_concurrency=1 Execution time: 164026.708 ms\neffective_io_concurrency=0 Execution time: 40196.182 ms\n\ngp2, 100 GB SSD\neffective_io_concurrency=0 Execution time: 40262.781 ms\neffective_io_concurrency=1 Execution time: 98125.987 ms\neffective_io_concurrency=2 Execution time: 55343.776 ms\neffective_io_concurrency=4 Execution time: 52505.638 ms\neffective_io_concurrency=8 Execution time: 54954.024 ms\neffective_io_concurrency=16 Execution time: 54346.455 ms\neffective_io_concurrency=32 Execution time: 55196.626 ms\neffective_io_concurrency=64 Execution time: 55057.956 ms\neffective_io_concurrency=128 Execution time: 54963.510 ms\neffective_io_concurrency=256 Execution time: 54339.258 ms\n\nio1, 1 TB SSD, 3000 IOPS\neffective_io_concurrency=0 Execution time: 40691.396 ms\neffective_io_concurrency=1 Execution time: 87524.939 
ms\neffective_io_concurrency=2 Execution time: 54197.982 ms\neffective_io_concurrency=4 Execution time: 55082.740 ms\neffective_io_concurrency=8 Execution time: 54838.161 ms\neffective_io_concurrency=16 Execution time: 52561.553 ms\neffective_io_concurrency=32 Execution time: 54266.847 ms\neffective_io_concurrency=64 Execution time: 54683.102 ms\neffective_io_concurrency=128 Execution time: 54643.874 ms\neffective_io_concurrency=256 Execution time: 42944.938 ms\n\ngp2, 1 TB SSD\neffective_io_concurrency=0 Execution time: 40072.880 ms\neffective_io_concurrency=1 Execution time: 83528.679 ms\neffective_io_concurrency=2 Execution time: 55706.941 ms\neffective_io_concurrency=4 Execution time: 55664.646 ms\neffective_io_concurrency=8 Execution time: 54699.658 ms\neffective_io_concurrency=16 Execution time: 54632.291 ms\neffective_io_concurrency=32 Execution time: 54793.305 ms\neffective_io_concurrency=64 Execution time: 55227.875 ms\neffective_io_concurrency=128 Execution time: 54638.744 ms\neffective_io_concurrency=256 Execution time: 54869.761 ms\n\nst1, 500 GB HDD\neffective_io_concurrency=0 Execution time: 40542.583 ms\neffective_io_concurrency=1 Execution time: 119996.892 ms\neffective_io_concurrency=2 Execution time: 51137.998 ms\neffective_io_concurrency=4 Execution time: 42301.922 ms\neffective_io_concurrency=8 Execution time: 42081.877 ms\neffective_io_concurrency=16 Execution time: 42253.782 ms\neffective_io_concurrency=32 Execution time: 42087.216 ms\neffective_io_concurrency=64 Execution time: 42112.105 ms\neffective_io_concurrency=128 Execution time: 42271.850 ms\neffective_io_concurrency=256 Execution time: 42213.074 ms\n\nRegards,\nVitaliy\n\n\n", "msg_date": "Wed, 31 Jan 2018 22:29:09 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Wed, Jan 31, 2018 at 4:03 AM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n>\n> The results look really confusing to me in two ways. The first one is that\n> I've seen recommendations to set effective_io_concurrency=256 (or more) on\n> EBS.\n\n\nI would not expect this to make much of a difference on a table which is\nperfectly correlated with the index. You would have to create an accounts\ntable which is randomly ordered to have a meaningful benchmark of the eic\nparameter.\n\nI don't know why the default for eic is 1. It seems like that just turns\non the eic mechanism, without any hope of benefiting from it.\n\nCheers,\n\nJeff\n\nOn Wed, Jan 31, 2018 at 4:03 AM, Vitaliy Garnashevich <[email protected]> wrote:\nThe results look really confusing to me in two ways. The first one is that I've seen recommendations to set effective_io_concurrency=256 (or more) on EBS. I would not expect this to make much of a difference on a table which is perfectly correlated with the index.  You would have to create an accounts table which is randomly ordered to have a meaningful benchmark of the eic parameter.I don't know why the default for eic is 1.  
It seems like that just turns on the eic mechanism, without any hope of benefiting from it.Cheers,Jeff", "msg_date": "Wed, 31 Jan 2018 13:00:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "\n\n\n\n\n\n\n\nHI      I think this parameter will be usefull when the storage using RAID stripe , otherwise turn up this parameter is meaningless when only has one device。\n\n\n\n\n\n发自网易邮箱大师 \n\nOn 2/1/2018 04:29,Vitaliy Garnashevich<[email protected]> wrote: \n\n\nI've done some more tests. Here they are all:io1, 100 GB SSD, 1000 IOPSeffective_io_concurrency=0 Execution time: 40333.626 mseffective_io_concurrency=1 Execution time: 163840.500 mseffective_io_concurrency=2 Execution time: 162606.330 mseffective_io_concurrency=4 Execution time: 163670.405 mseffective_io_concurrency=8 Execution time: 161800.478 mseffective_io_concurrency=16 Execution time: 161962.319 mseffective_io_concurrency=32 Execution time: 160451.435 mseffective_io_concurrency=64 Execution time: 161763.632 mseffective_io_concurrency=128 Execution time: 161687.398 mseffective_io_concurrency=256 Execution time: 160945.066 mseffective_io_concurrency=256 Execution time: 161226.440 mseffective_io_concurrency=128 Execution time: 161977.954 mseffective_io_concurrency=64 Execution time: 159122.006 mseffective_io_concurrency=32 Execution time: 154923.569 mseffective_io_concurrency=16 Execution time: 160922.819 mseffective_io_concurrency=8 Execution time: 160577.122 mseffective_io_concurrency=4 Execution time: 157509.481 mseffective_io_concurrency=2 Execution time: 161806.713 mseffective_io_concurrency=1 Execution time: 164026.708 mseffective_io_concurrency=0 Execution time: 40196.182 msgp2, 100 GB SSDeffective_io_concurrency=0 Execution time: 40262.781 mseffective_io_concurrency=1 Execution time: 98125.987 mseffective_io_concurrency=2 Execution time: 55343.776 mseffective_io_concurrency=4 Execution time: 52505.638 mseffective_io_concurrency=8 Execution time: 54954.024 mseffective_io_concurrency=16 Execution time: 54346.455 mseffective_io_concurrency=32 Execution time: 55196.626 mseffective_io_concurrency=64 Execution time: 55057.956 mseffective_io_concurrency=128 Execution time: 54963.510 mseffective_io_concurrency=256 Execution time: 54339.258 msio1, 1 TB SSD, 3000 IOPSeffective_io_concurrency=0 Execution time: 40691.396 mseffective_io_concurrency=1 Execution time: 87524.939 mseffective_io_concurrency=2 Execution time: 54197.982 mseffective_io_concurrency=4 Execution time: 55082.740 mseffective_io_concurrency=8 Execution time: 54838.161 mseffective_io_concurrency=16 Execution time: 52561.553 mseffective_io_concurrency=32 Execution time: 54266.847 mseffective_io_concurrency=64 Execution time: 54683.102 mseffective_io_concurrency=128 Execution time: 54643.874 mseffective_io_concurrency=256 Execution time: 42944.938 msgp2, 1 TB SSDeffective_io_concurrency=0 Execution time: 40072.880 mseffective_io_concurrency=1 Execution time: 83528.679 mseffective_io_concurrency=2 Execution time: 55706.941 mseffective_io_concurrency=4 Execution time: 55664.646 mseffective_io_concurrency=8 Execution time: 54699.658 mseffective_io_concurrency=16 Execution time: 54632.291 mseffective_io_concurrency=32 Execution time: 54793.305 mseffective_io_concurrency=64 Execution time: 55227.875 mseffective_io_concurrency=128 Execution time: 54638.744 mseffective_io_concurrency=256 Execution time: 54869.761 msst1, 500 GB HDDeffective_io_concurrency=0 Execution time: 
40542.583 mseffective_io_concurrency=1 Execution time: 119996.892 mseffective_io_concurrency=2 Execution time: 51137.998 mseffective_io_concurrency=4 Execution time: 42301.922 mseffective_io_concurrency=8 Execution time: 42081.877 mseffective_io_concurrency=16 Execution time: 42253.782 mseffective_io_concurrency=32 Execution time: 42087.216 mseffective_io_concurrency=64 Execution time: 42112.105 mseffective_io_concurrency=128 Execution time: 42271.850 mseffective_io_concurrency=256 Execution time: 42213.074 msRegards,Vitaliy\n\n\n", "msg_date": "Thu, 1 Feb 2018 10:21:28 +0800", "msg_from": "hzzhangjiazhi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Wed, Jan 31, 2018 at 11:21 PM, hzzhangjiazhi\n<[email protected]> wrote:\n> HI\n>\n> I think this parameter will be usefull when the storage using RAID\n> stripe , otherwise turn up this parameter is meaningless when only has one\n> device。\n\nNot at all. Especially on EBS, where keeping a relatively full queue\nis necessary to get max thoughput out of the drive.\n\nProblem is, if you're scanning a highly correlated index, the\nmechanism is counterproductive. I had worked on some POC patches for\ncorrecting that, I guess I could work something out, but it's\nlow-priority for me. Especially since it's actually a kernel \"bug\" (or\nshortcoming), that could be fixed in the kernel rather than worked\naround by postgres.\n\n", "msg_date": "Thu, 1 Feb 2018 15:39:07 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "I did some more tests. I've made an SQL dump of the table. Then used \nhead/tail commands to cut the data part. 
Then used shuf command to \nshuffle rows, and then joined the pieces back and restored the table \nback into DB.\n\nBefore:\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\n{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}\n\neffective_io_concurrency=0 Execution time: 1455.336 ms\neffective_io_concurrency=1 Execution time: 8365.070 ms\neffective_io_concurrency=2 Execution time: 4791.961 ms\neffective_io_concurrency=4 Execution time: 4113.713 ms\neffective_io_concurrency=8 Execution time: 1584.862 ms\neffective_io_concurrency=16 Execution time: 1533.096 ms\neffective_io_concurrency=8 Execution time: 1494.494 ms\neffective_io_concurrency=4 Execution time: 3235.892 ms\neffective_io_concurrency=2 Execution time: 4624.334 ms\neffective_io_concurrency=1 Execution time: 7831.310 ms\neffective_io_concurrency=0 Execution time: 1422.203 ms\n\nAfter:\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\n{6861090,18316007,2361004,11880097,5079470,9859942,13776329,12687163,3793362,18312052,15912971,9928864,10179242,9307499,2737986,13911147,5337329,12582498,3019085,4631617}\n\neffective_io_concurrency=0 Execution time: 71321.723 ms\neffective_io_concurrency=1 Execution time: 180230.742 ms\neffective_io_concurrency=2 Execution time: 98635.566 ms\neffective_io_concurrency=4 Execution time: 91464.375 ms\neffective_io_concurrency=8 Execution time: 91048.939 ms\neffective_io_concurrency=16 Execution time: 97682.475 ms\neffective_io_concurrency=8 Execution time: 91262.404 ms\neffective_io_concurrency=4 Execution time: 90945.560 ms\neffective_io_concurrency=2 Execution time: 97019.504 ms\neffective_io_concurrency=1 Execution time: 180331.474 ms\neffective_io_concurrency=0 Execution time: 71469.484 ms\n\nThe numbers are not directly comparable with the previous tests, because \nthis time I used scale factor 200.\n\nRegards,\nVitaliy\n\nOn 2018-02-01 20:39, Claudio Freire wrote:\n> On Wed, Jan 31, 2018 at 11:21 PM, hzzhangjiazhi\n> <[email protected]> wrote:\n>> HI\n>>\n>> I think this parameter will be usefull when the storage using RAID\n>> stripe , otherwise turn up this parameter is meaningless when only has one\n>> device。\n> Not at all. Especially on EBS, where keeping a relatively full queue\n> is necessary to get max thoughput out of the drive.\n>\n> Problem is, if you're scanning a highly correlated index, the\n> mechanism is counterproductive. I had worked on some POC patches for\n> correcting that, I guess I could work something out, but it's\n> low-priority for me. Especially since it's actually a kernel \"bug\" (or\n> shortcoming), that could be fixed in the kernel rather than worked\n> around by postgres.\n>", "msg_date": "Fri, 2 Feb 2018 13:46:22 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "> Problem is, if you're scanning a highly correlated index, the\n> mechanism is counterproductive.\n\n> I would not expect this to make much of a difference on a table which \n> is perfectly correlated with the index.  You would have to create an \n> accounts table which is randomly ordered to have a meaningful \n> benchmark of the eic parameter.\n\nIf I read the postgres source code correctly, then the pages are sorted \nin tbm_begin_iterate() before being iterated, so I don't think \ncorrelation of index should matter. 
The tests on shuffled records show \nthe same trend in execution time for different eic values.\n\nI did some more tests, this time on DigitalOcean/SSD. I also tried \ndifferent kernel versions (3.13 and 4.4). I've run each test several times.\n\nUbuntu 16.04.3 LTS\nLinux ubuntu-s-2vcpu-4gb-ams3-01 4.4.0-112-generic #135-Ubuntu SMP Fri \nJan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu \n5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit\n\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\n                       array_agg\n------------------------------------------------------\n  {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}\n(1 row)\n\neffective_io_concurrency=0 Execution time: 3910.770 ms\neffective_io_concurrency=1 Execution time: 10754.483 ms\neffective_io_concurrency=2 Execution time: 5347.845 ms\neffective_io_concurrency=4 Execution time: 5737.166 ms\neffective_io_concurrency=8 Execution time: 4904.962 ms\neffective_io_concurrency=16 Execution time: 4947.941 ms\neffective_io_concurrency=8 Execution time: 4737.117 ms\neffective_io_concurrency=4 Execution time: 4749.065 ms\neffective_io_concurrency=2 Execution time: 5031.390 ms\neffective_io_concurrency=1 Execution time: 10117.927 ms\neffective_io_concurrency=0 Execution time: 3769.260 ms\n\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\narray_agg\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  {14845391,12121312,18579380,9075771,7602183,762831,8485877,1035607,4451695,4686093,1925254,3462677,9634221,14144638,17894662,8247722,17996891,14842493,13832379,2052647}\n(1 row)\n\neffective_io_concurrency=0 Execution time: 6801.229 ms\neffective_io_concurrency=1 Execution time: 14217.719 ms\neffective_io_concurrency=2 Execution time: 9126.216 ms\neffective_io_concurrency=4 Execution time: 8797.717 ms\neffective_io_concurrency=8 Execution time: 8759.317 ms\neffective_io_concurrency=16 Execution time: 8431.835 ms\neffective_io_concurrency=8 Execution time: 9387.119 ms\neffective_io_concurrency=4 Execution time: 9064.808 ms\neffective_io_concurrency=2 Execution time: 9359.062 ms\neffective_io_concurrency=1 Execution time: 16639.386 ms\neffective_io_concurrency=0 Execution time: 6560.935 ms\n\n\nUbuntu 14.04.5 LTS\nLinux ubuntu-s-2vcpu-4gb-ams3-02 3.13.0-139-generic #188-Ubuntu SMP Tue \nJan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\nPostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu \n4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit\n\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\n                       array_agg\n------------------------------------------------------\n  {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}\n(1 row)\n\neffective_io_concurrency=0 Execution time: 3760.865 ms\neffective_io_concurrency=1 Execution time: 11092.846 ms\neffective_io_concurrency=2 Execution time: 4933.662 ms\neffective_io_concurrency=4 Execution time: 4733.713 ms\neffective_io_concurrency=8 Execution time: 4860.886 ms\neffective_io_concurrency=16 Execution time: 5063.696 ms\neffective_io_concurrency=8 Execution time: 4670.155 ms\neffective_io_concurrency=4 Execution time: 5049.901 ms\neffective_io_concurrency=2 Execution time: 4785.219 ms\neffective_io_concurrency=1 Execution time: 11106.143 
ms\neffective_io_concurrency=0 Execution time: 3779.058 ms\n\nselect array_agg(aid) from (select aid from pgbench_accounts order by \nctid limit 20)_;\narray_agg\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  {8089611,788082,3477731,10034640,9256860,15432349,2412452,10087114,10386959,7199759,17253672,7798185,160908,1960920,13287370,14970792,18578221,13892448,3532901,3560583}\n(1 row)\n\neffective_io_concurrency=0 Execution time: 6243.600 ms\neffective_io_concurrency=1 Execution time: 14613.348 ms\neffective_io_concurrency=2 Execution time: 8250.552 ms\neffective_io_concurrency=4 Execution time: 8286.333 ms\neffective_io_concurrency=8 Execution time: 8167.817 ms\neffective_io_concurrency=16 Execution time: 8193.186 ms\neffective_io_concurrency=8 Execution time: 8206.614 ms\neffective_io_concurrency=4 Execution time: 8375.153 ms\neffective_io_concurrency=2 Execution time: 8354.106 ms\neffective_io_concurrency=1 Execution time: 14139.712 ms\neffective_io_concurrency=0 Execution time: 6409.229 ms\n\n\nLooks like this behavior is not caused by, and does not depend on:\n- variable performance in the cloud\n- order of rows in the table\n- whether the disk is EBS (backed by SSD or HDD), or ordinary SSD\n- kernel version\n\nDoes this mean that the default setting for eic on Linux is just \ninadequate for how the modern kernels behave? Or am I missing something \nelse in the tests?\n\nRegards,\nVitaliy", "msg_date": "Sun, 4 Feb 2018 01:05:05 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Sat, Feb 3, 2018 at 8:05 PM, Vitaliy Garnashevich\n<[email protected]> wrote:\n> Looks like this behavior is not caused by, and does not depend on:\n> - variable performance in the cloud\n> - order of rows in the table\n> - whether the disk is EBS (backed by SSD or HDD), or ordinary SSD\n> - kernel version\n>\n> Does this mean that the default setting for eic on Linux is just inadequate\n> for how the modern kernels behave? Or am I missing something else in the\n> tests?\n>\n> Regards,\n> Vitaliy\n\nI have analyzed this issue quite extensively in the past, and I can\nsay with high confidence that you're analysis on point 2 is most\nlikely wrong.\n\nNow, I don't have all the information to make that a categorical\nassertion, you might have a point, but I believe you're\nmisinterpreting the data.\n\nI mean, that the issue is indeed affected by the order of rows in the\ntable. Random heap access patterns result in sparse bitmap heap scans,\nwhereas less random heap access patterns result in denser bitmap heap\nscans. Dense scans have large portions of contiguous fetches, a\npattern that is quite adversely affected by the current prefetch\nmechanism in linux.\n\nThis analysis does point to the fact that I should probably revisit\nthis issue. There's a rather simple workaround for this, pg should\njust avoid issuing prefetch orders for sequential block patterns,\nsince those are already much better handled by the kernel itself.\n\n", "msg_date": "Sun, 4 Feb 2018 23:27:25 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "> I mean, that the issue is indeed affected by the order of rows in the\n> table. 
Random heap access patterns result in sparse bitmap heap scans,\n> whereas less random heap access patterns result in denser bitmap heap\n> scans. Dense scans have large portions of contiguous fetches, a\n> pattern that is quite adversely affected by the current prefetch\n> mechanism in linux.\n>\n\nThanks for your input.\n\nHow can I test a sparse bitmap scan? Can you think of any SQL commands \nwhich would generate data and run such scans?\n\nWould a bitmap scan over expression index ((aid%1000)=0) do a sparse \nbitmap scan?\n\nRegards,\nVitaliy\n\n", "msg_date": "Mon, 5 Feb 2018 13:26:43 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Mon, Feb 5, 2018 at 8:26 AM, Vitaliy Garnashevich\n<[email protected]> wrote:\n>> I mean, that the issue is indeed affected by the order of rows in the\n>> table. Random heap access patterns result in sparse bitmap heap scans,\n>> whereas less random heap access patterns result in denser bitmap heap\n>> scans. Dense scans have large portions of contiguous fetches, a\n>> pattern that is quite adversely affected by the current prefetch\n>> mechanism in linux.\n>>\n>\n> Thanks for your input.\n>\n> How can I test a sparse bitmap scan? Can you think of any SQL commands which\n> would generate data and run such scans?\n>\n> Would a bitmap scan over expression index ((aid%1000)=0) do a sparse bitmap\n> scan?\n\nIf you have a minimally correlated index (ie: totally random order),\nand suppose you have N tuples per page, you need to select less (much\nless) than 1/Nth of the table.\n\n", "msg_date": "Mon, 5 Feb 2018 17:14:53 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Wed, Jan 31, 2018 at 04:34:18PM -0300, Claudio Freire wrote:\n> In my experience playing with prefetch, e_i_c>0 interferes with kernel\n> read-ahead. What you've got there would make sense if what postgres\n> thinks will be random I/O ends up being sequential. With e_i_c=0, the\n> kernel will optimize the hell out of it, because it's a predictable\n> pattern. But with e_i_c=1, the kernel's optimization gets disabled but\n> postgres isn't reading much ahead, so you get the worst possible case.\n\nOn Thu, Feb 01, 2018 at 03:39:07PM -0300, Claudio Freire wrote:\n> Problem is, if you're scanning a highly correlated index, the\n> mechanism is counterproductive. I had worked on some POC patches for\n> correcting that, I guess I could work something out, but it's\n> low-priority for me. Especially since it's actually a kernel \"bug\" (or\n> shortcoming), that could be fixed in the kernel rather than worked\n> around by postgres.\n\nOn Sun, Feb 04, 2018 at 11:27:25PM -0300, Claudio Freire wrote:\n> ... Dense scans have large portions of contiguous fetches, a pattern that is\n> quite adversely affected by the current prefetch mechanism in linux.\n> \n> ... There's a rather simple workaround for this, pg should just avoid issuing\n> prefetch orders for sequential block patterns, since those are already much\n> better handled by the kernel itself.\n\nThinking out loud.. 
if prefetch were a separate process, I imagine this\nwouldn't be an issue ; is it possible the parallel worker code could take on\nresponsibility of prefetching (?)\n\nJustin\n\n", "msg_date": "Tue, 6 Feb 2018 23:42:27 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": ">> Would a bitmap scan over expression index ((aid%1000)=0) do a sparse bitmap\n>> scan?\n> If you have a minimally correlated index (ie: totally random order),\n> and suppose you have N tuples per page, you need to select less (much\n> less) than 1/Nth of the table.\n>\n\nI've done a test with a sparse bitmap scan. The positive effect of \neffective_io_concurrency is visible in that case.\n\nIn the test, I'm creating a table with 100k rows, 10 tuples per page. \nThen I create an index on expression ((id%100)=0), and then query the \ntable using a bitmap scan over this index. Before each query, I also \nrestart postgresql service and clear OS caches, to make all reads happen \nfrom disk.\n\ncreate table test as select generate_series(1, 100000) id, repeat('x', \n750) val;\ncreate index sparse_idx on test (((id%100)=0));\n\nexplain (analyze, buffers) select * from test where ((id%100)=0) and val \n!= '';\n\neffective_io_concurrency=0 Execution time: 3258.220 ms\neffective_io_concurrency=1 Execution time: 3345.689 ms\neffective_io_concurrency=2 Execution time: 2516.558 ms\neffective_io_concurrency=4 Execution time: 1816.150 ms\neffective_io_concurrency=8 Execution time: 1083.018 ms\neffective_io_concurrency=16 Execution time: 2349.064 ms\neffective_io_concurrency=32 Execution time: 771.776 ms\neffective_io_concurrency=64 Execution time: 1536.146 ms\neffective_io_concurrency=128 Execution time: 560.471 ms\neffective_io_concurrency=256 Execution time: 404.113 ms\neffective_io_concurrency=512 Execution time: 318.271 ms\neffective_io_concurrency=1000 Execution time: 411.978 ms\n\neffective_io_concurrency=0 Execution time: 3655.124 ms\neffective_io_concurrency=1 Execution time: 3337.614 ms\neffective_io_concurrency=2 Execution time: 2914.609 ms\neffective_io_concurrency=4 Execution time: 2133.285 ms\neffective_io_concurrency=8 Execution time: 1326.740 ms\neffective_io_concurrency=16 Execution time: 1765.848 ms\neffective_io_concurrency=32 Execution time: 583.176 ms\neffective_io_concurrency=64 Execution time: 541.667 ms\neffective_io_concurrency=128 Execution time: 362.409 ms\neffective_io_concurrency=256 Execution time: 446.026 ms\neffective_io_concurrency=512 Execution time: 416.469 ms\neffective_io_concurrency=1000 Execution time: 301.295 ms\n\neffective_io_concurrency=0 Execution time: 4611.075 ms\neffective_io_concurrency=1 Execution time: 3583.286 ms\neffective_io_concurrency=2 Execution time: 2404.817 ms\neffective_io_concurrency=4 Execution time: 1602.766 ms\neffective_io_concurrency=8 Execution time: 1811.409 ms\neffective_io_concurrency=16 Execution time: 1688.752 ms\neffective_io_concurrency=32 Execution time: 613.454 ms\neffective_io_concurrency=64 Execution time: 686.325 ms\neffective_io_concurrency=128 Execution time: 425.590 ms\neffective_io_concurrency=256 Execution time: 1394.318 ms\neffective_io_concurrency=512 Execution time: 1579.458 ms\neffective_io_concurrency=1000 Execution time: 414.184 ms\n\nRegards,\nVitaliy", "msg_date": "Thu, 8 Feb 2018 18:05:00 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { 
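A minimal sketch, not taken from any of the posts above, of how to build the kind of minimally correlated table the last few messages describe; the table and index names are made up for illustration. With correlation near zero, a bitmap heap scan touches heap pages in the scattered ("sparse") pattern where prefetching has a chance to help:

-- Randomly ordered copy of the pgbench table (illustrative names, editor's assumption).
CREATE TABLE pgbench_accounts_random AS
  SELECT * FROM pgbench_accounts ORDER BY random();
CREATE INDEX pgbench_accounts_random_aid_idx ON pgbench_accounts_random (aid);
ANALYZE pgbench_accounts_random;

-- correlation close to 0 means index order and heap order are unrelated,
-- so a bitmap heap scan over this index will not degenerate into a
-- near-sequential access pattern
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'pgbench_accounts_random' AND attname = 'aid';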
"msg_contents": "Anyway, there are still some strange things happening when \neffective_io_concurrency is non-zero.\n\nI've found that the real reason for the poor Bitmap Scan performance was \nrelated not only with sparsity of the rows/pages to be rechecked, but \nalso with the value of starting ID from which the scan begins:\n\ncreate table test as select generate_series(1, 100000) id, repeat('x', \n90) val;\nalter table test add constraint test_pkey primary key (id);\n\nselect count(*) tup_per_page from test group by (ctid::text::point)[0] \norder by count(*) desc limit 5;\n  tup_per_page\n--------------\n            65\n            65\n            65\n            65\n            65\n(5 rows)\n\nselect * from test where id between X and 100000 and val != ''\n\n\neffective_io_concurrency=0; id between 0 and 100000; Execution time: \n524.671 ms\neffective_io_concurrency=1; id between 0 and 100000; Execution time: \n420.000 ms\neffective_io_concurrency=0; id between 0 and 100000; Execution time: \n441.813 ms\neffective_io_concurrency=1; id between 0 and 100000; Execution time: \n498.591 ms\neffective_io_concurrency=0; id between 0 and 100000; Execution time: \n662.838 ms\neffective_io_concurrency=1; id between 0 and 100000; Execution time: \n431.503 ms\n\neffective_io_concurrency=0; id between 10 and 100000; Execution time: \n1210.436 ms\neffective_io_concurrency=1; id between 10 and 100000; Execution time: \n1056.646 ms\neffective_io_concurrency=0; id between 10 and 100000; Execution time: \n578.102 ms\neffective_io_concurrency=1; id between 10 and 100000; Execution time: \n396.996 ms\neffective_io_concurrency=0; id between 10 and 100000; Execution time: \n598.842 ms\neffective_io_concurrency=1; id between 10 and 100000; Execution time: \n555.258 ms\n\neffective_io_concurrency=0; id between 50 and 100000; Execution time: \n4017.999 ms\neffective_io_concurrency=1; id between 50 and 100000; Execution time: \n383.694 ms\neffective_io_concurrency=0; id between 50 and 100000; Execution time: \n535.686 ms\neffective_io_concurrency=1; id between 50 and 100000; Execution time: \n570.221 ms\neffective_io_concurrency=0; id between 50 and 100000; Execution time: \n852.960 ms\neffective_io_concurrency=1; id between 50 and 100000; Execution time: \n656.097 ms\n\neffective_io_concurrency=0; id between 64 and 100000; Execution time: \n385.628 ms\neffective_io_concurrency=1; id between 64 and 100000; Execution time: \n712.261 ms\neffective_io_concurrency=0; id between 64 and 100000; Execution time: \n1610.618 ms\neffective_io_concurrency=1; id between 64 and 100000; Execution time: \n438.211 ms\neffective_io_concurrency=0; id between 64 and 100000; Execution time: \n393.341 ms\neffective_io_concurrency=1; id between 64 and 100000; Execution time: \n744.768 ms\n\neffective_io_concurrency=0; id between 65 and 100000; Execution time: \n846.759 ms\neffective_io_concurrency=1; id between 65 and 100000; Execution time: \n514.668 ms\neffective_io_concurrency=0; id between 65 and 100000; Execution time: \n536.640 ms\neffective_io_concurrency=1; id between 65 and 100000; Execution time: \n461.966 ms\neffective_io_concurrency=0; id between 65 and 100000; Execution time: \n1810.677 ms\neffective_io_concurrency=1; id between 65 and 100000; Execution time: \n545.359 ms\n\neffective_io_concurrency=0; id between 66 and 100000; Execution time: \n663.920 ms\neffective_io_concurrency=1; id between 66 and 100000; Execution time: \n5571.118 ms\neffective_io_concurrency=0; id between 66 and 100000; Execution time: \n683.056 
ms\neffective_io_concurrency=1; id between 66 and 100000; Execution time: \n5883.359 ms\neffective_io_concurrency=0; id between 66 and 100000; Execution time: \n472.809 ms\neffective_io_concurrency=1; id between 66 and 100000; Execution time: \n5461.794 ms\n\neffective_io_concurrency=0; id between 100 and 100000; Execution time: \n647.292 ms\neffective_io_concurrency=1; id between 100 and 100000; Execution time: \n7810.344 ms\neffective_io_concurrency=0; id between 100 and 100000; Execution time: \n773.750 ms\neffective_io_concurrency=1; id between 100 and 100000; Execution time: \n5637.014 ms\neffective_io_concurrency=0; id between 100 and 100000; Execution time: \n726.111 ms\neffective_io_concurrency=1; id between 100 and 100000; Execution time: \n7740.607 ms\n\neffective_io_concurrency=0; id between 200 and 100000; Execution time: \n549.281 ms\neffective_io_concurrency=1; id between 200 and 100000; Execution time: \n5032.522 ms\neffective_io_concurrency=0; id between 200 and 100000; Execution time: \n692.631 ms\neffective_io_concurrency=1; id between 200 and 100000; Execution time: \n5138.669 ms\neffective_io_concurrency=0; id between 200 and 100000; Execution time: \n793.342 ms\neffective_io_concurrency=1; id between 200 and 100000; Execution time: \n5375.822 ms\n\neffective_io_concurrency=0; id between 1000 and 100000; Execution time: \n596.754 ms\neffective_io_concurrency=1; id between 1000 and 100000; Execution time: \n5278.683 ms\neffective_io_concurrency=0; id between 1000 and 100000; Execution time: \n638.706 ms\neffective_io_concurrency=1; id between 1000 and 100000; Execution time: \n5404.002 ms\neffective_io_concurrency=0; id between 1000 and 100000; Execution time: \n730.667 ms\neffective_io_concurrency=1; id between 1000 and 100000; Execution time: \n5761.312 ms\n\neffective_io_concurrency=0; id between 2000 and 100000; Execution time: \n656.086 ms\neffective_io_concurrency=1; id between 2000 and 100000; Execution time: \n6156.003 ms\neffective_io_concurrency=0; id between 2000 and 100000; Execution time: \n768.288 ms\neffective_io_concurrency=1; id between 2000 and 100000; Execution time: \n4917.423 ms\neffective_io_concurrency=0; id between 2000 and 100000; Execution time: \n500.931 ms\neffective_io_concurrency=1; id between 2000 and 100000; Execution time: \n5659.255 ms\n\neffective_io_concurrency=0; id between 5000 and 100000; Execution time: \n755.440 ms\neffective_io_concurrency=1; id between 5000 and 100000; Execution time: \n5141.671 ms\neffective_io_concurrency=0; id between 5000 and 100000; Execution time: \n542.174 ms\neffective_io_concurrency=1; id between 5000 and 100000; Execution time: \n6074.953 ms\neffective_io_concurrency=0; id between 5000 and 100000; Execution time: \n570.615 ms\neffective_io_concurrency=1; id between 5000 and 100000; Execution time: \n6922.402 ms\n\neffective_io_concurrency=0; id between 10000 and 100000; Execution time: \n469.544 ms\neffective_io_concurrency=1; id between 10000 and 100000; Execution time: \n6083.361 ms\neffective_io_concurrency=0; id between 10000 and 100000; Execution time: \n706.078 ms\neffective_io_concurrency=1; id between 10000 and 100000; Execution time: \n4069.171 ms\neffective_io_concurrency=0; id between 10000 and 100000; Execution time: \n526.792 ms\neffective_io_concurrency=1; id between 10000 and 100000; Execution time: \n5289.984 ms\n\neffective_io_concurrency=0; id between 20000 and 100000; Execution time: \n435.503 ms\neffective_io_concurrency=1; id between 20000 and 100000; Execution time: \n5460.730 
ms\neffective_io_concurrency=0; id between 20000 and 100000; Execution time: \n454.323 ms\neffective_io_concurrency=1; id between 20000 and 100000; Execution time: \n4163.030 ms\neffective_io_concurrency=0; id between 20000 and 100000; Execution time: \n674.382 ms\neffective_io_concurrency=1; id between 20000 and 100000; Execution time: \n3703.045 ms\n\neffective_io_concurrency=0; id between 50000 and 100000; Execution time: \n226.094 ms\neffective_io_concurrency=1; id between 50000 and 100000; Execution time: \n2584.720 ms\neffective_io_concurrency=0; id between 50000 and 100000; Execution time: \n1431.037 ms\neffective_io_concurrency=1; id between 50000 and 100000; Execution time: \n2651.834 ms\neffective_io_concurrency=0; id between 50000 and 100000; Execution time: \n345.194 ms\neffective_io_concurrency=1; id between 50000 and 100000; Execution time: \n2328.844 ms\n\neffective_io_concurrency=0; id between 75000 and 100000; Execution time: \n120.121 ms\neffective_io_concurrency=1; id between 75000 and 100000; Execution time: \n2125.927 ms\neffective_io_concurrency=0; id between 75000 and 100000; Execution time: \n115.865 ms\neffective_io_concurrency=1; id between 75000 and 100000; Execution time: \n1616.534 ms\neffective_io_concurrency=0; id between 75000 and 100000; Execution time: \n138.005 ms\neffective_io_concurrency=1; id between 75000 and 100000; Execution time: \n1651.880 ms\n\neffective_io_concurrency=0; id between 90000 and 100000; Execution time: \n66.322 ms\neffective_io_concurrency=1; id between 90000 and 100000; Execution time: \n443.317 ms\neffective_io_concurrency=0; id between 90000 and 100000; Execution time: \n53.138 ms\neffective_io_concurrency=1; id between 90000 and 100000; Execution time: \n566.945 ms\neffective_io_concurrency=0; id between 90000 and 100000; Execution time: \n57.441 ms\neffective_io_concurrency=1; id between 90000 and 100000; Execution time: \n525.749 ms\n\nFor some reason, with dense bitmap scans, when Bitmap Heap Scan / \nRecheck starts not from the first page of the table, the \neffective_io_concurrency=0 consistently and significantly outperforms \neffective_io_concurrency=1.\n\nRegards,\nVitaliy", "msg_date": "Thu, 8 Feb 2018 18:40:12 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "On Thu, Feb 8, 2018 at 11:40 AM, Vitaliy Garnashevich <\[email protected]> wrote:\n\n> Anyway, there are still some strange things happening when\n> effective_io_concurrency is non-zero.\n>\n> ...\n>\n\n\n> Vitaliy\n>\n>\nI was researching whether I could optimize a concatenated lvm2 volume when\nI have disks of different speeds (concatenated - not striped - and I think\nI can if I concatenate them in the right order - still testing on that\nfront), when I came across this article from a few years ago:\nhttp://www.techforce.com.br/content/lvm-raid-xfs-and-ext3-file-systems-tuning-small-files-massive-heavy-load-concurrent-parallel\n\nIn the article he talks about the performance of parallel io on different\nfile systems.\n\nSince I am already running XFS that led me to this tunable:\nhttp://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html\n\nWhich brought me back to this discussion about effective_io_concurrency\nfrom a couple of weeks ago. I noticed that the recent round of tests being\ndiscussed never mentioned the file system used. Was it XFS? 
Does changing\nthe agcount change the behaviour?\n\nOn Thu, Feb 8, 2018 at 11:40 AM, Vitaliy Garnashevich <[email protected]> wrote:Anyway, there are still some strange things happening when effective_io_concurrency is non-zero.\n... \nVitaliy\n\nI was researching whether I could optimize a concatenated lvm2 volume when I have disks of different speeds (concatenated - not striped - and I think I can if I concatenate them in the right order - still testing on that front), when I came across this article from a few years ago:http://www.techforce.com.br/content/lvm-raid-xfs-and-ext3-file-systems-tuning-small-files-massive-heavy-load-concurrent-parallelIn the article he talks about the performance of parallel io on different file systems.Since I am already running XFS that led me to this tunable:http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.htmlWhich brought me back to this discussion about effective_io_concurrency from a couple of weeks ago.  I noticed that the recent round of tests being discussed never mentioned the file system used.  Was it XFS?  Does changing the agcount change the behaviour?", "msg_date": "Fri, 23 Feb 2018 10:23:03 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" }, { "msg_contents": "> I noticed that the recent round of tests being discussed never \n> mentioned the file system used.  Was it XFS?  Does changing the \n> agcount change the behaviour?\n\nIt was ext4.\n\nRegards,\nVitaliy\n\n\n", "msg_date": "Fri, 23 Feb 2018 17:35:06 +0200", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: effective_io_concurrency on EBS/gp2" } ]
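For reference, a short sketch (not quoted from the thread) of the places where effective_io_concurrency can be changed; the value 32 is only a placeholder, since the results above suggest the best setting depends on how sparse the bitmap heap scans are and on the volume type:

-- per session, as done in the test scripts above
SET effective_io_concurrency = 32;

-- per tablespace (supported since PostgreSQL 9.6), e.g. for the EBS-backed tablespace
ALTER TABLESPACE test SET (effective_io_concurrency = 32);

-- instance-wide default, followed by a configuration reload
ALTER SYSTEM SET effective_io_concurrency = 32;
SELECT pg_reload_conf();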
[ { "msg_contents": "Hello!\n\nI brought this issue up about two years ago but without getting any\nreal explanation or solution. The problem is that PostgreSQL does\nreally bad plans using nested loops. With \"enable_nestloop = 0\" the\nsame query is run about 20 times faster.\n\nThe sugested solution I got back then was to upgrade to the latest\nversion of PostgreSQL (then 9.5). It did not help. The solution we\nfinally applied was a horribly ugly patch to the perl-module\nSearchBuilder that recognized queries that would perform badly and put\nthem inside transaction blocks with \"SET LOCAL enable_nestloop = 0\".\n\nLast week I upgraded PostgreSQL for this application (Request Tracker)\nto version 10.1 and just for fun I decied to test to remove the patch\nto see if the problem still persisted. For two cases it did not. The\nplanner handled them just fine. For one case however, the same problem\nstill remains.\n\nBad plan: https://explain.depesz.com/s/avtZ\nGood plan: https://explain.depesz.com/s/SJSt\n\nAny suggestions on how to make the planner make better decisions for\nthis query?\n\n\n / Eskil\n\n\n", "msg_date": "Thu, 01 Feb 2018 11:42:07 +0100", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "bad plan using nested loops" }, { "msg_contents": "Johan Fredriksson <[email protected]> writes:\n> Bad plan: https://explain.depesz.com/s/avtZ\n> Good plan: https://explain.depesz.com/s/SJSt\n> Any suggestions on how to make the planner make better decisions for\n> this query?\n\nCore of the problem looks to be the misestimation here:\n\n\tIndex Only Scan using shredder_cgm1 on public.cachedgroupmembers cachedgroupmembers_4 (cost=0.43..2.33 rows=79 width=8) (actual time=0.020..0.903 rows=1492 loops=804)\n\t Output: cachedgroupmembers_4.memberid, cachedgroupmembers_4.groupid, cachedgroupmembers_4.disabled\n\t Index Cond: ((cachedgroupmembers_4.memberid = principals_1.id) AND (cachedgroupmembers_4.disabled = 0))\n\t Heap Fetches: 5018\n\nProbably, memberid and disabled are correlated but the planner doesn't\nknow that, so it thinks the index condition is way more selective than it\nactually is. In PG 10, you could very possibly fix that by installing\nextended statistics on that pair of columns. See\n\nhttps://www.postgresql.org/docs/current/static/planner-stats.html#PLANNER-STATS-EXTENDED\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 01 Feb 2018 10:00:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad plan using nested loops" }, { "msg_contents": "\n> Johan Fredriksson <[email protected]> writes:\n> > Bad plan: https://explain.depesz.com/s/avtZ\n> > Good plan: https://explain.depesz.com/s/SJSt\n> > Any suggestions on how to make the planner make better decisions for\n> > this query?\n> \n> Core of the problem looks to be the misestimation here:\n> \n> Index Only Scan using shredder_cgm1 on public.cachedgroupmembers cachedgroupmembers_4\n> (cost=0.43..2.33 rows=79 width=8) (actual time=0.020..0.903 rows=1492 loops=804)\n> Output: cachedgroupmembers_4.memberid, cachedgroupmembers_4.groupid,\n> cachedgroupmembers_4.disabled\n> Index Cond: ((cachedgroupmembers_4.memberid = principals_1.id) AND\n> (cachedgroupmembers_4.disabled = 0))\n> Heap Fetches: 5018\n>\n> Probably, memberid and disabled are correlated but the planner doesn't\n> know that, so it thinks the index condition is way more selective than it\n> actually is. 
In PG 10, you could very possibly fix that by installing\n> extended statistics on that pair of columns. See\n> \n> https://www.postgresql.org/docs/current/static/planner-stats.html#PLANNER-STATS-EXTENDED\n\nI'm not sure what you mean by correlated, but there are only a handful (164 when I check it) disabled groupmembers out of total 7.5 million.\nI'll give CREATE STATISTICS on those columns a shot and see if it gets any better.\n\n / Eskil\n\n", "msg_date": "Thu, 1 Feb 2018 20:34:24 +0000", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "SV: bad plan using nested loops" }, { "msg_contents": "tor 2018-02-01 klockan 20:34 +0000 skrev Johan Fredriksson:\n> > Johan Fredriksson <[email protected]> writes:\n> > > Bad plan: https://explain.depesz.com/s/avtZ\n> > > Good plan: https://explain.depesz.com/s/SJSt\n> > > Any suggestions on how to make the planner make better decisions\n> > > for\n> > > this query?\n> > \n> > Core of the problem looks to be the misestimation here:\n> > \n> >        Index Only Scan using shredder_cgm1 on\n> > public.cachedgroupmembers cachedgroupmembers_4\n> > (cost=0.43..2.33 rows=79 width=8) (actual time=0.020..0.903\n> > rows=1492 loops=804)\n> >          Output: cachedgroupmembers_4.memberid,\n> > cachedgroupmembers_4.groupid,\n> > cachedgroupmembers_4.disabled\n> >          Index Cond: ((cachedgroupmembers_4.memberid =\n> > principals_1.id) AND\n> > (cachedgroupmembers_4.disabled = 0))\n> >          Heap Fetches: 5018\n> > \n> > Probably, memberid and disabled are correlated but the planner\n> > doesn't\n> > know that, so it thinks the index condition is way more selective\n> > than it\n> > actually is.  In PG 10, you could very possibly fix that by\n> > installing\n> > extended statistics on that pair of columns.  See\n> > \n> > https://www.postgresql.org/docs/current/static/planner-stats.html#P\n> > LANNER-STATS-EXTENDED\n> \n> I'm not sure what you mean by correlated, but there are only a\n> handful (164 when I check it) disabled groupmembers out of total 7.5\n> million.\n> I'll give CREATE STATISTICS on those columns a shot and see if it\n> gets any better.\n\nIt looks like you are right, Tom. There actually exists full\ncorrelation between memberid, groupid and disabled.\n\nrt4=# SELECT stxname, stxkeys, stxdependencies FROM pg_statistic_ext;\n stxname  | stxkeys |   stxdependencies    \n-----------+---------+----------------------\n cgm_stat2 | 2 6     | {\"2\n=> 6\": 1.000000}\n cgm_stat1 | 3 6     | {\"3 => 6\": 1.000000}\n(2 rows)\n\nHowever, this does not help the planner. 
It still picks the bad plan.\n\n\n / Eskil\n\n\n", "msg_date": "Fri, 02 Feb 2018 10:02:07 +0100", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SV: bad plan using nested loops" }, { "msg_contents": "\n\nOn 02/02/2018 10:02 AM, Johan Fredriksson wrote:\n> tor 2018-02-01 klockan 20:34 +0000 skrev Johan Fredriksson:\n>>> Johan Fredriksson <[email protected]> writes:\n>>>> Bad plan: https://explain.depesz.com/s/avtZ\n>>>> Good plan: https://explain.depesz.com/s/SJSt\n>>>> Any suggestions on how to make the planner make better decisions\n>>>> for\n>>>> this query?\n>>>\n>>> Core of the problem looks to be the misestimation here:\n>>>\n>>>        Index Only Scan using shredder_cgm1 on\n>>> public.cachedgroupmembers cachedgroupmembers_4\n>>> (cost=0.43..2.33 rows=79 width=8) (actual time=0.020..0.903\n>>> rows=1492 loops=804)\n>>>          Output: cachedgroupmembers_4.memberid,\n>>> cachedgroupmembers_4.groupid,\n>>> cachedgroupmembers_4.disabled\n>>>          Index Cond: ((cachedgroupmembers_4.memberid =\n>>> principals_1.id) AND\n>>> (cachedgroupmembers_4.disabled = 0))\n>>>          Heap Fetches: 5018\n>>>\n>>> Probably, memberid and disabled are correlated but the planner\n>>> doesn't\n>>> know that, so it thinks the index condition is way more selective\n>>> than it\n>>> actually is.  In PG 10, you could very possibly fix that by\n>>> installing\n>>> extended statistics on that pair of columns.  See\n>>>\n>>> https://www.postgresql.org/docs/current/static/planner-stats.html#P\n>>> LANNER-STATS-EXTENDED\n>>\n>> I'm not sure what you mean by correlated, but there are only a\n>> handful (164 when I check it) disabled groupmembers out of total 7.5\n>> million.\n>> I'll give CREATE STATISTICS on those columns a shot and see if it\n>> gets any better.\n> \n> It looks like you are right, Tom. There actually exists full\n> correlation between memberid, groupid and disabled.\n> \n> rt4=# SELECT stxname, stxkeys, stxdependencies FROM pg_statistic_ext;\n>  stxname  | stxkeys |   stxdependencies    \n> -----------+---------+----------------------\n>  cgm_stat2 | 2 6     | {\"2\n> => 6\": 1.000000}\n>  cgm_stat1 | 3 6     | {\"3 => 6\": 1.000000}\n> (2 rows)\n> \n> However, this does not help the planner. It still picks the bad plan.\n> \n\nYeah :-( Unfortunately, we're not using the extended statistics to\nimprove join cardinality estimates yet. PostgreSQL 10 can only use them\nto improve estimates on individual tables, and judging by the progress\non already submitted improvements, it doesn't seem very likely to change\nin PostgreSQL 11.\n\nregards\nTomas\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Mon, 5 Feb 2018 20:15:02 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SV: bad plan using nested loops" } ]
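To make the two remedies discussed in this thread concrete, the extended-statistics object and the transaction-local planner override look roughly as follows. The statistics object name is arbitrary; memberid and disabled are the correlated columns from the plan quoted above, and, as Tomas notes, PostgreSQL 10 does not use dependency statistics for join estimates, so the SET LOCAL workaround remains the practical fix for this particular query.

    -- functional-dependency statistics on the correlated columns (PostgreSQL 10+)
    CREATE STATISTICS cgm_memberid_disabled (dependencies)
        ON memberid, disabled FROM cachedgroupmembers;
    ANALYZE cachedgroupmembers;

    -- per-query workaround: disable nested loops only for the offending statement
    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- run the slow SearchBuilder-generated query here
    COMMIT;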
[ { "msg_contents": "Hi,\n\nI am using Postgres version 9.4.4 on a Mac machine. I have 2 queries that\ndiffer only in the order by clause. One of it has 'nulls last' and the\nother one does not have it. The performance difference between the two is\nconsiderable.\n\nThe slower of the two queries is\n\nSELECT wos.notificationstatus,\n wos.unrepliedcount,\n wos.shownotestotech,\n wos.ownerid,\n wos.isfcr,\n aau.user_id,\n wo.workorderid AS \"WOID\",\n wo.is_catalog_template AS \"TemplateType\",\n wo.title AS \"Title\",\n wo.is_catalog_template,\n aau.first_name AS \"Requester\",\n cd.categoryname AS \"Category\",\n ti.first_name AS \"Owner\",\n wo.duebytime AS \"DueBy\",\n wo.fr_duetime,\n wo.completedtime AS \"CompletedTime\",\n wo.respondedtime AS \"RespondedTime\",\n wo.resolvedtime AS \"ResolvedTime\",\n qd.queuename AS \"Group\",\n std.statusname AS \"Status\",\n wo.createdtime AS \"CreatedDate\",\n wos.isread,\n wos.hasattachment,\n wos.appr_statusid,\n wos.priorityid,\n wo.templateid AS \"TemplateId\",\n pd.priorityid,\n pd.priorityname AS \"Priority\",\n pd.prioritycolor AS \"PriorityColor\",\n wos.isoverdue,\n wos.is_fr_overdue,\n wos.linkedworkorderid,\n wos.editing_status,\n wos.editorid,\n wos.linkedworkorderid,\n wo.isparent,\n sduser.isvipuser,\n sduser_onbehalfof.isvipuser AS \"ONBEHALFOFVIP\",\n wo.isparent,\n wos.statusid,\n sdorganization.name AS \"Site\",\n wo.workorderid AS \"RequestID\"\nFROM workorder wo\nleft join workorder_fields wof\nON wo.workorderid=wof.workorderid\nleft join servicecatalog_fields scf\nON wo.workorderid=scf.workorderid\nleft join wotoprojects wtp\nON wo.workorderid=wtp.workorderid\nleft join sitedefinition\nON wo.siteid=sitedefinition.siteid\nleft join sdorganization\nON sitedefinition.siteid=sdorganization.org_id\ninner join workorderstates wos\nON wo.workorderid=wos.workorderid\nleft join categorydefinition cd\nON wos.categoryid=cd.categoryid\nleft join aaauser ti\nON wos.ownerid=ti.user_id\nleft join aaauser aau\nON wo.requesterid=aau.user_id\nleft join prioritydefinition pd\nON wos.priorityid=pd.priorityid\nleft join statusdefinition std\nON wos.statusid=std.statusid\nleft join workorder_queue wo_queue\nON wo.workorderid=wo_queue.workorderid\nleft join queuedefinition qd\nON wo_queue.queueid=qd.queueid\nleft join departmentdefinition dpt\nON wo.deptid=dpt.deptid\nleft join leveldefinition lvd\nON wos.levelid=lvd.levelid\nleft join modedefinition mdd\nON wo.modeid=mdd.modeid\nleft join urgencydefinition urgdef\nON wos.urgencyid=urgdef.urgencyid\nleft join impactdefinition impdef\nON wos.impactid=impdef.impactid\nleft join requesttypedefinition rtdef\nON wos.requesttypeid=rtdef.requesttypeid\nleft join subcategorydefinition scd\nON wos.subcategoryid=scd.subcategoryid\nleft join itemdefinition icd\nON wos.itemid=icd.itemid\nleft join servicedefinition serdef\nON wo.serviceid=serdef.serviceid\nleft join aaauser cbau\nON wo.createdbyid=cbau.user_id\nleft join aaauser oboaau\nON wo.oboid=oboaau.user_id\nleft join sduser\nON wo.requesterid=sduser.userid\nleft join sduser sduser_onbehalfof\nON wo.oboid=sduser_onbehalfof.userid\nleft join workorder_fields\nON wo.workorderid=workorder_fields.workorderid\nWHERE ((\n wos.statusid = 1)\n AND (\n wo.isparent = TRUE))\nORDER BY 7 DESC nulls last limit 25\n\n\n\nOn removing 'nulls last' from the order by clause the query becomes very\nfast. 
I have attached the query plan for both the queries.\n\n From the plan it looks like the second query is able to efficiently use the\nworkorder_pk index ( The node 'Index Scan Backward using workorder_pk on\nworkorder' returns 25 rows) whereas the first query is not able to use the\nindex efficiently (more than 300k rows are returned from the same node).\n\nThe column workorderid is a PK column. The query optimizer should ideally\nknow that there is no nulls in this column and in effect there is no\ndifference between the two queries.\n\nI tried the same in Postgres 10 and the slower query performs much better\ndue to parallel sequential scans but still it is less efficient than the\nquery without 'nulls last'.\n\nI thought it would be best to raise this with the Postgres team.\n\nRegards,\nNanda", "msg_date": "Thu, 1 Feb 2018 20:00:29 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" }, { "msg_contents": "On Thu, 2018-02-01 at 20:00 +0530, Nandakumar M wrote:\n> Hi,\n> \n> I am using Postgres version 9.4.4 on a Mac machine.\n> I have 2 queries that differ only in the order by clause.\n> One of it has 'nulls last' and the other one does not have it.\n> The performance difference between the two is considerable.\n> \n> The slower of the two queries is\n> \n> SELECT [...]\n> FROM workorder wo\n> left join workorder_fields wof\n> ON wo.workorderid=wof.workorderid\n> left join servicecatalog_fields scf\n> ON wo.workorderid=scf.workorderid\n[...]\n> ORDER BY 7 DESC nulls last limit 25\n> \n> \n> \n> On removing 'nulls last' from the order by clause the query becomes very fast.\n> I have attached the query plan for both the queries.\n\nIn the above case, the optimizer does not know that it will get the rows\nin the correct order: indexes are sorted ASC NULLS LAST by default,\nso a backwards index scan will produce the results NULLS FIRST,\nwhich is the default for ORDER BY ... DESC.\n\nIf you want the nulls last, PostgreSQL has to retrieve *all* the rows and sort\nthem rather than using the first 25 results it gets by scanning then indexes.\n\nTo have the above query perform fast, add additional indexes with either\nASC NULLS FIRST or DESC NULLS LAST for all used keys.\n\nYours,\nLaurenz Albe\n\n", "msg_date": "Fri, 02 Feb 2018 10:36:40 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order\n by nulls last' clause is used" }, { "msg_contents": "Hi,\n\nOn 2 Feb 2018 15:06, \"Laurenz Albe\" <[email protected]> wrote:\n\n\n>In the above case, the optimizer does >not know that it will get the rows\n>in the correct order: indexes are >sorted ASC NULLS LAST by default,\n>so a backwards index scan will >produce the results NULLS FIRST,\n>which is the default for ORDER BY ... >DESC.\n\n\nThe order by column has a not null constraint on it and so nulls last or\nfirst shouldn't make any difference.\n\n\n>If you want the nulls last, PostgreSQL >has to retrieve *all* the rows and\nsort\n>them rather than using the first 25 >results it gets by scanning then\n>indexes.\n\n>To have the above query perform >fast, add additional indexes with either\n>ASC NULLS FIRST or DESC NULLS >LAST for all used keys.\n\n\nFor now this is exactly what I have done. 
But it is in effect a duplicate\nindex on a PK column and I would be happy not to create it in the first\nplace.\n\nRegards\nNanda\n\nHi,On 2 Feb 2018 15:06, \"Laurenz Albe\" <[email protected]> wrote:>In the above case, the optimizer does >not know that it will get the rows>in the correct order: indexes are >sorted ASC NULLS LAST by default,>so a backwards index scan will >produce the results NULLS FIRST,>which is the default for ORDER BY ... >DESC.The order by column has a not null constraint on it and so nulls last or first shouldn't make any difference.\n>If you want the nulls last, PostgreSQL >has to retrieve *all* the rows and sort>them rather than using the first 25 >results it gets by scanning then >indexes.\n>To have the above query perform >fast, add additional indexes with either>ASC NULLS FIRST or DESC NULLS >LAST for all used keys.For now this is exactly what I have done. But it is in effect a duplicate index on a PK column and I would be happy not to create it in the first place.RegardsNanda", "msg_date": "Fri, 2 Feb 2018 19:34:30 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" }, { "msg_contents": "Nandakumar M <[email protected]> writes:\n> The order by column has a not null constraint on it and so nulls last or\n> first shouldn't make any difference.\n\nThe planner does not consider this and it doesn't really seem like\nsomething worth expending cycles on. If you know that there won't be\nnulls in the column, why are you insisting on specifying a nondefault\nvalue of NULLS FIRST/LAST in the query?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 02 Feb 2018 10:00:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 2, 2018 at 8:30 PM, Tom Lane <[email protected]> wrote:\n>\n> The planner does not consider this and it doesn't really seem like\n> something worth expending cycles on. 
If you know that there won't be\n> nulls in the column, why are you insisting on specifying a nondefault\n> value of NULLS FIRST/LAST in the query?\n\nThe query is generated by a framework that adds 'nulls last' to all\norder by clause.\n\nThis is done apparently to provide common behaviour in our application\nirrespective of the database that is used.\nSQL server treats nulls as lesser than non null values which is\nopposite to what Postgres does.\n\nFor any indexes that we create manually, we can do a\n\n--> create index on table_name(column_name nulls first);\n\nBut, for the PK column we are not in control of the index that is created.\n\nRegards,\nNanda\n\n", "msg_date": "Fri, 2 Feb 2018 21:19:50 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" }, { "msg_contents": "On Fri, Feb 2, 2018 at 8:49 AM, Nandakumar M <[email protected]> wrote:\n\n> But, for the PK column we are not in control of the index that is created.\n>\n\n​You probably can (I assume the nulls aspect of the index doesn't prevent\nPK usage), but you must add the PK to the table after creating the index\nand not let the system auto-generate the index for you.​\n\nhttps://www.postgresql.org/docs/10/static/sql-altertable.html\n\n​ALTER TABLE name ADD ​PRIMARY KEY USING INDEX index_name;\n\nDavid J.\n\nOn Fri, Feb 2, 2018 at 8:49 AM, Nandakumar M <[email protected]> wrote:But, for the PK column we are not in control of the index that is created.​You probably can (I assume the nulls aspect of the index doesn't prevent PK usage), but you must add the PK to the table after creating the index and not let the system auto-generate the index for you.​https://www.postgresql.org/docs/10/static/sql-altertable.html​ALTER TABLE name ADD ​PRIMARY KEY USING INDEX index_name;David J.", "msg_date": "Fri, 2 Feb 2018 08:58:42 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 2, 2018 at 9:28 PM, David G. Johnston\n<[email protected]> wrote:\n\n> You probably can (I assume the nulls aspect of the index doesn't prevent PK\n> usage), but you must add the PK to the table after creating the index and\n> not let the system auto-generate the index for you.\n>\n> https://www.postgresql.org/docs/10/static/sql-altertable.html\n>\n> ALTER TABLE name ADD PRIMARY KEY USING INDEX index_name;\n>\n\nI missed to notice this in the docs. Thank you David for pointing it out.\n\nRegards,\nNanda\n\n", "msg_date": "Fri, 2 Feb 2018 22:31:27 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query optimiser is not using 'not null' constraint when 'order by\n nulls last' clause is used" } ]
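For reference, the workaround actually applied here, a second index kept alongside the primary key whose sort order matches ORDER BY ... DESC NULLS LAST, can be written as below. The index names are arbitrary; workorder and workorderid are the table and column from the query above. An ASC NULLS FIRST index is equivalent, because scanning it backwards yields DESC NULLS LAST order.

    CREATE INDEX workorder_workorderid_desc_nl
        ON workorder (workorderid DESC NULLS LAST);

    -- equivalent alternative, as suggested earlier in the thread
    CREATE INDEX workorder_workorderid_asc_nf
        ON workorder (workorderid ASC NULLS FIRST);

With either index in place the planner can satisfy ORDER BY workorderid DESC NULLS LAST LIMIT 25 with an index scan instead of fetching and sorting the whole result set.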
[ { "msg_contents": "Hi,\nI configured range partitions on a date column of my main\ntable(full_table). Each partition represents a day in the month. Every day\npartition has a list parition of 4 tables on a text column.\n\nfull table\n table_01_11_2017 -->\n\n table_02_11_2017\n .....\n\nHi,I configured range partitions on a date column of my main table(full_table). Each partition represents a day in the month. Every day partition has a list parition of 4 tables on a text column.full table             table_01_11_2017  -->            table_02_11_2017               .....", "msg_date": "Sun, 4 Feb 2018 11:23:01 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql 10.1 scanning all partitions instead of 1" }, { "msg_contents": "Sorry it didnt send the whole mail :\nHi,\nI configured range partitions on a date column of my main table(log_full).\nEach partition represents a day in the month. Every day partition has a\nlist parition of 4 tables on a text column.\n\nlog_full\n log_full_01_11_2017 -->\n log_full _01_11_2017_x1\n log_full _01_11_2017_x2\n log_full _01_11_2017_x3\n log_full _01_11_2017_x4\n log_full_02_11_2017\n log_full _02_11_2017_x1\n log_full _02_11_2017_x1\n log_full _02_11_2017_x1\n log_full _02_11_2017_x1\n\nand so on....\n\n\nThe date column consist of date in the next format : YYYY-MM-DD HH:24:SS\nfor example : 2017-11-01 00:01:40\n\nI wanted to check the plan that I'm getting for a query that is using the\ndate column and it seems that the planner choose to do seq scans on all\ntables.\n\n-Each partition consist from 15M rows.\nI have about 120 partitions.\n\nThe query :\nexplain select count(*) from log_full where end_date between\nto_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n\nThe output is too long but it do full scans on all paritions...\nany idea what can be the problem? Is it connected to the date format ?\n\nThanks , Mariel.\n\n\n\n2018-02-04 11:23 GMT+02:00 Mariel Cherkassky <[email protected]>:\n\n> Hi,\n> I configured range partitions on a date column of my main\n> table(full_table). Each partition represents a day in the month. Every day\n> partition has a list parition of 4 tables on a text column.\n>\n> full table\n> table_01_11_2017 -->\n>\n> table_02_11_2017\n> .....\n>\n\nSorry it didnt send the whole mail : \nHi,I configured range partitions on a date column of my main table(log_full). Each partition represents a day in the month. Every day partition has a list parition of 4 tables on a text column.log_full          log_full_01_11_2017  -->                                         \n\nlog_full\n\n_01_11_2017_x1\n                                          \n\nlog_full\n\n_01_11_2017_x2\n                                         \n\nlog_full\n\n_01_11_2017_x3\n\n                                         \n\nlog_full\n\n_01_11_2017_x4\n           \n\nlog_full_02_11_2017\n                                         \n\nlog_full\n\n_02_11_2017_x1\n\n                                         \n\nlog_full\n\n_02_11_2017_x1\n\n                                         \n\nlog_full\n\n_02_11_2017_x1\n\n                                         \n\nlog_full\n\n_02_11_2017_x1\nand so on....      
The date column consist of date in the next format : YYYY-MM-DD HH:24:SS for example : 2017-11-01 00:01:40I wanted to check the plan that I'm getting for a query that is using the date column and it seems that the planner choose to do seq scans on all tables.-Each partition consist from 15M rows.I have about 120 partitions.The query : explain select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');The output is too long but it do full scans on all paritions...any idea what can be the problem? Is it connected to the date format ?Thanks , Mariel.\n2018-02-04 11:23 GMT+02:00 Mariel Cherkassky <[email protected]>:Hi,I configured range partitions on a date column of my main table(full_table). Each partition represents a day in the month. Every day partition has a list parition of 4 tables on a text column.full table             table_01_11_2017  -->            table_02_11_2017               .....", "msg_date": "Sun, 4 Feb 2018 11:28:47 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 scanning all partitions instead of 1" } ]
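For context, the two-level layout being described, daily range partitions on end_date with list sub-partitions on start_stop, can be declared in PostgreSQL 10 along these lines. This is an illustrative sketch based on the DDL the poster shows in the follow-up thread below, not the exact production definitions.

    CREATE TABLE log_full (
        a          text,
        b          text,
        c          text,
        start_stop text,
        end_date   date
    ) PARTITION BY RANGE (end_date);

    -- one partition per day, itself partitioned by list
    CREATE TABLE log_full_01_11_2017 PARTITION OF log_full
        FOR VALUES FROM ('2017-11-01') TO ('2017-11-02')
        PARTITION BY LIST (start_stop);

    CREATE TABLE log_full_01_11_2017_x1 PARTITION OF log_full_01_11_2017
        FOR VALUES IN ('Start', 'Stop');

As the follow-up thread below shows, the plans prune down to the relevant partitions once the to_date() calls in the WHERE clause are replaced with plain date literals, for example end_date BETWEEN '2017-12-03' AND '2017-12-03'.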
[ { "msg_contents": "Hi,\nI configured range partitions on a date column of my main table(log_full).\nEach partition represents a day in the month. Every day partition has a\nlist parition of 4 tables on a text column.\n\nlog_full\n log_full_01_11_2017 -->\n log_full _01_11_2017_x1\n log_full _01_11_2017_x2\n log_full _01_11_2017_x3\n log_full _01_11_2017_x4\n log_full_02_11_2017\n log_full _02_11_2017_x1\n log_full _02_11_2017_x2\n log_full _02_11_2017_x3\n log_full _02_11_2017_x4\n\nand so on....\n\n\nThe date column consist of date in the next format : YYYY-MM-DD HH:24:SS\nfor example : 2017-11-01 00:01:40\n\nI wanted to check the plan that I'm getting for a query that is using the\ndate column and it seems that the planner choose to do seq scans on all\ntables.\n\n-Each partition consist from 15M rows.\nI have about 120 partitions.\n\nThe query :\nexplain select count(*) from log_full where end_date between\nto_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n\nThe output is too long but it do full scans on all paritions...\nany idea what can be the problem? Is it connected to the date format ?\n\nThanks , Mariel.\n\n\nHi,I configured range partitions on a date column of my main table(log_full). Each partition represents a day in the month. Every day partition has a list parition of 4 tables on a text column.log_full          log_full_01_11_2017  -->                                          log_full _01_11_2017_x1                                          log_full _01_11_2017_x2                                          log_full _01_11_2017_x3                                           log_full _01_11_2017_x4             log_full_02_11_2017                                          log_full _02_11_2017_x1                                           log_full _02_11_2017_x2                                           log_full _02_11_2017_x3                                           log_full _02_11_2017_x4and so on....      The date column consist of date in the next format : YYYY-MM-DD HH:24:SS for example : 2017-11-01 00:01:40I wanted to check the plan that I'm getting for a query that is using the date column and it seems that the planner choose to do seq scans on all tables.-Each partition consist from 15M rows.I have about 120 partitions.The query : explain select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');The output is too long but it do full scans on all paritions...any idea what can be the problem? Is it connected to the date format ?Thanks , Mariel.", "msg_date": "Sun, 4 Feb 2018 12:14:04 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "\n\nOn 02/04/2018 11:14 AM, Mariel Cherkassky wrote:\n> \n> Hi,\n> I configured range partitions on a date column of my main\n> table(log_full). Each partition represents a day in the month. 
Every day\n> partition has a list parition of 4 tables on a text column.\n> \n> log_full\n>           log_full_01_11_2017  -->\n>                                           log_full _01_11_2017_x1\n>                                           log_full _01_11_2017_x2\n>                                           log_full _01_11_2017_x3 \n>                                           log_full _01_11_2017_x4 \n>             log_full_02_11_2017\n>                                           log_full _02_11_2017_x1 \n>                                           log_full _02_11_2017_x2 \n>                                           log_full _02_11_2017_x3 \n>                                           log_full _02_11_2017_x4\n> \n> and so on....\n>       \n> \n> The date column consist of date in the next format : YYYY-MM-DD HH:24:SS\n> for example : 2017-11-01 00:01:40\n> \n> I wanted to check the plan that I'm getting for a query that is using\n> the date column and it seems that the planner choose to do seq scans on\n> all tables.\n> \n> -Each partition consist from 15M rows.\n> I have about 120 partitions.\n> \n> The query : \n> explain select count(*) from log_full where end_date between\n> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n> \n> The output is too long but it do full scans on all paritions...\n> any idea what can be the problem? Is it connected to the date format ?\n> \n\nYou haven't shown us how the partitions are defined, nor the query plan.\nSo it's rather hard to say. You mentioned text format, but then you use\nto_date() to query the partitioned table. Which I guess might be the\ncause, but it's hard to say for sure.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Sun, 4 Feb 2018 13:03:27 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "On Sun, Feb 4, 2018 at 5:14 AM, Mariel Cherkassky <\[email protected]> wrote:\n\n>\n> Hi,\n> I configured range partitions on a date column of my main table(log_full).\n> Each partition represents a day in the month. Every day partition has a\n> list parition of 4 tables on a text column.\n>\n> log_full\n> log_full_01_11_2017 -->\n> log_full _01_11_2017_x1\n> log_full _01_11_2017_x2\n> log_full _01_11_2017_x3\n> log_full _01_11_2017_x4\n> log_full_02_11_2017\n> log_full _02_11_2017_x1\n> log_full _02_11_2017_x2\n> log_full _02_11_2017_x3\n> log_full _02_11_2017_x4\n>\n> and so on....\n>\n>\n> The date column consist of date in the next format : YYYY-MM-DD HH:24:SS\n> for example : 2017-11-01 00:01:40\n>\n> I wanted to check the plan that I'm getting for a query that is using the\n> date column and it seems that the planner choose to do seq scans on all\n> tables.\n>\n> -Each partition consist from 15M rows.\n> I have about 120 partitions.\n>\n> The query :\n> explain select count(*) from log_full where end_date between\n> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n>\n> The output is too long but it do full scans on all paritions...\n> any idea what can be the problem? Is it connected to the date format ?\n>\n> Thanks , Mariel.\n>\n\nI'm wrestling with a very similar problem too - except instead of official\npartitions I have a views on top of a bunch (50+) of unioned materialized\nviews, each \"partition\" with 10M - 100M rows. 
On 9.6.6 the queries would\nuse the indexes on each materialized view. On 10.1, every materialized\nview is sequence scanned. (Killing the performance of many queries.) I\nhave 4 or 5 sets of materialized views organized this way with views on top\nof them.\n\nI've checked for invalid indexes.\n\nI've done Analyze, and Vaccuum Analyze on all sub-materialized views.\n\nI've reindexed the materialized views.\n\nI've experimented with geqo tunables.\nI've experimented with turning parallel gather off and on and setting it\nto different levels.\nI've tried setting random page cost very high, and very low.\nI tried turning nested loops on and off.\nI tried setting effective_cache_size very small.\n\nNone of the various queries using these views on top of my hand constructed\n\"partitions\" are using indexes.\n\nAll of the exact same queries used the indexes in 9.6.6 before the\nupgrade. Without the indexes, hitting these 1B+ row aggregate tables I'm\nseeing a 10x to 100x slowdown since upgrading. This is killing us.\n\nNot only that but with 50 tables under the view, and each one getting a\nparallel sequence scan, it is kind of impressive how much CPU one of these\nqueries can use at once.\n\nI'm mostly hoping with fingers crossed that something in 10.2, which is\ncoming out next week, fixes it. I was planning on posting my dilemma to\nthis list this morning since I'm running out of ideas. I really need to\nfix the issue this weekend to meet some business deadlines for data\nprocessing early in the week. So my other hail mary pass this weekend,\nbesides seeking ideas on this list, was to see if I could bump my version\nto 10.2 early. (I'm not sure how to do that since I've been using Ubuntu\npackages and waiting for official releases prior to now, but I'm sure I can\nfigure it out.)\n\nOn Sun, Feb 4, 2018 at 5:14 AM, Mariel Cherkassky <[email protected]> wrote:\nHi,I configured range partitions on a date column of my main table(log_full). Each partition represents a day in the month. Every day partition has a list parition of 4 tables on a text column.log_full          log_full_01_11_2017  -->                                          log_full _01_11_2017_x1                                          log_full _01_11_2017_x2                                          log_full _01_11_2017_x3                                           log_full _01_11_2017_x4             log_full_02_11_2017                                          log_full _02_11_2017_x1                                           log_full _02_11_2017_x2                                           log_full _02_11_2017_x3                                           log_full _02_11_2017_x4and so on....      The date column consist of date in the next format : YYYY-MM-DD HH:24:SS for example : 2017-11-01 00:01:40I wanted to check the plan that I'm getting for a query that is using the date column and it seems that the planner choose to do seq scans on all tables.-Each partition consist from 15M rows.I have about 120 partitions.The query : explain select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');The output is too long but it do full scans on all paritions...any idea what can be the problem? Is it connected to the date format ?Thanks , Mariel.I'm wrestling with a very similar problem too - except instead of official partitions I have a views on top of a bunch (50+) of unioned materialized views, each \"partition\" with 10M - 100M rows.  
On 9.6.6 the queries would use the indexes on each materialized view.  On 10.1, every materialized view is sequence scanned.  (Killing the performance of many queries.)  I have 4 or 5 sets of materialized views organized this way with views on top of them.I've checked for invalid indexes.I've done Analyze, and Vaccuum Analyze on all sub-materialized views.I've reindexed the materialized views.I've experimented with geqo tunables.I've experimented with  turning parallel gather off and on and setting it to different levels.I've tried setting random page cost very high, and very low.I tried turning nested loops on and off.I tried setting effective_cache_size very small.None of the various queries using these views on top of my hand constructed \"partitions\" are using indexes.All of the exact same queries used the indexes in 9.6.6 before the upgrade.  Without the indexes, hitting these 1B+ row aggregate tables I'm seeing a 10x to 100x slowdown since upgrading.  This is killing us.Not only that but with 50 tables under the view, and each one getting a parallel sequence scan, it is kind of impressive how much CPU one of these queries can use at once.I'm mostly hoping with fingers crossed that something in 10.2, which is coming out next week, fixes it.  I was planning on posting my dilemma to this list this morning since I'm running out of ideas.  I really need to fix the issue this weekend to meet some business deadlines for data processing early in the week.  So my other hail mary pass this weekend, besides seeking ideas on this list, was to see if I could bump my version to 10.2 early.  (I'm not sure how to do that since I've been using Ubuntu packages and waiting for official releases prior to now, but I'm sure I can figure it out.)", "msg_date": "Sun, 4 Feb 2018 07:15:24 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Mybe I wasnt clear. I'm having a 2 layers patitions mechanism :\nMy main table is called log_full :\nCREATE TABLE log_full (a text,b text,c text, start_stop text, end_Date\ndate) partition range by (end_date))\n\nEvery day I create a partition that represent data from that day :\ncreate table log_full_04_02_2018 partition of radius_log_full(end_date) for\nVALUES from ('04-02-2018 00:00:00') TO ('05-02-2018 00:00:00') partition by\nlist (start_stop) ;\n\nThe partition that represent the current day consist of 8 paritions on\ncolumn start_stop that look like that :\ncreate table log_full_04_02_2018_action_status partition of\nlog_full_04_02_2018 for VALUES in ('Start','Stop');\n\nALTER TABLE ONLY log_full_04_02_2018_action_status\n ADD CONSTRAINT log_full_04_02_2018_action_status_pkey PRIMARY KEY (a,\nb, c);\n\nI checked the plan of the next query :\nexplain select count(*) from log_full where end_date between\nto_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n\nand the result if full scan on all partitions.\n\nWhy it decided to run a full table scan on all partitions ?\n\n2018-02-04 14:03 GMT+02:00 Tomas Vondra <[email protected]>:\n\n>\n>\n> On 02/04/2018 11:14 AM, Mariel Cherkassky wrote:\n> >\n> > Hi,\n> > I configured range partitions on a date column of my main\n> > table(log_full). Each partition represents a day in the month. 
Every day\n> > partition has a list parition of 4 tables on a text column.\n> >\n> > log_full\n> > log_full_01_11_2017 -->\n> > log_full _01_11_2017_x1\n> > log_full _01_11_2017_x2\n> > log_full _01_11_2017_x3\n> > log_full _01_11_2017_x4\n> > log_full_02_11_2017\n> > log_full _02_11_2017_x1\n> > log_full _02_11_2017_x2\n> > log_full _02_11_2017_x3\n> > log_full _02_11_2017_x4\n> >\n> > and so on....\n> >\n> >\n> > The date column consist of date in the next format : YYYY-MM-DD HH:24:SS\n> > for example : 2017-11-01 00:01:40\n> >\n> > I wanted to check the plan that I'm getting for a query that is using\n> > the date column and it seems that the planner choose to do seq scans on\n> > all tables.\n> >\n> > -Each partition consist from 15M rows.\n> > I have about 120 partitions.\n> >\n> > The query :\n> > explain select count(*) from log_full where end_date between\n> > to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n> >\n> > The output is too long but it do full scans on all paritions...\n> > any idea what can be the problem? Is it connected to the date format ?\n> >\n>\n> You haven't shown us how the partitions are defined, nor the query plan.\n> So it's rather hard to say. You mentioned text format, but then you use\n> to_date() to query the partitioned table. Which I guess might be the\n> cause, but it's hard to say for sure.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nMybe I wasnt clear. I'm having a 2 layers patitions mechanism : My main table is called log_full : CREATE TABLE log_full (a text,b text,c text, start_stop text, end_Date date) partition range by (end_date))Every day I create a partition that represent data from that day : create table log_full_04_02_2018 partition of radius_log_full(end_date) for VALUES from ('04-02-2018 00:00:00') TO ('05-02-2018 00:00:00') partition by list (start_stop) ;The partition that represent the current day consist of 8 paritions on column start_stop that look like that : create table log_full_04_02_2018_action_status partition of log_full_04_02_2018 for VALUES in ('Start','Stop');ALTER TABLE ONLY log_full_04_02_2018_action_status    ADD CONSTRAINT log_full_04_02_2018_action_status_pkey PRIMARY KEY (a, b, c);I checked the plan of the next query : \nexplain select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\nand the result if full scan on all partitions.Why it decided to run a full table scan on all partitions ?2018-02-04 14:03 GMT+02:00 Tomas Vondra <[email protected]>:\n\nOn 02/04/2018 11:14 AM, Mariel Cherkassky wrote:\n>\n> Hi,\n> I configured range partitions on a date column of my main\n> table(log_full). Each partition represents a day in the month. 
Every day\n> partition has a list parition of 4 tables on a text column.\n>\n> log_full\n>           log_full_01_11_2017  -->\n>                                           log_full _01_11_2017_x1\n>                                           log_full _01_11_2017_x2\n>                                           log_full _01_11_2017_x3 \n>                                           log_full _01_11_2017_x4 \n>             log_full_02_11_2017\n>                                           log_full _02_11_2017_x1 \n>                                           log_full _02_11_2017_x2 \n>                                           log_full _02_11_2017_x3 \n>                                           log_full _02_11_2017_x4\n>\n> and so on....\n>       \n>\n> The date column consist of date in the next format : YYYY-MM-DD HH:24:SS\n> for example : 2017-11-01 00:01:40\n>\n> I wanted to check the plan that I'm getting for a query that is using\n> the date column and it seems that the planner choose to do seq scans on\n> all tables.\n>\n> -Each partition consist from 15M rows.\n> I have about 120 partitions.\n>\n> The query : \n> explain select count(*) from log_full where end_date between\n> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n>\n> The output is too long but it do full scans on all paritions...\n> any idea what can be the problem? Is it connected to the date format ?\n>\n\nYou haven't shown us how the partitions are defined, nor the query plan.\nSo it's rather hard to say. You mentioned text format, but then you use\nto_date() to query the partitioned table. Which I guess might be the\ncause, but it's hard to say for sure.\n\nregards\n\n--\nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sun, 4 Feb 2018 14:19:26 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "What is the value of guc constrain_exclusion ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 4 Feb 2018 06:19:26 -0700 (MST)", "msg_from": "legrand legrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "show constraint_exclusion;\n constraint_exclusion\n----------------------\n partition\n(1 row)\n\n2018-02-04 15:19 GMT+02:00 legrand legrand <[email protected]>:\n\n> What is the value of guc constrain_exclusion ?\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n\nshow constraint_exclusion; constraint_exclusion ---------------------- partition(1 row)2018-02-04 15:19 GMT+02:00 legrand legrand <[email protected]>:What is the value of guc constrain_exclusion ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Sun, 4 Feb 2018 15:22:20 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Explain analyse\nOutput ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 4 Feb 2018 06:29:02 -0700 (MST)", "msg_from": "legrand legrand <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using 
partitions bug" }, { "msg_contents": "On Sun, Feb 4, 2018 at 8:19 AM, legrand legrand <[email protected]\n> wrote:\n\n> What is the value of guc constrain_exclusion ?\n>\n>\n>\nIn my use case, which is a big union all behind a view, setting this to\noff, on, or partition makes no difference. It still sequence scans all of\nthe sub-tables in pg 10.1 whereas it used the indexes in 9.6.\n\nOn Sun, Feb 4, 2018 at 8:19 AM, legrand legrand <[email protected]> wrote:What is the value of guc constrain_exclusion ?\n\nIn my use case, which is a big union all behind a view, setting this to off, on, or partition makes no difference.  It still sequence scans all of the sub-tables in pg 10.1 whereas it used the indexes in 9.6.", "msg_date": "Sun, 4 Feb 2018 08:38:46 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "explain analyze takes too much time.. hours ...\nI run it now but it will take some time.\nThe output of the explain :\n\nFinalize Aggregate (cost=38058211.38..38058211.39 rows=1 width=8)\n -> Gather (cost=38058211.16..38058211.37 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=38057211.16..38057211.17 rows=1\nwidth=8)\n -> Append (cost=0.00..38040836.26 rows=6549963 width=0)\n -> Parallel Seq Scan on\nlog_full_1_11_2017_action_status (cost=0.00..39863.21 rows=1 width=\n0)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n -> Parallel Seq Scan on\nlog_full_1_11_2017_alive_status (cost=0.00..702893.03 rows=1 width=\n0)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n -> Parallel Seq Scan on\nlog_full_1_11_2017_modem_status (cost=0.00..10.59 rows=1 width=0)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n\nand so on parallel seq for each partition that I have..\n\n\n2018-02-04 15:29 GMT+02:00 legrand legrand <[email protected]>:\n\n> Explain analyse\n> Output ?\n>\n>\n>\n> --\n> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-\n> f2050081.html\n>\n>\n\nexplain analyze takes too much time.. 
hours ...I run it now but it will take some time.The output of the explain : Finalize Aggregate  (cost=38058211.38..38058211.39 rows=1 width=8)   ->  Gather  (cost=38058211.16..38058211.37 rows=2 width=8)         Workers Planned: 2         ->  Partial Aggregate  (cost=38057211.16..38057211.17 rows=1 width=8)               ->  Append  (cost=0.00..38040836.26 rows=6549963 width=0)                     ->  Parallel Seq Scan on log_full_1_11_2017_action_status  (cost=0.00..39863.21 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                     ->  Parallel Seq Scan on log_full_1_11_2017_alive_status  (cost=0.00..702893.03 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                     ->  Parallel Seq Scan on log_full_1_11_2017_modem_status  (cost=0.00..10.59 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))and so on parallel seq for each partition that I have..2018-02-04 15:29 GMT+02:00 legrand legrand <[email protected]>:Explain analyse\nOutput ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Sun, 4 Feb 2018 15:43:08 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Output of explain analyze :\n\nexplain analyze select count(*) from log_full where end_date between\nto_date('2017/12/03','YY/MM/DD') and to_date('2017/12/04','YY/MM/DD');\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------\n Finalize Aggregate (cost=38058211.38..38058211.39 rows=1 width=8) (actual\ntime=3502304.726..3502304.726 rows=1 loops=1)\n -> Gather (cost=38058211.16..38058211.37 rows=2 width=8) (actual\ntime=3502179.810..3502251.520 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (cost=38057211.16..38057211.17 rows=1\nwidth=8) (actual time=3500338.084..3500338.084 rows\n=1 loops=3)\n -> Append (cost=0.00..38040836.26 rows=6549963 width=0)\n(actual time=1513398.593..3499538.302 rows=52402\n29 loops=3)\n -> Parallel Seq Scan on\nlog_full_1_11_2017_action_status (cost=0.00..39863.21 rows=1 width=\n0) (actual time=4047.915..4047.915 rows=0 loops=3)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n Rows Removed by Filter: 286924\n -> Parallel Seq Scan on\nlog_full_1_11_2017_alive_status (cost=0.00..702893.03 rows=1 width=\n0) (actual time=63648.476..63648.476 rows=0 loops=3)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n Rows Removed by Filter: 4955092\n -> Parallel Seq Scan on\nlog_full_1_11_2017_modem_status (cost=0.00..10.59 rows=1 width=0) (\nactual time=0.001..0.001 rows=0 loops=3)\n Filter: ((end_date >=\nto_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n'2017/12/04'::text, 'YY/MM/DD'::text)))\n\n....................\n\nand so on full on on 
partitions..\n\n2018-02-04 15:43 GMT+02:00 Mariel Cherkassky <[email protected]>:\n\n> explain analyze takes too much time.. hours ...\n> I run it now but it will take some time.\n> The output of the explain :\n>\n> Finalize Aggregate (cost=38058211.38..38058211.39 rows=1 width=8)\n> -> Gather (cost=38058211.16..38058211.37 rows=2 width=8)\n> Workers Planned: 2\n> -> Partial Aggregate (cost=38057211.16..38057211.17 rows=1\n> width=8)\n> -> Append (cost=0.00..38040836.26 rows=6549963 width=0)\n> -> Parallel Seq Scan on log_full_1_11_2017_action_status\n> (cost=0.00..39863.21 rows=1 width=\n> 0)\n> Filter: ((end_date >=\n> to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n> '2017/12/04'::text, 'YY/MM/DD'::text)))\n> -> Parallel Seq Scan on log_full_1_11_2017_alive_status\n> (cost=0.00..702893.03 rows=1 width=\n> 0)\n> Filter: ((end_date >=\n> to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n> '2017/12/04'::text, 'YY/MM/DD'::text)))\n> -> Parallel Seq Scan on log_full_1_11_2017_modem_status\n> (cost=0.00..10.59 rows=1 width=0)\n> Filter: ((end_date >=\n> to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date(\n> '2017/12/04'::text, 'YY/MM/DD'::text)))\n>\n> and so on parallel seq for each partition that I have..\n>\n>\n> 2018-02-04 15:29 GMT+02:00 legrand legrand <[email protected]>:\n>\n>> Explain analyse\n>> Output ?\n>>\n>>\n>>\n>> --\n>> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-f20\n>> 50081.html\n>>\n>>\n>\n\nOutput of explain analyze : explain analyze select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/04','YY/MM/DD');                                                                                       QUERY PLAN                                                                                        ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Finalize Aggregate  (cost=38058211.38..38058211.39 rows=1 width=8) (actual time=3502304.726..3502304.726 rows=1 loops=1)   ->  Gather  (cost=38058211.16..38058211.37 rows=2 width=8) (actual time=3502179.810..3502251.520 rows=3 loops=1)         Workers Planned: 2         Workers Launched: 2         ->  Partial Aggregate  (cost=38057211.16..38057211.17 rows=1 width=8) (actual time=3500338.084..3500338.084 rows=1 loops=3)               ->  Append  (cost=0.00..38040836.26 rows=6549963 width=0) (actual time=1513398.593..3499538.302 rows=5240229 loops=3)                     ->  Parallel Seq Scan on log_full_1_11_2017_action_status  (cost=0.00..39863.21 rows=1 width=0) (actual time=4047.915..4047.915 rows=0 loops=3)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                           Rows Removed by Filter: 286924                     ->  Parallel Seq Scan on log_full_1_11_2017_alive_status  (cost=0.00..702893.03 rows=1 width=0) (actual time=63648.476..63648.476 rows=0 loops=3)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                           Rows Removed by Filter: 4955092                     ->  Parallel Seq Scan on log_full_1_11_2017_modem_status  (cost=0.00..10.59 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=3)                
           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))....................and so on full on on partitions..2018-02-04 15:43 GMT+02:00 Mariel Cherkassky <[email protected]>:explain analyze takes too much time.. hours ...I run it now but it will take some time.The output of the explain : Finalize Aggregate  (cost=38058211.38..38058211.39 rows=1 width=8)   ->  Gather  (cost=38058211.16..38058211.37 rows=2 width=8)         Workers Planned: 2         ->  Partial Aggregate  (cost=38057211.16..38057211.17 rows=1 width=8)               ->  Append  (cost=0.00..38040836.26 rows=6549963 width=0)                     ->  Parallel Seq Scan on log_full_1_11_2017_action_status  (cost=0.00..39863.21 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                     ->  Parallel Seq Scan on log_full_1_11_2017_alive_status  (cost=0.00..702893.03 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))                     ->  Parallel Seq Scan on log_full_1_11_2017_modem_status  (cost=0.00..10.59 rows=1 width=0)                           Filter: ((end_date >= to_date('2017/12/03'::text, 'YY/MM/DD'::text)) AND (end_date <= to_date('2017/12/04'::text, 'YY/MM/DD'::text)))and so on parallel seq for each partition that I have..2018-02-04 15:29 GMT+02:00 legrand legrand <[email protected]>:Explain analyse\nOutput ?\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Sun, 4 Feb 2018 16:41:32 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "\n\nAm 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n> I checked the plan of the next query :\n> explain select count(*) from log_full where end_date between \n> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n>\n\ncan you rewrite the query to\n\n... 
where end_date between '2017/12/03' and '2017/12/03'\n\n\n\nsimple test-case:\n\ntest=*# \\d+ t\n                                    Table \"public.t\"\n  Column | Type | Collation | Nullable | Default | Storage | Stats \ntarget | Description\n--------+------+-----------+----------+---------+---------+--------------+-------------\n  d      | date |           |          |         | plain |              |\nPartition key: RANGE (d)\nPartitions: t_01 FOR VALUES FROM ('2018-02-04') TO ('2018-02-05'),\n             t_02 FOR VALUES FROM ('2018-02-05') TO ('2018-02-06')\n\ntest=*# explain analyse select * from t where d between \nto_date('2018/02/04','YY/MM/DD') and to_date('2018/02/04','YY/MM/DD');\n                                                            QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n  Append  (cost=0.00..122.00 rows=26 width=4) (actual time=0.006..0.006 \nrows=0 loops=1)\n    ->  Seq Scan on t_01  (cost=0.00..61.00 rows=13 width=4) (actual \ntime=0.004..0.004 rows=0 loops=1)\n          Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) \nAND (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n    ->  Seq Scan on t_02  (cost=0.00..61.00 rows=13 width=4) (actual \ntime=0.001..0.001 rows=0 loops=1)\n          Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) \nAND (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n  Planning time: 0.241 ms\n  Execution time: 0.042 ms\n(7 rows)\n\ntest=*# explain analyse select * from t where d between '2018/02/04' and \n'2018/02/04';\n                                               QUERY PLAN\n------------------------------------------------------------------------------------------------------\n  Append  (cost=0.00..48.25 rows=13 width=4) (actual time=0.005..0.005 \nrows=0 loops=1)\n    ->  Seq Scan on t_01  (cost=0.00..48.25 rows=13 width=4) (actual \ntime=0.004..0.004 rows=0 loops=1)\n          Filter: ((d >= '2018-02-04'::date) AND (d <= '2018-02-04'::date))\n  Planning time: 0.203 ms\n  Execution time: 0.030 ms\n(5 rows)\n\ntest=*#\n\nmaybe the planner should be smart enough to do that for you, but \nobvously he can't. So it's a workaround, but it seems to solve the problem.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Sun, 4 Feb 2018 15:54:40 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Great, it solved the issue. Seems problematic that the planner do full\nscans on all partitions in the first case isnt it ? Seems like a bug ?\n\n2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n\n>\n>\n> Am 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n>\n>> I checked the plan of the next query :\n>> explain select count(*) from log_full where end_date between\n>> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n>>\n>>\n> can you rewrite the query to\n>\n> ... 
where end_date between '2017/12/03' and '2017/12/03'\n>\n>\n>\n> simple test-case:\n>\n> test=*# \\d+ t\n> Table \"public.t\"\n> Column | Type | Collation | Nullable | Default | Storage | Stats target |\n> Description\n> --------+------+-----------+----------+---------+---------+-\n> -------------+-------------\n> d | date | | | | plain | |\n> Partition key: RANGE (d)\n> Partitions: t_01 FOR VALUES FROM ('2018-02-04') TO ('2018-02-05'),\n> t_02 FOR VALUES FROM ('2018-02-05') TO ('2018-02-06')\n>\n> test=*# explain analyse select * from t where d between\n> to_date('2018/02/04','YY/MM/DD') and to_date('2018/02/04','YY/MM/DD');\n> QUERY PLAN\n> ------------------------------------------------------------\n> ---------------------------------------------------------------------\n> Append (cost=0.00..122.00 rows=26 width=4) (actual time=0.006..0.006\n> rows=0 loops=1)\n> -> Seq Scan on t_01 (cost=0.00..61.00 rows=13 width=4) (actual\n> time=0.004..0.004 rows=0 loops=1)\n> Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) AND\n> (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n> -> Seq Scan on t_02 (cost=0.00..61.00 rows=13 width=4) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) AND\n> (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n> Planning time: 0.241 ms\n> Execution time: 0.042 ms\n> (7 rows)\n>\n> test=*# explain analyse select * from t where d between '2018/02/04' and\n> '2018/02/04';\n> QUERY PLAN\n> ------------------------------------------------------------\n> ------------------------------------------\n> Append (cost=0.00..48.25 rows=13 width=4) (actual time=0.005..0.005\n> rows=0 loops=1)\n> -> Seq Scan on t_01 (cost=0.00..48.25 rows=13 width=4) (actual\n> time=0.004..0.004 rows=0 loops=1)\n> Filter: ((d >= '2018-02-04'::date) AND (d <= '2018-02-04'::date))\n> Planning time: 0.203 ms\n> Execution time: 0.030 ms\n> (5 rows)\n>\n> test=*#\n>\n> maybe the planner should be smart enough to do that for you, but obvously\n> he can't. So it's a workaround, but it seems to solve the problem.\n>\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n\nGreat, it solved the issue. Seems problematic that the planner do full scans on all partitions in the first case isnt it ? Seems like a bug ?2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n\nAm 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n\nI checked the plan of the next query :\nexplain select count(*) from log_full where end_date between to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n\n\n\ncan you rewrite the query to\n\n... 
where end_date between '2017/12/03' and '2017/12/03'\n\n\n\nsimple test-case:\n\ntest=*# \\d+ t\n                                   Table \"public.t\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description\n--------+------+-----------+----------+---------+---------+--------------+-------------\n d      | date |           |          |         | plain |              |\nPartition key: RANGE (d)\nPartitions: t_01 FOR VALUES FROM ('2018-02-04') TO ('2018-02-05'),\n            t_02 FOR VALUES FROM ('2018-02-05') TO ('2018-02-06')\n\ntest=*# explain analyse select * from t where d between to_date('2018/02/04','YY/MM/DD') and to_date('2018/02/04','YY/MM/DD');\n                                                           QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Append  (cost=0.00..122.00 rows=26 width=4) (actual time=0.006..0.006 rows=0 loops=1)\n   ->  Seq Scan on t_01  (cost=0.00..61.00 rows=13 width=4) (actual time=0.004..0.004 rows=0 loops=1)\n         Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) AND (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n   ->  Seq Scan on t_02  (cost=0.00..61.00 rows=13 width=4) (actual time=0.001..0.001 rows=0 loops=1)\n         Filter: ((d >= to_date('2018/02/04'::text, 'YY/MM/DD'::text)) AND (d <= to_date('2018/02/04'::text, 'YY/MM/DD'::text)))\n Planning time: 0.241 ms\n Execution time: 0.042 ms\n(7 rows)\n\ntest=*# explain analyse select * from t where d between '2018/02/04' and '2018/02/04';\n                                              QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Append  (cost=0.00..48.25 rows=13 width=4) (actual time=0.005..0.005 rows=0 loops=1)\n   ->  Seq Scan on t_01  (cost=0.00..48.25 rows=13 width=4) (actual time=0.004..0.004 rows=0 loops=1)\n         Filter: ((d >= '2018-02-04'::date) AND (d <= '2018-02-04'::date))\n Planning time: 0.203 ms\n Execution time: 0.030 ms\n(5 rows)\n\ntest=*#\n\nmaybe the planner should be smart enough to do that for you, but obvously he can't. So it's a workaround, but it seems to solve the problem.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com", "msg_date": "Sun, 4 Feb 2018 17:06:38 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "On Sun, Feb 04, 2018 at 05:06:38PM +0200, Mariel Cherkassky wrote:\n> Great, it solved the issue. Seems problematic that the planner do full\n> scans on all partitions in the first case isnt it ? Seems like a bug ?\n\nSee also:\nhttps://www.postgresql.org/message-id/20170725131650.GA30519%40telsasoft.com\nhttps://www.postgresql.org/message-id/20170825154434.GC16287%40telsasoft.com\n\nJustin\n\n2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n> \n> >\n> >\nAm 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n> >\n> >> I checked the plan of the next query :\n> >> explain select count(*) from log_full where end_date between\n> >> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n> >>\n> >>\n> > can you rewrite the query to\n> >\n> > ... where end_date between '2017/12/03' and '2017/12/03'\n> >\n> > maybe the planner should be smart enough to do that for you, but obvously\n> > he can't. 
So it's a workaround, but it seems to solve the problem.\n\n", "msg_date": "Sun, 4 Feb 2018 09:25:32 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "I read those two links and I dont think that they are relevant because : 1\n1)I didnt do any join.\n2)I used a where clause in my select\n\n\n\n2018-02-04 17:25 GMT+02:00 Justin Pryzby <[email protected]>:\n\n> On Sun, Feb 04, 2018 at 05:06:38PM +0200, Mariel Cherkassky wrote:\n> > Great, it solved the issue. Seems problematic that the planner do full\n> > scans on all partitions in the first case isnt it ? Seems like a bug ?\n>\n> See also:\n> https://www.postgresql.org/message-id/20170725131650.\n> GA30519%40telsasoft.com\n> https://www.postgresql.org/message-id/20170825154434.\n> GC16287%40telsasoft.com\n>\n> Justin\n>\n> 2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n> >\n> > >\n> > >\n> Am 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n> > >\n> > >> I checked the plan of the next query :\n> > >> explain select count(*) from log_full where end_date between\n> > >> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/\n> DD');\n> > >>\n> > >>\n> > > can you rewrite the query to\n> > >\n> > > ... where end_date between '2017/12/03' and '2017/12/03'\n> > >\n> > > maybe the planner should be smart enough to do that for you, but\n> obvously\n> > > he can't. So it's a workaround, but it seems to solve the problem.\n>\n\nI read those two links and I dont think that they are relevant because : 11)I didnt do any join.2)I used a where clause in my select2018-02-04 17:25 GMT+02:00 Justin Pryzby <[email protected]>:On Sun, Feb 04, 2018 at 05:06:38PM +0200, Mariel Cherkassky wrote:\n> Great, it solved the issue. Seems problematic that the planner do full\n> scans on all partitions in the first case isnt it ? Seems like a bug ?\n\nSee also:\nhttps://www.postgresql.org/message-id/20170725131650.GA30519%40telsasoft.com\nhttps://www.postgresql.org/message-id/20170825154434.GC16287%40telsasoft.com\n\nJustin\n\n2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n>\n> >\n> >\nAm 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n> >\n> >> I checked the plan of the next query :\n> >> explain select count(*) from log_full where end_date between\n> >> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/DD');\n> >>\n> >>\n> > can you rewrite the query to\n> >\n> > ... where end_date between '2017/12/03' and '2017/12/03'\n> >\n> > maybe the planner should be smart enough to do that for you, but obvously\n> > he can't. So it's a workaround, but it seems to solve the problem.", "msg_date": "Sun, 4 Feb 2018 17:28:52 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Rick Otten <[email protected]> writes:\n> I'm wrestling with a very similar problem too - except instead of official\n> partitions I have a views on top of a bunch (50+) of unioned materialized\n> views, each \"partition\" with 10M - 100M rows. On 9.6.6 the queries would\n> use the indexes on each materialized view. On 10.1, every materialized\n> view is sequence scanned.\n\nCan you post a self-contained example of this behavior? My gut reaction\nis that the changes for the partitioning feature broke some optimization\nthat used to work ... but it could easily be something else, too. 
Hard\nto say with nothing concrete to look at.\n\n> I'm mostly hoping with fingers crossed that something in 10.2, which is\n> coming out next week, fixes it.\n\nIf you'd reported this in suitable detail awhile ago, we might have been\nable to fix it for 10.2. At this point, with barely 30 hours remaining\nbefore the planned release wrap, it's unlikely that anything but the most\ntrivial fixes could get done in time.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 04 Feb 2018 10:35:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Mariel Cherkassky <[email protected]> writes:\n> Great, it solved the issue. Seems problematic that the planner do full\n> scans on all partitions in the first case isnt it ? Seems like a bug ?\n\nto_date isn't an immutable function (it depends on timezone and possibly\nsome other GUC settings). So there's a limited amount that the planner\ncan do with it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 04 Feb 2018 10:39:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "Hi Tom,\nDid you hear about any solution that is similar to oracle`s global index ?\nIs there any way to query all the partitions with one index?\n\n2018-02-04 17:39 GMT+02:00 Tom Lane <[email protected]>:\n\n> Mariel Cherkassky <[email protected]> writes:\n> > Great, it solved the issue. Seems problematic that the planner do full\n> > scans on all partitions in the first case isnt it ? Seems like a bug ?\n>\n> to_date isn't an immutable function (it depends on timezone and possibly\n> some other GUC settings). So there's a limited amount that the planner\n> can do with it.\n>\n> regards, tom lane\n>\n\nHi Tom,Did you hear about any solution that is similar to oracle`s global index ? Is there any way to query all the partitions with one index? 2018-02-04 17:39 GMT+02:00 Tom Lane <[email protected]>:Mariel Cherkassky <[email protected]> writes:\n> Great, it solved the issue. Seems problematic that the planner do full\n> scans on all partitions in the first case isnt it ? Seems like a bug ?\n\nto_date isn't an immutable function (it depends on timezone and possibly\nsome other GUC settings).  So there's a limited amount that the planner\ncan do with it.\n\n                        regards, tom lane", "msg_date": "Sun, 4 Feb 2018 17:42:23 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n\n> Rick Otten <[email protected]> writes:\n> > I'm wrestling with a very similar problem too - except instead of\n> official\n> > partitions I have a views on top of a bunch (50+) of unioned materialized\n> > views, each \"partition\" with 10M - 100M rows. On 9.6.6 the queries would\n> > use the indexes on each materialized view. On 10.1, every materialized\n> > view is sequence scanned.\n>\n> Can you post a self-contained example of this behavior? My gut reaction\n> is that the changes for the partitioning feature broke some optimization\n> that used to work ... but it could easily be something else, too. Hard\n> to say with nothing concrete to look at.\n>\n>\nI think it is worth trying to reproduce in an example. I'll try to cook\nsomething up that illustrates it. 
It should be doable.\n\n\n\n> > I'm mostly hoping with fingers crossed that something in 10.2, which is\n> > coming out next week, fixes it.\n>\n> If you'd reported this in suitable detail awhile ago, we might have been\n> able to fix it for 10.2. At this point, with barely 30 hours remaining\n> before the planned release wrap, it's unlikely that anything but the most\n> trivial fixes could get done in time.\n>\n>\nI wish I could move faster on identifying and reporting this sort of thing.\n\nWe only cut over to 10.1 about 2 weeks ago and didn't discover the issue\nuntil we'd been running for a few days (and eliminated everything else we\ncould think of - including the bug that is fixed in 10.2 that crashes some\nqueries when they have parallel gather enabled).\n\nMy hope is that 10.2 will fix our issue \"by accident\" rather than on\npurpose.\n\nI'll try to build a test case this afternoon.\n\n--\n\nI use a view on top of the materialized views so I can swap them in and out\nwith a \"create or replace\" that doesn't disrupt downstream depndencies.\n\nI'm currently thinking to work around this issue for the short term, I need\nto build a mat view on top of the mat views, and then put my view on top of\nthat (so I can swap out the big matview without disrupting downstream\ndependencies). It means a lot more disk will be needed, and moving\npartitions around will be much less elegant, but I can live with that if it\nfixes the performance problems caused by the sequence scanning. Hopefully\nthe planner will use the indexes on the \"big\" materialized view.\n\nI'm going to try that hack this afternoon too.\n\nI was going to blog about this approach of using a view to do partitioning\nof materialized views, but I'm not sure when I'll ever get to it. It was\nthis list that originally gave me the idea to try this approach. 
The\npartiions are actually materialized views of foreign tables from a Hadoop\ncluster.\n\nFWIW, here is the function that builds the view:\n\n---\ncreate or replace function treasure_data.\"relinkMyView\"()\nreturns varchar\nsecurity definer\nas\n$$\ndeclare\n wrMatView varchar;\n fromString text;\nbegin\n\n for wrMatView in\n\n select\n c.relname\n from\n pg_class c\n join pg_namespace n on c.relnamespace = n.oid\n where\n c.relkind = 'm'\n and\n n.nspname = 'myschema'\n and\n c.relname ~ 'my_matview_partition_\\d\\d\\d\\d_\\d\\d$'\n order by\n c.relname\n\n loop\n\n if length(fromString) > 0 then\n fromString := format ('%s union all select * from myschema.%I',\nfromString, wrMatView);\n else\n fromString := format ('select * from myschema.%I', wrMatView);\n end if;\n\n end loop;\n\n execute format ('create or replace view myschema.my_view as %s',\nfromString);\n\n grant select on myschema.my_view to some_read_only_role;\n grant select on myschema.my_view to some_read_write_role;\n\n return format ('create or replace view myschema.my_view as %s',\nfromString);\n\nend\n$$ language plpgsql\n;\n\n---\n\nTo swap a partition out, I rename it to something that does not conform to\nthe regex pattern above, and then run the function.\nTo swap a partition in, I rename it to something that does conform to the\nregex pattern, and then run the function.\n\n(of course, that is mostly automated, but it works by hand too)\n\nThis has been working great for us until we jumped to PG 10, when suddenly\nI can't get the planner to use the indexes in the partitions any more.\n\nOn Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:Rick Otten <[email protected]> writes:\n> I'm wrestling with a very similar problem too - except instead of official\n> partitions I have a views on top of a bunch (50+) of unioned materialized\n> views, each \"partition\" with 10M - 100M rows.  On 9.6.6 the queries would\n> use the indexes on each materialized view.  On 10.1, every materialized\n> view is sequence scanned.\n\nCan you post a self-contained example of this behavior?  My gut reaction\nis that the changes for the partitioning feature broke some optimization\nthat used to work ... but it could easily be something else, too.  Hard\nto say with nothing concrete to look at.\nI think it is worth trying to reproduce in an example.  I'll try to cook something up that illustrates it.  It should be doable. \n> I'm mostly hoping with fingers crossed that something in 10.2, which is\n> coming out next week, fixes it.\n\nIf you'd reported this in suitable detail awhile ago, we might have been\nable to fix it for 10.2.  At this point, with barely 30 hours remaining\nbefore the planned release wrap, it's unlikely that anything but the most\ntrivial fixes could get done in time.I wish I could move faster on identifying and reporting this sort of thing.We only cut over to 10.1 about 2 weeks ago and didn't discover the issue until we'd been running for a few days (and eliminated everything else we could think of - including the bug that is fixed in 10.2 that crashes some queries when they have parallel gather enabled). My hope is that 10.2 will fix our issue \"by accident\" rather than on purpose.I'll try to build a test case this afternoon.--I use a view on top of the materialized views so I can swap them in and out with a \"create or replace\" that doesn't disrupt downstream depndencies. 
I'm currently thinking to work around this issue for the short term, I need to build a mat view on top of the mat views, and then put my view on top of that (so I can swap out the big matview without disrupting downstream dependencies).  It means a lot more disk will be needed, and moving partitions around will be much less elegant, but I can live with that if it fixes the performance problems caused by the sequence scanning.  Hopefully the planner will use the indexes on the \"big\" materialized view.I'm going to try that hack this afternoon too.I was going to blog about this approach of using a view to do partitioning of materialized views, but I'm not sure when I'll ever get to it.  It was this list that originally gave me the idea to try this approach.  The partiions are actually materialized views of foreign tables from a Hadoop cluster.FWIW, here is the function that builds the view:---create or replace function treasure_data.\"relinkMyView\"()returns varcharsecurity defineras$$declare    wrMatView  varchar;    fromString text;begin    for wrMatView in        select            c.relname        from            pg_class c            join pg_namespace n on c.relnamespace = n.oid        where            c.relkind = 'm'            and            n.nspname = 'myschema'            and            c.relname ~ 'my_matview_partition_\\d\\d\\d\\d_\\d\\d$'        order by            c.relname    loop        if length(fromString) > 0 then            fromString := format ('%s union all select * from myschema.%I', fromString, wrMatView);        else            fromString := format ('select * from myschema.%I', wrMatView);        end if;    end loop;    execute format ('create or replace view myschema.my_view as %s', fromString);    grant select on myschema.my_view to some_read_only_role;    grant select on myschema.my_view to some_read_write_role;    return format ('create or replace view myschema.my_view as %s', fromString);end$$ language plpgsql;---To swap a partition out, I rename it to something that does not conform to the regex pattern above, and then run the function.To swap a partition in, I rename it to something that does conform to the regex pattern, and then run the function.(of course, that is mostly automated, but it works by hand too) This has been working great for us until we jumped to PG 10, when suddenly I can't get the planner to use the indexes in the partitions any more.", "msg_date": "Sun, 4 Feb 2018 11:04:56 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "On Sun, Feb 04, 2018 at 05:28:52PM +0200, Mariel Cherkassky wrote:\n> I read those two links and I dont think that they are relevant because : 1\n> 1)I didnt do any join.\n> 2)I used a where clause in my select\n\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html\n|The following caveats apply to constraint exclusion:\n| Constraint exclusion only works when the query's WHERE clause contains\n|constants (or externally supplied parameters). 
For example, a comparison\n|against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized,\n|since the planner cannot know which partition the function value might fall\n|into at run time.\n[..]\n\nThe issue is with the comparison between function call to to_date() compared\nwith constant - that doesn't allow constraint exclusion as currently\nimplemented.\n\nJustin\n\n2018-02-04 16:54 GMT+02:00 Andreas Kretschmer <[email protected]>:\n> > Am 04.02.2018 um 13:19 schrieb Mariel Cherkassky:\n> > > >\n> > > >> I checked the plan of the next query :\n> > > >> explain select count(*) from log_full where end_date between\n> > > >> to_date('2017/12/03','YY/MM/DD') and to_date('2017/12/03','YY/MM/\n> > DD');\n> > > >>\n> > > >>\n> > > > can you rewrite the query to\n> > > >\n> > > > ... where end_date between '2017/12/03' and '2017/12/03'\n> > > >\n> > > > maybe the planner should be smart enough to do that for you, but obvously\n> > > > he can't. So it's a workaround, but it seems to solve the problem.\n\n", "msg_date": "Mon, 5 Feb 2018 13:19:22 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql 10.1 wrong plan in when using partitions bug" }, { "msg_contents": "On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:\n> On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n> \n> > Rick Otten <[email protected]> writes:\n> > > I'm wrestling with a very similar problem too - except instead of\n> > official\n> > > partitions I have a views on top of a bunch (50+) of unioned materialized\n> > > views, each \"partition\" with 10M - 100M rows. On 9.6.6 the queries would\n> > > use the indexes on each materialized view. On 10.1, every materialized\n> > > view is sequence scanned.\n\nI think it'd be useful to see the plan from explain analyze, on both the\n\"parent\" view and a child, with and without SET enable_seqscan=off,\n\nJustin\n\n", "msg_date": "Tue, 6 Feb 2018 12:18:07 -0600", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failing to use index on UNION of matviews (Re: postgresql 10.1\n wrong plan in when using partitions bug)" }, { "msg_contents": "Ooo. I wasn't aware of that option. (Learn something new every day!)\n\nSetting enable_seqscan=off takes one of the shorter queries I was working\nwith from about 3 minutes to 300ms. This is a comparable performance\nimprovement to where I put a materialized view (with indexes) on top of the\nmaterialized views instead of using a simple view on top of the\nmaterialized views. I'll have to try it with the query that takes 12 hours.\n\nI built a test case, but can't get it to reproduce what I'm seeing on my\nproduction database (it keeps choosing the indexes). I'm still fiddling\nwith that test case so I can easily share it. 
I'm also back to trying to\nfigure out what is different between my laptop database and the test case I\nbuilt and the real world query with the real data, and pondering the worst\nquery itself to see if some sort of re-write will help.\n\n\n\nOn Tue, Feb 6, 2018 at 1:18 PM, Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:\n> > On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n> >\n> > > Rick Otten <[email protected]> writes:\n> > > > I'm wrestling with a very similar problem too - except instead of\n> > > official\n> > > > partitions I have a views on top of a bunch (50+) of unioned\n> materialized\n> > > > views, each \"partition\" with 10M - 100M rows. On 9.6.6 the queries\n> would\n> > > > use the indexes on each materialized view. On 10.1, every\n> materialized\n> > > > view is sequence scanned.\n>\n> I think it'd be useful to see the plan from explain analyze, on both the\n> \"parent\" view and a child, with and without SET enable_seqscan=off,\n>\n> Justin\n>\n\nOoo.  I wasn't aware of that option.  (Learn something new every day!)Setting enable_seqscan=off takes one of the shorter queries I was working with from about 3 minutes to 300ms.   This is a comparable performance improvement to where I put a materialized view (with indexes) on top of the materialized views instead of using a simple view on top of the materialized views.  I'll have to try it with the query that takes 12 hours.I built a test case, but can't get it to reproduce what I'm seeing on my production database (it keeps choosing the indexes).  I'm still fiddling with that test case so I can easily share it.  I'm also back to trying to figure out what is different between my laptop database and the test case I built and the real world query with the real data, and pondering the worst query itself to see if some sort of re-write will help.On Tue, Feb 6, 2018 at 1:18 PM, Justin Pryzby <[email protected]> wrote:On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:\n> On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n>\n> > Rick Otten <[email protected]> writes:\n> > > I'm wrestling with a very similar problem too - except instead of\n> > official\n> > > partitions I have a views on top of a bunch (50+) of unioned materialized\n> > > views, each \"partition\" with 10M - 100M rows.  On 9.6.6 the queries would\n> > > use the indexes on each materialized view.  On 10.1, every materialized\n> > > view is sequence scanned.\n\nI think it'd be useful to see the plan from explain analyze, on both the\n\"parent\" view and a child, with and without SET enable_seqscan=off,\n\nJustin", "msg_date": "Tue, 6 Feb 2018 15:02:56 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failing to use index on UNION of matviews (Re: postgresql 10.1\n wrong plan in when using partitions bug)" }, { "msg_contents": "On Tue, Feb 6, 2018 at 3:02 PM, Rick Otten <[email protected]> wrote:\n\n> Ooo. I wasn't aware of that option. (Learn something new every day!)\n>\n> Setting enable_seqscan=off takes one of the shorter queries I was working\n> with from about 3 minutes to 300ms. This is a comparable performance\n> improvement to where I put a materialized view (with indexes) on top of the\n> materialized views instead of using a simple view on top of the\n> materialized views. 
I'll have to try it with the query that takes 12 hours.\n>\n> I built a test case, but can't get it to reproduce what I'm seeing on my\n> production database (it keeps choosing the indexes). I'm still fiddling\n> with that test case so I can easily share it. I'm also back to trying to\n> figure out what is different between my laptop database and the test case I\n> built and the real world query with the real data, and pondering the worst\n> query itself to see if some sort of re-write will help.\n>\n>\n>\n> On Tue, Feb 6, 2018 at 1:18 PM, Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:\n>> > On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n>> >\n>> > > Rick Otten <[email protected]> writes:\n>> > > > I'm wrestling with a very similar problem too - except instead of\n>> > > official\n>> > > > partitions I have a views on top of a bunch (50+) of unioned\n>> materialized\n>> > > > views, each \"partition\" with 10M - 100M rows. On 9.6.6 the queries\n>> would\n>> > > > use the indexes on each materialized view. On 10.1, every\n>> materialized\n>> > > > view is sequence scanned.\n>>\n>> I think it'd be useful to see the plan from explain analyze, on both the\n>> \"parent\" view and a child, with and without SET enable_seqscan=off,\n>>\n>> Justin\n>>\n>\n>\nSorry, I didn't mean to \"top reply\". My bad.\n\nOn Tue, Feb 6, 2018 at 3:02 PM, Rick Otten <[email protected]> wrote:Ooo.  I wasn't aware of that option.  (Learn something new every day!)Setting enable_seqscan=off takes one of the shorter queries I was working with from about 3 minutes to 300ms.   This is a comparable performance improvement to where I put a materialized view (with indexes) on top of the materialized views instead of using a simple view on top of the materialized views.  I'll have to try it with the query that takes 12 hours.I built a test case, but can't get it to reproduce what I'm seeing on my production database (it keeps choosing the indexes).  I'm still fiddling with that test case so I can easily share it.  I'm also back to trying to figure out what is different between my laptop database and the test case I built and the real world query with the real data, and pondering the worst query itself to see if some sort of re-write will help.On Tue, Feb 6, 2018 at 1:18 PM, Justin Pryzby <[email protected]> wrote:On Sun, Feb 04, 2018 at 11:04:56AM -0500, Rick Otten wrote:\n> On Sun, Feb 4, 2018 at 10:35 AM, Tom Lane <[email protected]> wrote:\n>\n> > Rick Otten <[email protected]> writes:\n> > > I'm wrestling with a very similar problem too - except instead of\n> > official\n> > > partitions I have a views on top of a bunch (50+) of unioned materialized\n> > > views, each \"partition\" with 10M - 100M rows.  On 9.6.6 the queries would\n> > > use the indexes on each materialized view.  On 10.1, every materialized\n> > > view is sequence scanned.\n\nI think it'd be useful to see the plan from explain analyze, on both the\n\"parent\" view and a child, with and without SET enable_seqscan=off,\n\nJustin\nSorry, I didn't mean to \"top reply\".  My bad.", "msg_date": "Tue, 6 Feb 2018 15:03:24 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failing to use index on UNION of matviews (Re: postgresql 10.1\n wrong plan in when using partitions bug)" }, { "msg_contents": ">\n>\n>>\n>> Setting enable_seqscan=off takes one of the shorter queries I was working\n>> with from about 3 minutes to 300ms. 
This is a comparable performance\n>> improvement to where I put a materialized view (with indexes) on top of the\n>> materialized views instead of using a simple view on top of the\n>> materialized views. I'll have to try it with the query that takes 12 hours.\n>>\n>>\n>\nThe query that takes 12 hours and won't use indexes when I feel it should\nis a materialized view refresh. When I set it before testing the plan with\na simple explain on the query it definitely gets it to use all of the\nindexes. Does setting something like \"enable_seqscan=off\" work when I\nfollow it with a \"refresh materialized view concurrently\" instead of a\nsimple select? I'll try it to see if it helps the refresh time, but I\nthought I'd ask.\n\n(I got pulled into another problem since my last email, so I haven't had a\nchance to follow up.)\n\nSetting enable_seqscan=off takes one of the shorter queries I was working with from about 3 minutes to 300ms.   This is a comparable performance improvement to where I put a materialized view (with indexes) on top of the materialized views instead of using a simple view on top of the materialized views.  I'll have to try it with the query that takes 12 hours.\nThe query that takes 12 hours and won't use indexes when I feel it should is a materialized view refresh.  When I set it before testing the plan with a simple explain on the query it definitely gets it to use all of the indexes.  Does setting something like \"enable_seqscan=off\" work when I follow it with a \"refresh materialized view concurrently\" instead of a simple select?   I'll try it to see if it helps the refresh time, but I thought I'd ask.(I got pulled into another problem since my last email, so I haven't had a chance to follow up.)", "msg_date": "Thu, 8 Feb 2018 06:04:36 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: failing to use index on UNION of matviews (Re: postgresql 10.1\n wrong plan in when using partitions bug)" } ]
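The thread above ends with an open question: whether a planner setting such as enable_seqscan=off carries over into a materialized view refresh. It is an ordinary session GUC, and the refresh re-plans the view's defining query in the current session, so it should apply. A minimal, hedged sketch follows; the view name myschema.my_big_matview is a placeholder, not an object from the thread:

SET enable_seqscan = off;                  -- discourage sequential scans for this session only
REFRESH MATERIALIZED VIEW CONCURRENTLY myschema.my_big_matview;  -- hypothetical view name
RESET enable_seqscan;                      -- restore the default afterwards

Keeping the setting at session level and resetting it right after the refresh avoids penalising other queries for which a sequential scan really is the cheapest plan.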
[ { "msg_contents": "This is a bit off-topic, since it is not about the performance of PG itself.\n\nBut maybe some have the same issue.\n\nWe run PostgreSQL in virtual machines which get provided by our customer.\n\nWe are not responsible for the hypervisor and have not access to it.\n\nThe IO performance of our application was terrible slow yesterday.\n\nThe users blamed us, but it seems that there was something wrong with the hypervisor.\n\nFor the next time I would like to have reliable figures, to underline my guess that the hypervisor (and not our \napplication) is the bottle neck.\n\nI have the vague strategy to make some io performance check every N minutes and record the numbers.\n\nOf course I could do some dirty scripting, but I would like to avoid to re-invent things. I guess this was already \nsolved by people which have more brain and more experience than I have :-)\n\nWhat do you suggest to get some reliable figures?\n\nRegards,\n Thomas Güttler\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n", "msg_date": "Mon, 5 Feb 2018 14:14:32 +0100", "msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>", "msg_from_op": true, "msg_subject": "OT: Performance of VM" }, { "msg_contents": "\n\nAm 05.02.2018 um 14:14 schrieb Thomas Güttler:\n> What do you suggest to get some reliable figures? \n\nsar is often recommended, see \nhttps://blog.2ndquadrant.com/in-the-defense-of-sar/.\n\nCan you exclude other reasons like vacuum / vacuum freeze?\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Mon, 5 Feb 2018 14:26:32 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" }, { "msg_contents": "Have them check the memory and CPU allocation of the hypervisor, make sure\nits not overallocated. Make sure the partitions for stroage are aligned\n(see here:\nhttps://blogs.vmware.com/vsphere/2011/08/guest-os-partition-alignment.html)\n. Install tuned, and enable the throughput performance profile. Oracle has\na problem with transparent hugepages, postgres may well have the same\nproblem, so consider disabling transparent hugepages. There is no reason\nwhy performance on a VM would be worse than performance on a physical\nserver.\n\nOn Mon, Feb 5, 2018 at 7:26 AM, Andreas Kretschmer <[email protected]>\nwrote:\n\n>\n>\n> Am 05.02.2018 um 14:14 schrieb Thomas Güttler:\n>\n>> What do you suggest to get some reliable figures?\n>>\n>\n> sar is often recommended, see https://blog.2ndquadrant.com/i\n> n-the-defense-of-sar/.\n>\n> Can you exclude other reasons like vacuum / vacuum freeze?\n>\n>\n>\n> Regards, Andreas\n>\n> --\n> 2ndQuadrant - The PostgreSQL Support Company.\n> www.2ndQuadrant.com\n>\n>\n>\n\n\n-- \nAndrew W. Kerber\n\n'If at first you dont succeed, dont take up skydiving.'\n\nHave them check the memory and CPU allocation of the hypervisor, make sure its not overallocated. Make sure the partitions for stroage are aligned (see here: https://blogs.vmware.com/vsphere/2011/08/guest-os-partition-alignment.html) . Install tuned, and enable the throughput performance profile. Oracle has a problem with transparent hugepages, postgres may well have the same problem, so consider disabling transparent hugepages.  There is no reason why performance on a VM would be worse than performance on a physical server. 
On Mon, Feb 5, 2018 at 7:26 AM, Andreas Kretschmer <[email protected]> wrote:\n\nAm 05.02.2018 um 14:14 schrieb Thomas Güttler:\n\nWhat do you suggest to get some reliable figures? \n\n\nsar is often recommended, see https://blog.2ndquadrant.com/in-the-defense-of-sar/.\n\nCan you exclude other reasons like vacuum / vacuum freeze?\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n-- Andrew W. Kerber'If at first you dont succeed, dont take up skydiving.'", "msg_date": "Mon, 5 Feb 2018 10:22:17 -0600", "msg_from": "Andrew Kerber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" }, { "msg_contents": "\n\nAm 05.02.2018 um 17:22 schrieb Andrew Kerber:\n> Oracle has a problem with transparent hugepages, postgres may well \n> have the same problem, so consider disabling transparent hugepages. \n\nyes, that's true.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Mon, 5 Feb 2018 17:33:27 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" }, { "msg_contents": "\n\nAm 05.02.2018 um 14:26 schrieb Andreas Kretschmer:\n> \n> \n> Am 05.02.2018 um 14:14 schrieb Thomas Güttler:\n>> What do you suggest to get some reliable figures? \n> \n> sar is often recommended, see https://blog.2ndquadrant.com/in-the-defense-of-sar/.\n> \n> Can you exclude other reasons like vacuum / vacuum freeze?\n\nIn the current case it was a problem in the hypervisor.\n\nBut I want to be prepared for the next time.\n\nThe tool sar looks good. This way I can generate a chart where I can see peaks. Nice.\n\n.... But one thing is still unclear. Imagine I see a peak in the chart. The peak\nwas some hours ago. AFAIK sar has only the aggregated numbers.\n\nBut I need to know details if I want to answer the question \"Why?\". The peak\nhas gone and ps/top/iotop don't help me anymore.\n\nAny idea?\n\nRegards,\n Thomas Güttler\n\n\n\n\n\n-- \nThomas Guettler http://www.thomas-guettler.de/\nI am looking for feedback: https://github.com/guettli/programming-guidelines\n\n", "msg_date": "Tue, 6 Feb 2018 15:31:27 +0100", "msg_from": "=?UTF-8?Q?Thomas_G=c3=bcttler?= <[email protected]>", "msg_from_op": true, "msg_subject": "Details after Load Peak was: OT: Performance of VM" }, { "msg_contents": "On Tue, 2018-02-06 at 15:31 +0100, Thomas Güttler wrote:\n> \n.... But one thing is still unclear. Imagine I see a peak in the chart. The peak\n> was some hours ago. AFAIK sar has only the aggregated numbers.\n> \n> But I need to know details if I want to answer the question \"Why?\". The peak\n> has gone and ps/top/iotop don't help me anymore.\n> \n\nThe typical solution is to store stats on everything you can think of\nwith munin, cacti, ganglia, or similar systems.\n\nI know with ganglia at least, in addition to all the many details it\nalready tracks on a system and the many plugins already available for\nit, you can write your own plugins or simple agents, so you can keep\nstats on anything you can code around.\n\nMunin's probably the easiest to try out, though.\nOn Tue, 2018-02-06 at 15:31 +0100, Thomas Güttler wrote:\n.... But one thing is still unclear. Imagine I see a peak in the chart. The peak\nwas some hours ago. AFAIK sar has only the aggregated numbers.\n\nBut I need to know details if I want to answer the question \"Why?\". 
The peak\nhas gone and ps/top/iotop don't help me anymore.\n\nThe typical solution is to store stats on everything you can think of with munin, cacti, ganglia, or similar systems.I know with ganglia at least, in addition to all the many details it already tracks on a system and the many plugins already available for it, you can write your own plugins or simple agents, so you can keep stats on anything you can code around.Munin's probably the easiest to try out, though.", "msg_date": "Tue, 06 Feb 2018 07:30:59 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Details after Load Peak was: OT: Performance of VM" }, { "msg_contents": "On Mon, Feb 5, 2018 at 5:22 PM, Andrew Kerber <[email protected]> wrote:\n> Have them check the memory and CPU allocation of the hypervisor, make sure\n> its not overallocated. Make sure the partitions for stroage are aligned (see\n> here:\n> https://blogs.vmware.com/vsphere/2011/08/guest-os-partition-alignment.html)\n> . Install tuned, and enable the throughput performance profile. Oracle has a\n> problem with transparent hugepages, postgres may well have the same problem,\n> so consider disabling transparent hugepages. There is no reason why\n> performance on a VM would be worse than performance on a physical server.\n\nNot theoretically. But in practice if you have anything run in a VM\nlike in this case you do not know what else is working on that box.\nAnalyzing these issues can be really cumbersome and tricky. This is\nwhy I am generally skeptical of running a resource intensive\napplication like a RDBMS in a VM. To get halfway predictable results\nyou want at least a minimum of resources (CPU, memory, IO bandwidth)\nreserved for that VM.\n\nAnecdote: we once had a customer run our application in a VM (which is\nsupported) and complain about slowness. Eventually we found out that\nthey over committed memory - not in sum for all VMs which is common,\nbut this single VM had been configured to have more memory than was\nphysically available in the machine.\n\nKind regards\n\nrobert\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n", "msg_date": "Sat, 10 Feb 2018 12:20:59 +0100", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" }, { "msg_contents": "I am consultant that specializes in virtualizing oracle enterprise level workloads. I’m picking up Postgres as a secondary skill. You are right if you don’t manage it properly, you can have problems running enterprise workloads on vm s. But it can be done with proper management. And the HA and DR advantages of virtual systems are huge. \n\nSent from my iPhone\n\n> On Feb 10, 2018, at 5:20 AM, Robert Klemme <[email protected]> wrote:\n> \n>> On Mon, Feb 5, 2018 at 5:22 PM, Andrew Kerber <[email protected]> wrote:\n>> Have them check the memory and CPU allocation of the hypervisor, make sure\n>> its not overallocated. Make sure the partitions for stroage are aligned (see\n>> here:\n>> https://blogs.vmware.com/vsphere/2011/08/guest-os-partition-alignment.html)\n>> . Install tuned, and enable the throughput performance profile. Oracle has a\n>> problem with transparent hugepages, postgres may well have the same problem,\n>> so consider disabling transparent hugepages. There is no reason why\n>> performance on a VM would be worse than performance on a physical server.\n> \n> Not theoretically. 
But in practice if you have anything run in a VM\n> like in this case you do not know what else is working on that box.\n> Analyzing these issues can be really cumbersome and tricky. This is\n> why I am generally skeptical of running a resource intensive\n> application like a RDBMS in a VM. To get halfway predictable results\n> you want at least a minimum of resources (CPU, memory, IO bandwidth)\n> reserved for that VM.\n> \n> Anecdote: we once had a customer run our application in a VM (which is\n> supported) and complain about slowness. Eventually we found out that\n> they over committed memory - not in sum for all VMs which is common,\n> but this single VM had been configured to have more memory than was\n> physically available in the machine.\n> \n> Kind regards\n> \n> robert\n> \n> -- \n> [guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n> - without end}\n> http://blog.rubybestpractices.com/\n\n", "msg_date": "Sat, 10 Feb 2018 08:58:41 -0600", "msg_from": "Andrew Kerber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" }, { "msg_contents": "Am 06.02.2018 um 15:31 schrieb Thomas Güttler:\n> \n> \n> Am 05.02.2018 um 14:26 schrieb Andreas Kretschmer:\n>>\n>>\n>> Am 05.02.2018 um 14:14 schrieb Thomas Güttler:\n>>> What do you suggest to get some reliable figures? \n>>\n>> sar is often recommended, see\n>> https://blog.2ndquadrant.com/in-the-defense-of-sar/.\n>>\n>> Can you exclude other reasons like vacuum / vacuum freeze?\n> \n> In the current case it was a problem in the hypervisor.\n> \n> But I want to be prepared for the next time.\n> \n> The tool sar looks good. This way I can generate a chart where I can see\n> peaks. Nice.\n> \n> .... But one thing is still unclear. Imagine I see a peak in the chart.\n> The peak\n> was some hours ago. AFAIK sar has only the aggregated numbers.\n> \n> But I need to know details if I want to answer the question \"Why?\". The\n> peak\n> has gone and ps/top/iotop don't help me anymore.\n> \n> Any idea?\n\nI love atop (atoptool.nl) for exactly that kind of situation. It will\nsave a snapshot every 10 minutes by default, which you can then simply\n\"scroll\" back to. Helped me pinpointing nightly issues countless times.\n\nOnly really available for Linux though (in case you're on *BSD).\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n_____________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX.\nTen years later they are choosing Windows over UNIX.\nWhat part of that message aren't you getting? - Tom Payne", "msg_date": "Tue, 13 Feb 2018 22:15:31 +0100", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Details after Load Peak was: OT: Performance of VM" }, { "msg_contents": "+1 for atop. Be sure to adjust the sampling interval so it suits your\nneeds. 
It'll tell you what caused the spike.\n\nAlternatively you could probably use sysdig, but I expect that'd result in\na fair performance hit if your system is already struggling.\n\nMicky\n\nOn 14 February 2018 at 08:15, Gunnar \"Nick\" Bluth <[email protected]>\nwrote:\n\n> Am 06.02.2018 um 15:31 schrieb Thomas Güttler:\n> >\n> >\n> > Am 05.02.2018 um 14:26 schrieb Andreas Kretschmer:\n> >>\n> >>\n> >> Am 05.02.2018 um 14:14 schrieb Thomas Güttler:\n> >>> What do you suggest to get some reliable figures?\n> >>\n> >> sar is often recommended, see\n> >> https://blog.2ndquadrant.com/in-the-defense-of-sar/.\n> >>\n> >> Can you exclude other reasons like vacuum / vacuum freeze?\n> >\n> > In the current case it was a problem in the hypervisor.\n> >\n> > But I want to be prepared for the next time.\n> >\n> > The tool sar looks good. This way I can generate a chart where I can see\n> > peaks. Nice.\n> >\n> > .... But one thing is still unclear. Imagine I see a peak in the chart.\n> > The peak\n> > was some hours ago. AFAIK sar has only the aggregated numbers.\n> >\n> > But I need to know details if I want to answer the question \"Why?\". The\n> > peak\n> > has gone and ps/top/iotop don't help me anymore.\n> >\n> > Any idea?\n>\n> I love atop (atoptool.nl) for exactly that kind of situation. It will\n> save a snapshot every 10 minutes by default, which you can then simply\n> \"scroll\" back to. Helped me pinpointing nightly issues countless times.\n>\n> Only really available for Linux though (in case you're on *BSD).\n>\n> Best regards,\n> --\n> Gunnar \"Nick\" Bluth\n> RHCE/SCLA\n>\n> Mobil +49 172 8853339\n> Email: [email protected]\n> _____________________________________________________________\n> In 1984 mainstream users were choosing VMS over UNIX.\n> Ten years later they are choosing Windows over UNIX.\n> What part of that message aren't you getting? - Tom Payne\n>\n>\n>\n\n+1 for atop. Be sure to adjust the sampling interval so it suits your needs. It'll tell you what caused the spike.Alternatively you could probably use sysdig, but I expect that'd result in a fair performance hit if your system is already struggling.MickyOn 14 February 2018 at 08:15, Gunnar \"Nick\" Bluth <[email protected]> wrote:Am 06.02.2018 um 15:31 schrieb Thomas Güttler:\n>\n>\n> Am 05.02.2018 um 14:26 schrieb Andreas Kretschmer:\n>>\n>>\n>> Am 05.02.2018 um 14:14 schrieb Thomas Güttler:\n>>> What do you suggest to get some reliable figures?\n>>\n>> sar is often recommended, see\n>> https://blog.2ndquadrant.com/in-the-defense-of-sar/.\n>>\n>> Can you exclude other reasons like vacuum / vacuum freeze?\n>\n> In the current case it was a problem in the hypervisor.\n>\n> But I want to be prepared for the next time.\n>\n> The tool sar looks good. This way I can generate a chart where I can see\n> peaks. Nice.\n>\n> .... But one thing is still unclear. Imagine I see a peak in the chart.\n> The peak\n> was some hours ago. AFAIK sar has only the aggregated numbers.\n>\n> But I need to know details if I want to answer the question \"Why?\". The\n> peak\n> has gone and ps/top/iotop don't help me anymore.\n>\n> Any idea?\n\nI love atop (atoptool.nl) for exactly that kind of situation. It will\nsave a snapshot every 10 minutes by default, which you can then simply\n\"scroll\" back to. 
Helped me pinpointing nightly issues countless times.\n\nOnly really available for Linux though (in case you're on *BSD).\n\nBest regards,\n--\nGunnar \"Nick\" Bluth\nRHCE/SCLA\n\nMobil +49 172 8853339\nEmail: [email protected]\n_____________________________________________________________\nIn 1984 mainstream users were choosing VMS over UNIX.\nTen years later they are choosing Windows over UNIX.\nWhat part of that message aren't you getting? - Tom Payne", "msg_date": "Wed, 14 Feb 2018 08:55:58 +1100", "msg_from": "Micky Gough <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Details after Load Peak was: OT: Performance of VM" }, { "msg_contents": "\n\nOn 11/02/18 00:20, Robert Klemme wrote:\n> On Mon, Feb 5, 2018 at 5:22 PM, Andrew Kerber <[email protected]> wrote:\n>> Have them check the memory and CPU allocation of the hypervisor, make sure\n>> its not overallocated. Make sure the partitions for stroage are aligned (see\n>> here:\n>> https://blogs.vmware.com/vsphere/2011/08/guest-os-partition-alignment.html)\n>> . Install tuned, and enable the throughput performance profile. Oracle has a\n>> problem with transparent hugepages, postgres may well have the same problem,\n>> so consider disabling transparent hugepages. There is no reason why\n>> performance on a VM would be worse than performance on a physical server.\n> Not theoretically. But in practice if you have anything run in a VM\n> like in this case you do not know what else is working on that box.\n> Analyzing these issues can be really cumbersome and tricky. This is\n> why I am generally skeptical of running a resource intensive\n> application like a RDBMS in a VM. To get halfway predictable results\n> you want at least a minimum of resources (CPU, memory, IO bandwidth)\n> reserved for that VM.\n>\n> Anecdote: we once had a customer run our application in a VM (which is\n> supported) and complain about slowness. Eventually we found out that\n> they over committed memory - not in sum for all VMs which is common,\n> but this single VM had been configured to have more memory than was\n> physically available in the machine.\n>\n\nAgreed. If you can get the IO layer to have some type of guaranteed \nperformance (e.g AWS Provisioned IOPS), then that is a big help. However \n(as you say above) debugging memory and cpu contention (from within the \nguest) is tricky indeed.\n\nAnecdote: concluded VM needed more cpu, so went to 8 to 16 - performance \ngot significantly *worse*. We prevailed on the devops guys (this was \n*not* AWS) to migrate the VM is a less busy host. Everything was fine \nthereafter.\n\nregards\nMark\n\n", "msg_date": "Wed, 14 Feb 2018 16:43:50 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Performance of VM" } ]
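Alongside sar and atop on the guest, PostgreSQL can time its own storage waits, which helps separate "the database is slow" from "the hypervisor's I/O is slow". A hedged sketch using only the stock statistics views (no extensions assumed); note that track_io_timing adds a small timing overhead:

ALTER SYSTEM SET track_io_timing = on;     -- needs superuser; alternatively edit postgresql.conf
SELECT pg_reload_conf();

SELECT datname,
       blks_read,                          -- blocks actually read from the OS/storage layer
       blk_read_time  AS read_wait_ms,     -- cumulative ms spent waiting for reads
       blk_write_time AS write_wait_ms     -- cumulative ms spent waiting for writes
FROM   pg_stat_database
WHERE  datname = current_database();

Sampling this query every few minutes (the counters are cumulative, so record the deltas) gives per-interval read/write latency figures that can be laid next to the hypervisor's own graphs the next time a slowdown is blamed on the application.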
[ { "msg_contents": "Hi community,\n\n \n\nI successfully use PG for a while but I am new to the community.\n\n \n\nI have recently written a number of functions that call each other (one of\nthem is recursive). I attach the code of the top-level (plpgsql) functions\nin the file sql.sql along with the structure of the main table that is used\nin their queries. In subsequent runs of the following query (with exactly\nthe same parameters) all the results are the expected ones:\n\n \n\nSELECT dt.c_create_tree(1::smallint, 13, 1::smallint, 110::smallint,\nARRAY[1,2]::smallint[], false, ARRAY[22,8,26,1]::smallint[], true, 100, 4,\n0.05);\n\n \n\nHowever, before I start the optimizing process (many parts of the code are\nsubject to optimization) I noticed that the performance significantly\ndiffers (from 45'' to 7.5') per run and I can't understand what is the\ntrigger that enforces this behavior since the plans are always the same (but\nnot the Heap Blocks and the buffers). I noticed that when I restart the PG's\nservice sometimes (but not always) the first 1 - 5 runs are a lot faster,\nwhile, once a run lasts long, all subsequent runs last long too. Also, I\nnoticed that applying ANALYSE of even full VACUM to the main table\n(pd.d_sample) does not significantly improve the performance if it is\nalready low.\n\n \n\nThe main table on which the queries run is the pd.d_sample that contains\naround 1.5m rows and the run of the above query updates about 120k of them\n(the same every time it runs).\n\n \n\nThe two attached logs correspond to excerpts of the EXPLAIN logs of two\nsubsequent runs of the above query the one right after the other. They are\npruned because of their size and do not give the total picture, but they\ncover at least one full iteration and one can see the differences in the\nHeap Blocks and the buffers from the first few simple queries.\n\n \n\nMy machine has a Core-i7 processor and runs Windows 10. The PG's version is\n9.6.3 64bit.\n\n \n\nI'd appreciate any help to understand the source of the problem and any\npotential solution.\n\n \n\nThanks in advance,\n\nElias", "msg_date": "Fri, 9 Feb 2018 13:56:38 +0200", "msg_from": "\"Elias Panagiotidis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Same plans different performance?" } ]
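The report above compares EXPLAIN output for the queries run inside the plpgsql functions across fast and slow runs. One way to capture that level of detail (plans, runtimes and buffer counts for the nested statements) reproducibly is the auto_explain contrib module; a hedged sketch, with the function call copied from the message:

LOAD 'auto_explain';                          -- per-session load of the contrib module (superuser)
SET auto_explain.log_min_duration      = 0;   -- log a plan for every statement
SET auto_explain.log_analyze           = on;
SET auto_explain.log_buffers           = on;  -- shared hit/read/dirtied block counts
SET auto_explain.log_nested_statements = on;  -- include statements run inside plpgsql

SELECT dt.c_create_tree(1::smallint, 13, 1::smallint, 110::smallint,
                        ARRAY[1,2]::smallint[], false,
                        ARRAY[22,8,26,1]::smallint[], true, 100, 4, 0.05);

When the plans are identical but the Heap Blocks and buffer figures differ between runs, the difference usually lies in the state of the data and the cache (rows rewritten by the repeated 120k-row updates, hint bits, what happens to sit in shared_buffers) rather than in planning, so comparing the logged buffer counts of a fast and a slow run is the informative part.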
[ { "msg_contents": "Hello,\nI have the following schema:\n    CREATE TABLE users (        id   BIGSERIAL PRIMARY KEY,        name TEXT      NOT NULL UNIQUE    );        CREATE TABLE friends (        user_id        BIGINT NOT NULL REFERENCES users,        friend_user_id BIGINT NOT NULL REFERENCES users,        UNIQUE (user_id, friend_user_id)    );        CREATE TABLE posts (        id      BIGSERIAL PRIMARY KEY,        user_id BIGINT    NOT NULL REFERENCES users,        content TEXT      NOT NULL    );    CREATE INDEX posts_user_id_id_index ON posts(user_id, id);\nEach user can unilaterally follow any number of friends. The posts table has a large number of rows and is rapidly growing.\nMy goal is to retrieve the 10 most recent posts of a user's friends. This query gives the correct result, but is inefficient:\n    SELECT posts.id, users.name, posts.content    FROM posts JOIN users ON posts.user_id = users.id    WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)    ORDER BY posts.id DESC LIMIT 10;\nIf the user's friends have recently posted, the query is still reasonably fast (https://explain.depesz.com/s/6ykR). But if the user's friends haven't recently posted or the user has no friends, it quickly deteriorates (https://explain.depesz.com/s/OnoG).\nIf I match only a single post author (e.g. WHERE posts.user_id = 5), Postgres uses the index posts_user_id_id_index. But if I use IN, the index doesn't appear to be used at all.\nHow can I get these results more efficiently?\nI've uploaded the schema and the queries I've tried to dbfiddle at http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0. The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for me.\nThank you in advance for any insights, pointers or suggestions you are able to give me.\nRegards,Milo\n\n\n\n\n\nHello,I have the following schema:    CREATE TABLE users (        id   BIGSERIAL PRIMARY KEY,        name TEXT      NOT NULL UNIQUE    );        CREATE TABLE friends (        user_id        BIGINT NOT NULL REFERENCES users,        friend_user_id BIGINT NOT NULL REFERENCES users,        UNIQUE (user_id, friend_user_id)    );        CREATE TABLE posts (        id      BIGSERIAL PRIMARY KEY,        user_id BIGINT    NOT NULL REFERENCES users,        content TEXT      NOT NULL    );    CREATE INDEX posts_user_id_id_index ON posts(user_id, id);Each user can unilaterally follow any number of friends. The posts table has a large number of rows and is rapidly growing.My goal is to retrieve the 10 most recent posts of a user's friends. This query gives the correct result, but is inefficient:    SELECT posts.id, users.name, posts.content    FROM posts JOIN users ON posts.user_id = users.id    WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)    ORDER BY posts.id DESC LIMIT 10;If the user's friends have recently posted, the query is still reasonably fast (https://explain.depesz.com/s/6ykR). But if the user's friends haven't recently posted or the user has no friends, it quickly deteriorates (https://explain.depesz.com/s/OnoG).If I match only a single post author (e.g. WHERE posts.user_id = 5), Postgres uses the index posts_user_id_id_index. 
But if I use IN, the index doesn't appear to be used at all.How can I get these results more efficiently?I've uploaded the schema and the queries I've tried to dbfiddle at http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0. The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for me.Thank you in advance for any insights, pointers or suggestions you are able to give me.Regards,Milo", "msg_date": "Tue, 13 Feb 2018 14:28:07 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Efficiently searching for the most recent rows where a column\n matches any result from a different query" }, { "msg_contents": "Hello:\n\n\nEXPLAIN (ANALYZE, BUFFERS)\nselect * from (\nSELECT posts.id, users.name, posts.content\nFROM posts JOIN users ON posts.user_id = users.id\nWHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id =\n1)\n\nORDER BY posts.id DESC\n) as a\nORDER BY a.id DESC\nLIMIT 10;\n\n------\n\n\nEXPLAIN (ANALYZE, BUFFERS)\nselect * from (\nSELECT posts.id, users.name, posts.content\nFROM posts JOIN users ON posts.user_id = users.id\nWHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id =\n2)\n\nORDER BY posts.id DESC\n) as a\nORDER BY a.id DESC\nLIMIT 10;\n\n2018-02-13 8:28 GMT-05:00 <[email protected]>:\n\n> Hello,\n>\n> I have the following schema:\n>\n> CREATE TABLE users (\n> id BIGSERIAL PRIMARY KEY,\n> name TEXT NOT NULL UNIQUE\n> );\n>\n> CREATE TABLE friends (\n> user_id BIGINT NOT NULL REFERENCES users,\n> friend_user_id BIGINT NOT NULL REFERENCES users,\n> UNIQUE (user_id, friend_user_id)\n> );\n>\n> CREATE TABLE posts (\n> id BIGSERIAL PRIMARY KEY,\n> user_id BIGINT NOT NULL REFERENCES users,\n> content TEXT NOT NULL\n> );\n> CREATE INDEX posts_user_id_id_index ON posts(user_id, id);\n>\n> Each user can unilaterally follow any number of friends. The posts table\n> has a large number of rows and is rapidly growing.\n>\n> My goal is to retrieve the 10 most recent posts of a user's friends. This\n> query gives the correct result, but is inefficient:\n>\n> SELECT posts.id, users.name, posts.content\n> FROM posts JOIN users ON posts.user_id = users.id\n> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE\n> user_id = 1)\n> ORDER BY posts.id DESC LIMIT 10;\n>\n> If the user's friends have recently posted, the query is still reasonably\n> fast (https://explain.depesz.com/s/6ykR). But if the user's friends\n> haven't recently posted or the user has no friends, it quickly deteriorates\n> (https://explain.depesz.com/s/OnoG).\n>\n> If I match only a single post author (e.g. WHERE posts.user_id = 5),\n> Postgres uses the index posts_user_id_id_index. But if I use IN, the index\n> doesn't appear to be used at all.\n>\n> How can I get these results more efficiently?\n>\n> I've uploaded the schema and the queries I've tried to dbfiddle at\n> http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=\n> cf1489b7f6d53c3fe0b55ed7ccbad1f0. The output of \"SELECT version()\" is\n> \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10)\n> 4.9.2, 64-bit\" for me.\n>\n> Thank you in advance for any insights, pointers or suggestions you are\n> able to give me.\n>\n> Regards,\n> Milo\n>\n\n\n\n-- \nCordialmente,\n\nIng. Hellmuth I. Vargas S.\nEsp. 
Telemática y Negocios por Internet\nOracle Database 10g Administrator Certified Associate\nEnterpriseDB Certified PostgreSQL 9.3 Associate\n\nHello:\nEXPLAIN (ANALYZE, BUFFERS)select * from ( SELECT posts.id, users.name, posts.content FROM posts JOIN users ON posts.user_id = users.id WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1) ORDER BY posts.id DESC ) as aORDER BY a.id DESC LIMIT 10;\n------EXPLAIN (ANALYZE, BUFFERS)select * from ( SELECT posts.id, users.name, posts.content FROM posts JOIN users ON posts.user_id = users.id WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 2) ORDER BY posts.id DESC ) as aORDER BY a.id DESC LIMIT 10;2018-02-13 8:28 GMT-05:00 <[email protected]>:\n\nHello,I have the following schema:    CREATE TABLE users (        id   BIGSERIAL PRIMARY KEY,        name TEXT      NOT NULL UNIQUE    );        CREATE TABLE friends (        user_id        BIGINT NOT NULL REFERENCES users,        friend_user_id BIGINT NOT NULL REFERENCES users,        UNIQUE (user_id, friend_user_id)    );        CREATE TABLE posts (        id      BIGSERIAL PRIMARY KEY,        user_id BIGINT    NOT NULL REFERENCES users,        content TEXT      NOT NULL    );    CREATE INDEX posts_user_id_id_index ON posts(user_id, id);Each user can unilaterally follow any number of friends. The posts table has a large number of rows and is rapidly growing.My goal is to retrieve the 10 most recent posts of a user's friends. This query gives the correct result, but is inefficient:    SELECT posts.id, users.name, posts.content    FROM posts JOIN users ON posts.user_id = users.id    WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)    ORDER BY posts.id DESC LIMIT 10;If the user's friends have recently posted, the query is still reasonably fast (https://explain.depesz.com/s/6ykR). But if the user's friends haven't recently posted or the user has no friends, it quickly deteriorates (https://explain.depesz.com/s/OnoG).If I match only a single post author (e.g. WHERE posts.user_id = 5), Postgres uses the index posts_user_id_id_index. But if I use IN, the index doesn't appear to be used at all.How can I get these results more efficiently?I've uploaded the schema and the queries I've tried to dbfiddle at http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0. The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for me.Thank you in advance for any insights, pointers or suggestions you are able to give me.Regards,Milo \n-- Cordialmente, Ing. Hellmuth I. Vargas S. Esp. Telemática y Negocios por Internet Oracle Database 10g Administrator Certified AssociateEnterpriseDB Certified PostgreSQL 9.3 Associate", "msg_date": "Tue, 13 Feb 2018 16:13:13 -0500", "msg_from": "Hellmuth Vargas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Efficiently searching for the most recent rows where a column\n matches any result from a different query" }, { "msg_contents": "Hello Hellmuth,\n\nThank you for your response.\nI've uploaded the query plan for the first query (user_id=2) here: https://gist.github.com/anonymous/6d251b277ef71f8977b03cab91fedccdThe query plan for the second query (user_id=1) can be found here: https://gist.github.com/anonymous/32ed485b40cce2651ddc52661f3e7f7b\nJust like in the original queries, posts_user_id_id_index is not used.\nKind regards,Milo\n13. 
Feb 2018 22:13 by [email protected]:\n\n\n> Hello:\n>\n> EXPLAIN (ANALYZE, BUFFERS)> select * from (> \t> SELECT > posts.id> , > users.name> , posts.content> \t> FROM posts JOIN users ON posts.user_id = > users.id> \t> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)\n> \t> ORDER BY > posts.id> DESC > ) as a> ORDER BY > a.id> DESC > LIMIT 10;\n> ------\n>\n> EXPLAIN (ANALYZE, BUFFERS)> select * from (> \t> SELECT > posts.id> , > users.name> , posts.content> \t> FROM posts JOIN users ON posts.user_id = > users.id> \t> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 2)\n> \t> ORDER BY > posts.id> DESC > ) as a> ORDER BY > a.id> DESC > LIMIT 10;\n> 2018-02-13 8:28 GMT-05:00 <> [email protected]> >:\n>\n>> >> Hello,\n>> I have the following schema:\n>>     CREATE TABLE users (>>         id   BIGSERIAL PRIMARY KEY,>>         name TEXT      NOT NULL UNIQUE>>     );>>     >>     CREATE TABLE friends (>>         user_id        BIGINT NOT NULL REFERENCES users,>>         friend_user_id BIGINT NOT NULL REFERENCES users,>>         UNIQUE (user_id, friend_user_id)>>     );>>     >>     CREATE TABLE posts (>>         id      BIGSERIAL PRIMARY KEY,>>         user_id BIGINT    NOT NULL REFERENCES users,>>         content TEXT      NOT NULL>>     );>>     CREATE INDEX posts_user_id_id_index ON posts(user_id, id);\n>> Each user can unilaterally follow any number of friends. The posts table has a large number of rows and is rapidly growing.\n>> My goal is to retrieve the 10 most recent posts of a user's friends. This query gives the correct result, but is inefficient:\n>>     SELECT >> posts.id>> , >> users.name>> , posts.content>>     FROM posts JOIN users ON posts.user_id = >> users.id>>     WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)>>     ORDER BY >> posts.id>> DESC LIMIT 10;\n>> If the user's friends have recently posted, the query is still reasonably fast (>> https://explain.depesz.com/s/6ykR>> ). But if the user's friends haven't recently posted or the user has no friends, it quickly deteriorates (>> https://explain.depesz.com/s/OnoG>> ).\n>> If I match only a single post author (e.g. WHERE posts.user_id = 5), Postgres uses the index posts_user_id_id_index. But if I use IN, the index doesn't appear to be used at all.\n>> How can I get these results more efficiently?\n>> I've uploaded the schema and the queries I've tried to dbfiddle at >> http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0>> . The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for me.\n>> Thank you in advance for any insights, pointers or suggestions you are able to give me.\n>> Regards,>> Milo>> \n>\n>\n>\n> -- \n> Cordialmente, \n>\n> Ing. Hellmuth I. Vargas S. \n> Esp. Telemática y Negocios por Internet > Oracle Database 10g Administrator Certified Associate\n> EnterpriseDB Certified PostgreSQL 9.3 Associate\n>\n\n\n\n\n\nHello Hellmuth,Thank you for your response.I've uploaded the query plan for the first query (user_id=2) here: https://gist.github.com/anonymous/6d251b277ef71f8977b03cab91fedccdThe query plan for the second query (user_id=1) can be found here: https://gist.github.com/anonymous/32ed485b40cce2651ddc52661f3e7f7bJust like in the original queries, posts_user_id_id_index is not used.Kind regards,Milo13. 
Feb 2018 22:13 by [email protected]:Hello:\nEXPLAIN (ANALYZE, BUFFERS)select * from ( SELECT posts.id, users.name, posts.content FROM posts JOIN users ON posts.user_id = users.id WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1) ORDER BY posts.id DESC ) as aORDER BY a.id DESC LIMIT 10;\n------EXPLAIN (ANALYZE, BUFFERS)select * from ( SELECT posts.id, users.name, posts.content FROM posts JOIN users ON posts.user_id = users.id WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 2) ORDER BY posts.id DESC ) as aORDER BY a.id DESC LIMIT 10;2018-02-13 8:28 GMT-05:00 <[email protected]>:\n\nHello,I have the following schema:    CREATE TABLE users (        id   BIGSERIAL PRIMARY KEY,        name TEXT      NOT NULL UNIQUE    );        CREATE TABLE friends (        user_id        BIGINT NOT NULL REFERENCES users,        friend_user_id BIGINT NOT NULL REFERENCES users,        UNIQUE (user_id, friend_user_id)    );        CREATE TABLE posts (        id      BIGSERIAL PRIMARY KEY,        user_id BIGINT    NOT NULL REFERENCES users,        content TEXT      NOT NULL    );    CREATE INDEX posts_user_id_id_index ON posts(user_id, id);Each user can unilaterally follow any number of friends. The posts table has a large number of rows and is rapidly growing.My goal is to retrieve the 10 most recent posts of a user's friends. This query gives the correct result, but is inefficient:    SELECT posts.id, users.name, posts.content    FROM posts JOIN users ON posts.user_id = users.id    WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id = 1)    ORDER BY posts.id DESC LIMIT 10;If the user's friends have recently posted, the query is still reasonably fast (https://explain.depesz.com/s/6ykR). But if the user's friends haven't recently posted or the user has no friends, it quickly deteriorates (https://explain.depesz.com/s/OnoG).If I match only a single post author (e.g. WHERE posts.user_id = 5), Postgres uses the index posts_user_id_id_index. But if I use IN, the index doesn't appear to be used at all.How can I get these results more efficiently?I've uploaded the schema and the queries I've tried to dbfiddle at http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0. The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for me.Thank you in advance for any insights, pointers or suggestions you are able to give me.Regards,Milo \n-- Cordialmente, Ing. Hellmuth I. Vargas S. Esp. Telemática y Negocios por Internet Oracle Database 10g Administrator Certified AssociateEnterpriseDB Certified PostgreSQL 9.3 Associate", "msg_date": "Thu, 15 Feb 2018 13:18:00 +0100 (CET)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Efficiently searching for the most recent rows where a column\n matches any result from a different query" }, { "msg_contents": "Hi,\n\nI myself am new to performance tuning queries. 
But, from what you have\nsaid it looks like Postgres has to go through all the posts using the\nbackward index scan and find out whether their author is amongst the\nuser's friends list.\n\nSince the number of friends is arbitrary for any user, even if a user\nhas few friends (or no friends at all), the stats will not reflect\nthis and so the planner cannot take advantage of this to directly\nfetch the posts from the small set of friends.\n\nMy suggestion (which involves changing the schema and query) is to\nhave a last_post_id or last_posted_time column in user table, find the\nlast 10 friends who have posted first and then use it to find the last\n10 posts. Something like,\n\nselect * from posts where posts.author_id in (select id from users\nwhere id in (select friend_id from user_friend where user_id = 1) and\nlast_posted_time is not null order by last_posted_time desc limit 10);\n\nI am not sure if this is the best way to solve this. If there are\nbetter solutions I would be happy to learn the same.\n\nRegards\nNanda\n\nOn Thu, Feb 15, 2018 at 5:48 PM, <[email protected]> wrote:\n>\n> Hello Hellmuth,\n>\n> Thank you for your response.\n>\n> I've uploaded the query plan for the first query (user_id=2) here:\n> https://gist.github.com/anonymous/6d251b277ef71f8977b03cab91fedccd\n> The query plan for the second query (user_id=1) can be found here:\n> https://gist.github.com/anonymous/32ed485b40cce2651ddc52661f3e7f7b\n>\n> Just like in the original queries, posts_user_id_id_index is not used.\n>\n> Kind regards,\n> Milo\n>\n> 13. Feb 2018 22:13 by [email protected]:\n>\n> Hello:\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS)\n> select * from (\n> SELECT posts.id, users.name, posts.content\n> FROM posts JOIN users ON posts.user_id = users.id\n> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id =\n> 1)\n>\n> ORDER BY posts.id DESC\n> ) as a\n> ORDER BY a.id DESC\n> LIMIT 10;\n>\n> ------\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS)\n> select * from (\n> SELECT posts.id, users.name, posts.content\n> FROM posts JOIN users ON posts.user_id = users.id\n> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE user_id =\n> 2)\n>\n> ORDER BY posts.id DESC\n> ) as a\n> ORDER BY a.id DESC\n> LIMIT 10;\n>\n> 2018-02-13 8:28 GMT-05:00 <[email protected]>:\n>>\n>> Hello,\n>>\n>> I have the following schema:\n>>\n>> CREATE TABLE users (\n>> id BIGSERIAL PRIMARY KEY,\n>> name TEXT NOT NULL UNIQUE\n>> );\n>>\n>> CREATE TABLE friends (\n>> user_id BIGINT NOT NULL REFERENCES users,\n>> friend_user_id BIGINT NOT NULL REFERENCES users,\n>> UNIQUE (user_id, friend_user_id)\n>> );\n>>\n>> CREATE TABLE posts (\n>> id BIGSERIAL PRIMARY KEY,\n>> user_id BIGINT NOT NULL REFERENCES users,\n>> content TEXT NOT NULL\n>> );\n>> CREATE INDEX posts_user_id_id_index ON posts(user_id, id);\n>>\n>> Each user can unilaterally follow any number of friends. The posts table\n>> has a large number of rows and is rapidly growing.\n>>\n>> My goal is to retrieve the 10 most recent posts of a user's friends. This\n>> query gives the correct result, but is inefficient:\n>>\n>> SELECT posts.id, users.name, posts.content\n>> FROM posts JOIN users ON posts.user_id = users.id\n>> WHERE posts.user_id IN (SELECT friend_user_id FROM friends WHERE\n>> user_id = 1)\n>> ORDER BY posts.id DESC LIMIT 10;\n>>\n>> If the user's friends have recently posted, the query is still reasonably\n>> fast (https://explain.depesz.com/s/6ykR). 
But if the user's friends haven't\n>> recently posted or the user has no friends, it quickly deteriorates\n>> (https://explain.depesz.com/s/OnoG).\n>>\n>> If I match only a single post author (e.g. WHERE posts.user_id = 5),\n>> Postgres uses the index posts_user_id_id_index. But if I use IN, the index\n>> doesn't appear to be used at all.\n>>\n>> How can I get these results more efficiently?\n>>\n>> I've uploaded the schema and the queries I've tried to dbfiddle at\n>> http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=cf1489b7f6d53c3fe0b55ed7ccbad1f0.\n>> The output of \"SELECT version()\" is \"PostgreSQL 9.6.5 on\n>> x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\" for\n>> me.\n>>\n>> Thank you in advance for any insights, pointers or suggestions you are\n>> able to give me.\n>>\n>> Regards,\n>> Milo\n>\n>\n>\n>\n> --\n> Cordialmente,\n>\n> Ing. Hellmuth I. Vargas S.\n> Esp. Telemática y Negocios por Internet\n> Oracle Database 10g Administrator Certified Associate\n> EnterpriseDB Certified PostgreSQL 9.3 Associate\n>\n\n", "msg_date": "Mon, 19 Feb 2018 15:34:44 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Efficiently searching for the most recent rows where a column\n matches any result from a different query" }, { "msg_contents": "Hi,\n\nCorrection in the query. I missed to add limit 10 in the outer most query..\n\n> select * from posts where posts.author_id in (select id from users\n> where id in (select friend_id from user_friend where user_id = 1) and\n> last_posted_time is not null order by last_posted_time desc limit 10);\n>\n\nselect * from posts where posts.author_id in (select id from users\nwhere id in (select friend_id from user_friend where user_id = 1) and\nlast_posted_time is not null order by last_posted_time desc limit 10)\norder by post_id desc limit 10;\n\nRegards,\nNanda\n\n", "msg_date": "Mon, 19 Feb 2018 15:40:37 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Efficiently searching for the most recent rows where a column\n matches any result from a different query" } ]
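A further rewrite that is often used for this kind of "newest N posts across a set of authors" problem -- it was not proposed in the thread, so treat it as an untested sketch against the schema from the first message -- is a LATERAL join. It lets each friend's newest posts be fetched by a short backward walk of posts_user_id_id_index, after which only the small per-friend result sets have to be merged:

    SELECT p.id, u.name, p.content
    FROM friends f
    JOIN LATERAL (
        SELECT posts.id, posts.user_id, posts.content
        FROM posts
        WHERE posts.user_id = f.friend_user_id
        ORDER BY posts.id DESC
        LIMIT 10
    ) p ON true
    JOIN users u ON u.id = p.user_id
    WHERE f.user_id = 1
    ORDER BY p.id DESC
    LIMIT 10;

Because only ten rows are needed overall, taking at most ten rows per friend before the final sort is sufficient, and the cost stays proportional to the number of friends rather than to how far back their newest posts happen to be.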
[ { "msg_contents": "Hi,\nI have installed pgpool 2 version 3.7.0 . I'm trying to configure log\nrotation on the pgpool.log but It seems that something wrong. I configured\nin logrotate conf file the parameters :\n\n/PostgreSQL/pgpool/log/pgpool.log {\n\n daily\n\n dateext\n\n missingok\n\n compress\n\n notifempty\n\n maxage 7\n\n maxsize 21118320640\n\n rotate 7\n\n create 644 postgres postgres\n\n postrotate\n\n su - postgres -c \"~/pgpool/bin/pgpool reload\"\n\n endscript\n\n}\n\nAfter the first rotation, an archive is generated but the pool stops\nwriting to the original log. Any idea what can be the reason ?\n\nHi,I have installed pgpool 2 version 3.7.0 . I'm trying to configure log rotation on the pgpool.log but It seems that something wrong. I configured in logrotate conf file the parameters : \n/PostgreSQL/pgpool/log/pgpool.log {\n        daily\n        dateext\n        missingok\n        compress\n        notifempty\n        maxage 7\n        maxsize\n21118320640\n               \nrotate 7\n               \ncreate 644 postgres postgres\n                postrotate\n        su - postgres -c\n\"~/pgpool/bin/pgpool reload\"\n                endscript\n}\nAfter the first rotation, an archive is generated but the pool stops writing to the original log. Any idea what can be the reason ?", "msg_date": "Sun, 18 Feb 2018 17:19:08 +0200", "msg_from": "Mariel Cherkassky <[email protected]>", "msg_from_op": true, "msg_subject": "pgpool 2 rotate logs" }, { "msg_contents": "> Hi,\n> I have installed pgpool 2 version 3.7.0 . I'm trying to configure log\n> rotation on the pgpool.log but It seems that something wrong. I configured\n> in logrotate conf file the parameters :\n\nThis is not an appropriate mailing because your question is nothing\nrelated to PostgreSQL nor PostgreSQL performance. So please go to\nappropriate mailing list (for example pgpool-general\nhttps://www.pgpool.net/mailman/listinfo/pgpool-general).\n\nBTW a short answer to your question is, pgpool does not the close log\nfile upon receiving SIGHUP (issued by reload). Please make more\nquestions if you have on the pgpool mailing list.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n", "msg_date": "Mon, 19 Feb 2018 10:03:39 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgpool 2 rotate logs" } ]
[ { "msg_contents": "Some of my data processes use large quantities of temp space - 5 or 6T\nanyway.\n\nWe are running in Google Cloud. In order to get the best performance out\nof all of my queries that might need temp space, I've configured temp space\non a concatenated local (volatile) SSD volume. In GCE, local SSD's are\nmore than 20x faster than SAN SSD's in GCE.\n\nside note: The disadvantage of local SSD is that it won't survive \"hitting\nthe virtual power button\" on an instance, nor can it migrate automatically\nto other hardware. (We have to hit the power button to add memory/cpu to\nthe system, and sometimes the power button might get hit by accident.)\nThis is OK for temp space. I never have my database come up automatically\non boot, and I have scripted the entire setup of the temp space volume and\ndata structures. I can run that script before starting the database.\n I've done some tests and it seems to work great. I don't mind rolling\nback any transaction that might be in play during a power failure.\n\nSo here is the problem: The largest local SSD configuration I can get in\nGCE is 3T. Since I have processes that sometimes use more than that, I've\nconfigured a second temp space volume on regular SAN SSD. My hope was\nthat if a query ran out of temp space on one volume it would spill over\nonto the other volume. Unfortunately it doesn't appear to do that\nautomatically. When it hits the 3T limit on the one volume, the query\nfails. :-(\n\nSo, the obvious solution is to anticipate which processes will need more\nthan 3T temp space and then 'set temp_tablespaces' to not use the 3T\nvolume. And that is what we'll try next.\n\nMeanwhile, I'd like other processes to \"prefer\" the fast volume over the\nslow one when the space is available. Ideally I'd like to always use the\nfast volume and have the planner know about the different performance\ncharacteristics and capacity of the available temp space volumes and then\nchoose the best one (speed or size) depending on the query's needs.\n\nI was wondering if there anyone had ideas for how to make that possible.\n I don't think I want to add the SAN disk to the same LVM volume group as\nthe local disk, but maybe that would work, since I'm already building it\nwith a script anyhow ... Is LVM smart enough to optimize radically\ndifferent disk performances?\n\nAt the moment it seems like when multiple temp spaces are available, the\ntemp spaces are chosen in a 'round robin' or perhaps 'random' fashion. Is\nthat true?\n\nI'm meeting with my GCE account rep next week to see if there is any way to\nget more than 3T of local SSD, but I'm skeptical it will be available any\ntime soon.\n\nthoughts?\n\nSome of my data processes use large quantities of temp space - 5 or 6T anyway.We are running in Google Cloud.  In order to get the best performance out of all of my queries that might need temp space, I've configured temp space on a concatenated local (volatile) SSD volume.  In GCE, local SSD's are more than 20x faster than SAN SSD's in GCE.side note:  The disadvantage of local SSD is that it won't survive \"hitting the virtual power button\" on an instance, nor can it migrate automatically to other hardware.  (We have to hit the power button to add memory/cpu to the system, and sometimes the power button might get hit by accident.)  This is OK for temp space.  I never have my database come up automatically on boot, and I have scripted the entire setup of the temp space volume and data structures.  
I can run that script before starting the database.   I've done some tests and it seems to work great.  I don't mind rolling back any transaction that might be in play during a power failure.So here is the problem:   The largest local SSD configuration I can get in GCE is 3T.  Since I have processes that sometimes use more than that, I've configured a second temp space volume on regular SAN SSD.   My hope was that if a query ran out of temp space on one volume it would spill over onto the other volume.  Unfortunately it doesn't appear to do that automatically.  When it hits the 3T limit on the one volume, the query fails.  :-(So, the obvious solution is to anticipate which processes will need more than 3T temp space and then 'set temp_tablespaces' to not use the 3T volume.  And that is what we'll try next.Meanwhile, I'd like other processes to \"prefer\" the fast volume over the slow one when the space is available.  Ideally I'd like to always use the fast volume and have the planner know about the different performance characteristics and capacity of the available temp space volumes and then choose the best one (speed or size) depending on the query's needs.  I was wondering if there anyone had ideas for how to make that possible.   I don't think I want to add the SAN disk to the same LVM volume group as the local disk, but maybe that would work, since I'm already building it with a script anyhow ... Is LVM smart enough to optimize radically different disk performances?At the moment it seems like when multiple temp spaces are available, the temp spaces are chosen in a 'round robin' or perhaps 'random' fashion.  Is that true?I'm meeting with my GCE account rep next week to see if there is any way to get more than 3T of local SSD, but I'm skeptical it will be available any time soon.thoughts?", "msg_date": "Wed, 21 Feb 2018 10:53:18 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "blending fast and temp space volumes" }, { "msg_contents": "Rick Otten <[email protected]> writes:\n> At the moment it seems like when multiple temp spaces are available, the\n> temp spaces are chosen in a 'round robin' or perhaps 'random' fashion. Is\n> that true?\n\nYes, see fd.c's SetTempTablespaces and GetNextTempTableSpace.\nThere's no concept of different temp spaces having different performance\ncharacteristics, and anyway we don't really have enough info to make\naccurate predictions of temp space consumption. So it's hard to see the\nplanner doing this for you automagically.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 21 Feb 2018 11:04:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" }, { "msg_contents": "On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]>\nwrote:\n\n> Some of my data processes use large quantities of temp space - 5 or 6T\n> anyway.\n>\n> We are running in Google Cloud. In order to get the best performance out\n> of all of my queries that might need temp space, I've configured temp space\n> on a concatenated local (volatile) SSD volume. In GCE, local SSD's are\n> more than 20x faster than SAN SSD's in GCE.\n>\n> side note: The disadvantage of local SSD is that it won't survive\n> \"hitting the virtual power button\" on an instance, nor can it migrate\n> automatically to other hardware. (We have to hit the power button to add\n> memory/cpu to the system, and sometimes the power button might get hit by\n> accident.) This is OK for temp space. 
I never have my database come up\n> automatically on boot, and I have scripted the entire setup of the temp\n> space volume and data structures. I can run that script before starting\n> the database. I've done some tests and it seems to work great. I don't\n> mind rolling back any transaction that might be in play during a power\n> failure.\n>\n> So here is the problem: The largest local SSD configuration I can get in\n> GCE is 3T. Since I have processes that sometimes use more than that, I've\n> configured a second temp space volume on regular SAN SSD. My hope was\n> that if a query ran out of temp space on one volume it would spill over\n> onto the other volume. Unfortunately it doesn't appear to do that\n> automatically. When it hits the 3T limit on the one volume, the query\n> fails. :-(\n>\n> So, the obvious solution is to anticipate which processes will need more\n> than 3T temp space and then 'set temp_tablespaces' to not use the 3T\n> volume. And that is what we'll try next.\n>\n> Meanwhile, I'd like other processes to \"prefer\" the fast volume over the\n> slow one when the space is available. Ideally I'd like to always use the\n> fast volume and have the planner know about the different performance\n> characteristics and capacity of the available temp space volumes and then\n> choose the best one (speed or size) depending on the query's needs.\n>\n> I was wondering if there anyone had ideas for how to make that possible.\n> I don't think I want to add the SAN disk to the same LVM volume group as\n> the local disk, but maybe that would work, since I'm already building it\n> with a script anyhow ... Is LVM smart enough to optimize radically\n> different disk performances?\n>\n\nCouldn't you configure both devices into a single 6T device via RAID0 using\nmd?\n\nCraig\n\n\n>\n> At the moment it seems like when multiple temp spaces are available, the\n> temp spaces are chosen in a 'round robin' or perhaps 'random' fashion. Is\n> that true?\n>\n> I'm meeting with my GCE account rep next week to see if there is any way\n> to get more than 3T of local SSD, but I'm skeptical it will be available\n> any time soon.\n>\n> thoughts?\n>\n>\n>\n>\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]> wrote:Some of my data processes use large quantities of temp space - 5 or 6T anyway.We are running in Google Cloud.  In order to get the best performance out of all of my queries that might need temp space, I've configured temp space on a concatenated local (volatile) SSD volume.  In GCE, local SSD's are more than 20x faster than SAN SSD's in GCE.side note:  The disadvantage of local SSD is that it won't survive \"hitting the virtual power button\" on an instance, nor can it migrate automatically to other hardware.  (We have to hit the power button to add memory/cpu to the system, and sometimes the power button might get hit by accident.)  This is OK for temp space.  I never have my database come up automatically on boot, and I have scripted the entire setup of the temp space volume and data structures.  I can run that script before starting the database.   I've done some tests and it seems to work great.  I don't mind rolling back any transaction that might be in play during a power failure.So here is the problem:   The largest local SSD configuration I can get in GCE is 3T.  
Since I have processes that sometimes use more than that, I've configured a second temp space volume on regular SAN SSD.   My hope was that if a query ran out of temp space on one volume it would spill over onto the other volume.  Unfortunately it doesn't appear to do that automatically.  When it hits the 3T limit on the one volume, the query fails.  :-(So, the obvious solution is to anticipate which processes will need more than 3T temp space and then 'set temp_tablespaces' to not use the 3T volume.  And that is what we'll try next.Meanwhile, I'd like other processes to \"prefer\" the fast volume over the slow one when the space is available.  Ideally I'd like to always use the fast volume and have the planner know about the different performance characteristics and capacity of the available temp space volumes and then choose the best one (speed or size) depending on the query's needs.  I was wondering if there anyone had ideas for how to make that possible.   I don't think I want to add the SAN disk to the same LVM volume group as the local disk, but maybe that would work, since I'm already building it with a script anyhow ... Is LVM smart enough to optimize radically different disk performances?Couldn't you configure both devices into a single 6T device via RAID0 using md?Craig At the moment it seems like when multiple temp spaces are available, the temp spaces are chosen in a 'round robin' or perhaps 'random' fashion.  Is that true?I'm meeting with my GCE account rep next week to see if there is any way to get more than 3T of local SSD, but I'm skeptical it will be available any time soon.thoughts?\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Wed, 21 Feb 2018 11:22:51 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" }, { "msg_contents": "On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]> wrote:\n> side note: The disadvantage of local SSD is that it won't survive \"hitting\n> the virtual power button\" on an instance, nor can it migrate automatically\n> to other hardware. (We have to hit the power button to add memory/cpu to\n> the system, and sometimes the power button might get hit by accident.) This\n> is OK for temp space. I never have my database come up automatically on\n> boot, and I have scripted the entire setup of the temp space volume and data\n> structures. I can run that script before starting the database. I've done\n> some tests and it seems to work great. I don't mind rolling back any\n> transaction that might be in play during a power failure.\n\nIt sounds like you're treating a temp_tablespaces tablespace as\nephemeral, which IIRC can have problems that an ephemeral\nstats_temp_directory does not have.\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Wed, 21 Feb 2018 11:50:03 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" }, { "msg_contents": "On Wed, Feb 21, 2018 at 4:50 PM, Peter Geoghegan <[email protected]> wrote:\n> On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]> wrote:\n>> side note: The disadvantage of local SSD is that it won't survive \"hitting\n>> the virtual power button\" on an instance, nor can it migrate automatically\n>> to other hardware. 
(We have to hit the power button to add memory/cpu to\n>> the system, and sometimes the power button might get hit by accident.) This\n>> is OK for temp space. I never have my database come up automatically on\n>> boot, and I have scripted the entire setup of the temp space volume and data\n>> structures. I can run that script before starting the database. I've done\n>> some tests and it seems to work great. I don't mind rolling back any\n>> transaction that might be in play during a power failure.\n>\n> It sounds like you're treating a temp_tablespaces tablespace as\n> ephemeral, which IIRC can have problems that an ephemeral\n> stats_temp_directory does not have.\n\nFor instance?\n\nI've been doing that for years without issue. If you're careful to\nrestore the skeleton directory structure at server boot up, I haven't\nhad any issues.\n\n\n\nOn Wed, Feb 21, 2018 at 4:22 PM, Craig James <[email protected]> wrote:\n>\n> On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]>\n>> I was wondering if there anyone had ideas for how to make that possible.\n>> I don't think I want to add the SAN disk to the same LVM volume group as the\n>> local disk, but maybe that would work, since I'm already building it with a\n>> script anyhow ... Is LVM smart enough to optimize radically different disk\n>> performances?\n>\n>\n> Couldn't you configure both devices into a single 6T device via RAID0 using\n> md?\n\nThat would probably perform as slow as the slowest disk.\n\n", "msg_date": "Wed, 21 Feb 2018 17:07:12 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" }, { "msg_contents": "On Wed, Feb 21, 2018 at 12:07 PM, Claudio Freire <[email protected]> wrote:\n> On Wed, Feb 21, 2018 at 4:50 PM, Peter Geoghegan <[email protected]> wrote:\n>> On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]> wrote:\n>>> side note: The disadvantage of local SSD is that it won't survive \"hitting\n>>> the virtual power button\" on an instance, nor can it migrate automatically\n>>> to other hardware. (We have to hit the power button to add memory/cpu to\n>>> the system, and sometimes the power button might get hit by accident.) This\n>>> is OK for temp space. I never have my database come up automatically on\n>>> boot, and I have scripted the entire setup of the temp space volume and data\n>>> structures. I can run that script before starting the database. I've done\n>>> some tests and it seems to work great. I don't mind rolling back any\n>>> transaction that might be in play during a power failure.\n>>\n>> It sounds like you're treating a temp_tablespaces tablespace as\n>> ephemeral, which IIRC can have problems that an ephemeral\n>> stats_temp_directory does not have.\n>\n> For instance?\n>\n> I've been doing that for years without issue. If you're careful to\n> restore the skeleton directory structure at server boot up, I haven't\n> had any issues.\n\nThen you clearly know what I mean already. 
That's not documented as\neither required or safe anywhere.\n\n-- \nPeter Geoghegan\n\n", "msg_date": "Wed, 21 Feb 2018 12:09:04 -0800", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" }, { "msg_contents": "On Wed, Feb 21, 2018 at 5:09 PM, Peter Geoghegan <[email protected]> wrote:\n> On Wed, Feb 21, 2018 at 12:07 PM, Claudio Freire <[email protected]> wrote:\n>> On Wed, Feb 21, 2018 at 4:50 PM, Peter Geoghegan <[email protected]> wrote:\n>>> On Wed, Feb 21, 2018 at 7:53 AM, Rick Otten <[email protected]> wrote:\n>>>> side note: The disadvantage of local SSD is that it won't survive \"hitting\n>>>> the virtual power button\" on an instance, nor can it migrate automatically\n>>>> to other hardware. (We have to hit the power button to add memory/cpu to\n>>>> the system, and sometimes the power button might get hit by accident.) This\n>>>> is OK for temp space. I never have my database come up automatically on\n>>>> boot, and I have scripted the entire setup of the temp space volume and data\n>>>> structures. I can run that script before starting the database. I've done\n>>>> some tests and it seems to work great. I don't mind rolling back any\n>>>> transaction that might be in play during a power failure.\n>>>\n>>> It sounds like you're treating a temp_tablespaces tablespace as\n>>> ephemeral, which IIRC can have problems that an ephemeral\n>>> stats_temp_directory does not have.\n>>\n>> For instance?\n>>\n>> I've been doing that for years without issue. If you're careful to\n>> restore the skeleton directory structure at server boot up, I haven't\n>> had any issues.\n>\n> Then you clearly know what I mean already. That's not documented as\n> either required or safe anywhere.\n\nAh, ok.\n\nBut the OP did mention he was doing that already. So it should be safe.\n\n", "msg_date": "Thu, 22 Feb 2018 08:30:00 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: blending fast and temp space volumes" } ]
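To make the per-session routing discussed in this thread concrete, the override for a job that is expected to spill past the fast volume looks roughly like this; the tablespace names and LOCATION paths are placeholders, not taken from the thread:

    -- one temp tablespace per volume (run once as superuser, directories owned by postgres)
    CREATE TABLESPACE temp_local_ssd LOCATION '/mnt/local_ssd/pgtemp';
    CREATE TABLESPACE temp_san_ssd   LOCATION '/mnt/san_ssd/pgtemp';

    -- cluster-wide default: prefer the fast volume (postgresql.conf)
    -- temp_tablespaces = 'temp_local_ssd'

    -- for a session known to need more than the fast volume can hold:
    SET temp_tablespaces = 'temp_san_ssd';
    -- ... run the large query ...
    RESET temp_tablespaces;

As the thread notes, when temp_tablespaces lists several tablespaces, temp files are spread across them rather than filling one before the other, so listing both volumes does not give a "fast one first" behavior.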
[ { "msg_contents": "I have issue that update queries is slow, I need some advice how improve\nspeed. I don't have much control to change queries. But I can change\npostresql server configuration\n\nquery example:\n\nUPDATE \"project_work\" SET \"left\" = (\"project_work\".\"left\" + 2) WHERE\n(\"project_work\".\"left\" >= 8366)\n\nsometimes updated lines count is up to 10k\n\npostgresql version 9.3\n\npostgresl.conf\nmax_connections = 100\nshared_buffers = 6GB # min 128kB\nwork_mem = 100MB # min 64kB\n\nall other values are default\n\nserver hardware\nIntel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz\n16GB RAM\ndisk is HDD\n\nabout half of resource I can dedicate for postgresql server.\n\n I have issue that update queries is slow, I need some advice how improve speed. I don't have much control to change queries. But I can change postresql server configurationquery example:UPDATE \"project_work\" SET \"left\" = (\"project_work\".\"left\" + 2) WHERE (\"project_work\".\"left\" >= 8366)sometimes updated lines count is up to 10kpostgresql version 9.3postgresl.confmax_connections = 100 shared_buffers = 6GB # min 128kBwork_mem = 100MB # min 64kBall other values are defaultserver hardwareIntel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz16GB RAMdisk is HDDabout half of resource I can dedicate for postgresql server.", "msg_date": "Fri, 23 Feb 2018 16:42:48 +0200", "msg_from": "=?UTF-8?B?RGFyaXVzIFDEl8W+YQ==?= <[email protected]>", "msg_from_op": true, "msg_subject": "need advice to tune postgresql" }, { "msg_contents": "What caught my eye is the update count can be up to 10K. That means if \nautovacuum is not keeping up with this table, bloat may be increasing at \na high pace leading to more page I/O which causes degraded performance. \nIf the table has become bloated, you need to do a blocking VACUUM FULL \non it or a non-blocking VACUUM using pg_repack. Then tune autovacuum so \nthat it can keep up with the updates to this table or add manual vacuum \nanalyze on this table at certain times via a cron job. Manual vacuums \n(user-initiated) will not be bumped as with autovacuums that can be \nbumped due to user priority.\n\nRegards,\nMichael Vitale\n\n\n> Darius Pėža <mailto:[email protected]>\n> Friday, February 23, 2018 9:42 AM\n> I have issue that update queries is slow, I need some advice how \n> improve speed. I don't have much control to change queries. But I can \n> change postresql server configuration\n>\n> query example:\n>\n> UPDATE \"project_work\" SET \"left\" = (\"project_work\".\"left\" + 2) WHERE \n> (\"project_work\".\"left\" >= 8366)\n>\n> sometimes updated lines count is up to 10k\n>\n> postgresql version 9.3\n>\n> postgresl.conf\n> max_connections = 100\n> shared_buffers = 6GB# min 128kB\n> work_mem = 100MB# min 64kB\n>\n> all other values are default\n>\n> server hardware\n> Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz\n> 16GB RAM\n> disk is HDD\n>\n> about half of resource I can dedicate for postgresql server.\n>\n\n\n\n\nWhat caught my eye is the \nupdate count can be up to 10K.  That means if autovacuum is not keeping \nup with this table, bloat may be increasing at a high pace leading to \nmore page I/O which causes degraded performance.  If the table has \nbecome bloated, you need to do a blocking VACUUM FULL on it or a \nnon-blocking VACUUM using pg_repack.  Then tune autovacuum so that it \ncan keep up with the updates to this table or add manual vacuum analyze \non this table at certain times via a cron job. 
Manual vacuums \n(user-initiated) will not be bumped as with autovacuums that can be \nbumped due to user priority.\n\nRegards,\nMichael Vitale\n\n\n\n\n \nDarius Pėža Friday,\n February 23, 2018 9:42 AM \n I\n have issue that update queries is slow, I need some advice how improve \nspeed. I don't have much control to change queries. But I can change \npostresql server configurationquery example:UPDATE \n\"project_work\" SET \"left\" = (\"project_work\".\"left\" + 2) WHERE \n(\"project_work\".\"left\" >= 8366)sometimes updated lines count \nis up to 10kpostgresql version 9.3postgresl.confmax_connections\n = 100 shared_buffers\n = 6GB # min 128kBwork_mem\n = 100MB # min 64kBall\n other values are defaultserver\n hardwareIntel(R)\n Xeon(R) CPU E5-2637 v4 @ 3.50GHz16GB\n RAMdisk\n is HDDabout\n half of resource I can dedicate for postgresql server.", "msg_date": "Fri, 23 Feb 2018 10:03:12 -0500", "msg_from": "MichaelDBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need advice to tune postgresql" }, { "msg_contents": "Darius Pėža wrote:\n> I have issue that update queries is slow, I need some advice how improve speed. I don't have much control to change queries. But I can change postresql server configuration\n> \n> query example:\n> \n> UPDATE \"project_work\" SET \"left\" = (\"project_work\".\"left\" + 2) WHERE (\"project_work\".\"left\" >= 8366)\n> \n> sometimes updated lines count is up to 10k\n> \n> postgresql version 9.3\n> \n> postgresl.conf\n> max_connections = 100\t\t\t\n> shared_buffers = 6GB\t\t\t# min 128kB\n> work_mem = 100MB\t\t\t\t# min 64kB\n> \n> all other values are default\n> \n> server hardware\n> Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz\n> 16GB RAM\n> disk is HDD\n> \n> about half of resource I can dedicate for postgresql server.\n\nIf the number of updated lines is that big, you should try to\nget HOT updates as much as possible.\n\nFor that, make sure that there is *no* index on the column,\nand that the fillfactor for the table is suitably low (perhaps 50).\n\nDuring a HOT update, when the new row version fits into the same\npage as the old one, the indexes don't have to be updated.\nThat will speed up the UPDATE considerably.\n\nOn the other hand, an UPDATE like yours would then always use a\nsequential scan, but that may still be a net win.\n\nOther than that, setting checkpoint_segments high enough that\nyou don't get too many checkpoints can help.\n\nOf course, more RAM and fast storage are always good.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Fri, 23 Feb 2018 16:26:55 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need advice to tune postgresql" } ]
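A rough sketch of the HOT-oriented changes suggested above, using the project_work table from the question; the fillfactor value of 50 is the example from the reply, and the table rewrite is only needed once so that existing pages gain the free space:

    -- leave room in each page so updated rows can stay on the same page (HOT)
    ALTER TABLE project_work SET (fillfactor = 50);

    -- rewrite the table so existing pages respect the new fillfactor
    -- (takes an exclusive lock; pg_repack is the non-blocking alternative)
    VACUUM FULL project_work;

    -- HOT also requires that the updated column "left" is not indexed,
    -- so drop any index that includes it if one exists

For the checkpoint side on 9.3, raising checkpoint_segments (for example to 32 or 64) in postgresql.conf reduces the number of checkpoints forced during the large UPDATE; the exact value is workload-dependent.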
[ { "msg_contents": " Hello experts,\n \n We have the following requirements in single query or any proper solution. Please help on this.\nHow many sessions are currently opened.\n -and if opened then how many queries have executed on that session.\n -and also we have to trace how much time each query is taking.\n -and also we have to find the cost of each query.\n \n Regards,\n Daulat\n \n\n", "msg_date": "Fri, 23 Feb 2018 19:29:57 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "Performance" }, { "msg_contents": "\n\nAm 23.02.2018 um 20:29 schrieb Daulat Ram:\n> We have the following requirements in single query or any proper solution. Please help on this.\n> How many sessions are currently opened.\nask pg_stat_activity, via select * from pg_stat_activity\n\n\n> -and if opened then how many queries have executed on that session.\n\nWhot? There isn't a counter for that, AFAIK.\n\n\n> -and also we have to trace how much time each query is taking.\n\nYou can use auto_explain for that\n\n> -and also we have to find the cost of each query.\n\nthe same, auto_explain\n\nplease keep in mind: costs are imaginary.\n\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Fri, 23 Feb 2018 22:20:15 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "\nLe 23/02/2018 à 22:20, Andreas Kretschmer a écrit :\n>\n>\n> Am 23.02.2018 um 20:29 schrieb Daulat Ram:\n>> We have the following requirements in single query or any proper \n>> solution. Please help on this.\n>> How many sessions are currently opened.\n> ask pg_stat_activity, via select * from pg_stat_activity\n>\n>\n>> -and if opened then how many queries have executed on that session.\n>\n> Whot? There isn't a counter for that, AFAIK.\n>\n>\n>> -and also we have to trace how much time each query is taking.\n>\n> You can use auto_explain for that\n>\n>> -and also we have to find the cost of each query.\n>\n> the same, auto_explain\n>\n> please keep in mind: costs are imaginary.\n>\nYou can also have a look at PoWA : https://github.com/dalibo/powa\n>\n>\n> Regards, Andreas\n>\n\n\n", "msg_date": "Sun, 25 Feb 2018 09:08:58 +0100", "msg_from": "phb07 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance" } ]
[ { "msg_contents": "Hello team,\n\nI need help how & what we can monitor the Postgres database via Nagios.\n\nI came to know about the check_postgres.pl script but we are using free ware option of postgres. If its Ok with freeware then please let me know the steps how I can implement in our environment.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\n \nHello team,\n \nI need help how  & what we can monitor the Postgres database via Nagios.\n \nI came to know about the check_postgres.pl script but we are using free ware option of postgres. If its Ok with freeware then please let me know the steps how I\n can implement in our environment.\n \nRegards,\nDaulat", "msg_date": "Fri, 23 Feb 2018 19:31:08 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "Please help" }, { "msg_contents": "\n\nAm 23.02.2018 um 20:31 schrieb Daulat Ram:\n>\n> Hello team,\n>\n> I need help how� & what we can monitor the Postgres database via Nagios.\n>\n> I came to know about the check_postgres.pl script but we are using \n> free ware option of postgres. If its Ok with freeware then please let \n> me know the steps how I can implement in our environment.\n>\n>\n\nyou can use check_postgres from https://bucardo.org/. Please read: \nhttps://bucardo.org/check_postgres/check_postgres.pl.html#license_and_copyright\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Fri, 23 Feb 2018 22:23:23 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help" }, { "msg_contents": "Have you looked at the Nagios XI & Core packages?\n\nhttps://www.nagios.com/solutions/postgres-monitoring/\n\n\n\n\n\nOn 02/23/2018 12:31 PM, Daulat Ram wrote:\n>\n> Hello team,\n>\n> I need help how� & what we can monitor the Postgres database via Nagios.\n>\n> I came to know about the check_postgres.pl script but we are using \n> free ware option of postgres. If its Ok with freeware then please let \n> me know the steps how I can implement in our environment.\n>\n> Regards,\n>\n> Daulat\n>\n\n\n\n\n\n\n\nHave you looked at the Nagios XI & Core packages?\nhttps://www.nagios.com/solutions/postgres-monitoring/\n\n\n\n\n\n\n\nOn 02/23/2018 12:31 PM, Daulat Ram\n wrote:\n\n\n\n\n\n\n�\nHello\n team,\n�\nI\n need help how� & what we can monitor the Postgres\n database via Nagios.\n�\nI\n came to know about the check_postgres.pl script but we are\n using free ware option of postgres. 
If its Ok with freeware\n then please let me know the steps how I can implement in our\n environment.\n�\nRegards,\nDaulat\n�", "msg_date": "Sat, 24 Feb 2018 13:58:48 -0700", "msg_from": "PropAAS DBA <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help" }, { "msg_contents": "Dear Experts,\n\nKndly help to resolve the issue reported during startup pgadmin4 server mode on ubuntu 16.04\n\ndram@vipgadmin:~$ cd .pgadmin4\ndram@vipgadmin:~/.pgadmin4$ chmod +x lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ chmod 7777 lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ chmod -R 7777 lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl daemon-reload\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl enable pgadmin4\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl start pgadmin4\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl status pgadmin4\n● pgadmin4.service - Pgadmin4 Service\n Loaded: loaded (/etc/systemd/system/pgadmin4.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Mon 2018-03-05 23:57:24 PST; 10s ago\n Process: 14190 ExecStart=/root/.pgadmin4/lib/python2.7/site-packages/pgadmin4/pgAdmin4.py (code=exited, status=200/CHDIR)\nMain PID: 14190 (code=exited, status=200/CHDIR)\n\nMar 05 23:57:24 vipgadmin systemd[1]: Started Pgadmin4 Service.\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Main process exited, code=exited, status=200/CHDIR\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Unit entered failed state.\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Failed with result 'exit-code'.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\n\n\nDear Experts,\n \nKndly help to resolve the issue reported during startup pgadmin4 server mode on ubuntu 16.04\n\n \ndram@vipgadmin:~$ cd .pgadmin4\ndram@vipgadmin:~/.pgadmin4$ chmod +x  lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ chmod 7777 lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ chmod -R 7777 lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl daemon-reload\ndram@vipgadmin:~/.pgadmin4$  sudo systemctl enable pgadmin4\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl start pgadmin4\ndram@vipgadmin:~/.pgadmin4$ sudo systemctl status  pgadmin4\n● pgadmin4.service - Pgadmin4 Service\n   Loaded: loaded (/etc/systemd/system/pgadmin4.service; enabled; vendor preset: enabled)\n   Active: failed (Result: exit-code) since Mon 2018-03-05 23:57:24 PST; 10s ago\n  Process: 14190 ExecStart=/root/.pgadmin4/lib/python2.7/site-packages/pgadmin4/pgAdmin4.py (code=exited, status=200/CHDIR)\nMain PID: 14190 (code=exited, status=200/CHDIR)\n \nMar 05 23:57:24 vipgadmin systemd[1]: Started Pgadmin4 Service.\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Main process exited, code=exited, status=200/CHDIR\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Unit entered failed state.\nMar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Failed with result 'exit-code'.\n \nRegards,\nDaulat", "msg_date": "Tue, 6 Mar 2018 08:01:54 +0000", "msg_from": "Daulat Ram <[email protected]>", "msg_from_op": true, "msg_subject": "Please help" }, { "msg_contents": "Can you please use separate threads for your questions? 
That is, don't\nstart new thread by responding to an existing message (because the new\nmessage then points using \"References\" header, which is what e-mail\nclients use to group messages into threads). And use a proper subject\ndescribing the issue (\"Please help\" tells people nothing).\n\nThat being said, how is this related to performance at all? It seems to\nbe about pgadmin, so please send it to pgadmin-support I guess.\n\nregards\n\nOn 03/06/2018 09:01 AM, Daulat Ram wrote:\n> Dear Experts,\n> \n>  \n> \n> Kndly help to resolve the issue reported during startup pgadmin4 server\n> mode on ubuntu 16.04\n> \n>  \n> \n> dram@vipgadmin:~$ cd .pgadmin4\n> \n> dram@vipgadmin:~/.pgadmin4$ chmod +x \n> lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\n> \n> dram@vipgadmin:~/.pgadmin4$ chmod 7777\n> lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\n> \n> dram@vipgadmin:~/.pgadmin4$ chmod -R 7777\n> lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\n> \n> dram@vipgadmin:~/.pgadmin4$ sudo systemctl daemon-reload\n> \n> dram@vipgadmin:~/.pgadmin4$  sudo systemctl enable pgadmin4\n> \n> dram@vipgadmin:~/.pgadmin4$ sudo systemctl start pgadmin4\n> \n> dram@vipgadmin:~/.pgadmin4$ sudo systemctl status  pgadmin4\n> \n> ● pgadmin4.service - Pgadmin4 Service\n> \n>    Loaded: loaded (/etc/systemd/system/pgadmin4.service; enabled; vendor\n> preset: enabled)\n> \n>    Active: failed (Result: exit-code) since Mon 2018-03-05 23:57:24 PST;\n> 10s ago\n> \n>   Process: 14190\n> ExecStart=/root/.pgadmin4/lib/python2.7/site-packages/pgadmin4/pgAdmin4.py\n> (code=exited, status=200/CHDIR)\n> \n> Main PID: 14190 (code=exited, status=200/CHDIR)\n> \n>  \n> \n> Mar 05 23:57:24 vipgadmin systemd[1]: Started Pgadmin4 Service.\n> \n> Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Main process\n> exited, code=exited, status=200/CHDIR\n> \n> Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Unit entered\n> failed state.\n> \n> Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Failed with\n> result 'exit-code'.\n> \n>  \n> \n> Regards,\n> \n> Daulat\n> \n>  \n> \n>  \n> \n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 6 Mar 2018 12:46:15 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please help" } ]
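A typical Nagios service check built on the check_postgres script mentioned earlier in this thread looks roughly like the following; the host, user and thresholds are placeholders, and the exact option spellings should be verified against the documentation of the installed check_postgres version:

    # warn at 80 backends, go critical at 100
    ./check_postgres.pl --action=backends --host=db1.example.com \
        --dbuser=nagios --warning=80 --critical=100

    # other commonly used actions include connection, database_size,
    # bloat, last_vacuum and txn_wraparound

Each such command is then wired into Nagios as an ordinary command/service definition.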
[ { "msg_contents": "Hello\n\nI work with a large and wide table (about 300 million rows, about 50 columns), and from time to time, we get business requirements to make some modifications. But sometimes, it's just some plain mistake. This has happened to us a few weeks ago where someone made a mistake and we had to update a single column of a large and wide table. Literally, the source data screwed up a zip code and we had to patch on our end.\n\nAnyways... Query ran was:\n update T set source_id = substr(sourceId, 2, 10);\nTook about 10h and created 100's of millions of dead tuples, causing another couple of hours of vacuum.\n\nThis was done during a maintenance window, and that table is read-only except when we ETL data to it on a weekly basis, and so I was just wondering why I should pay the \"bloat\" penalty for this type of transaction. Is there a trick that could be use here?\n\nMore generally, I suspect that the MVCC architecture is so deep that something like LOCK TABLE, which would guarantee that there won't be contentions, couldn't be used as a heuristic to not create dead tuples? That would make quite a performance improvement for this type of work though.\n\n\nThank you,\nLaurent.\n\n\n\n\n\n\n\n\n\nHello\n \nI work with a large and wide table (about 300 million rows, about 50 columns), and from time to time, we get business requirements to make some modifications. But sometimes, it’s just some plain mistake. This has happened to us a few weeks\n ago where someone made a mistake and we had to update a single column of a large and wide table. Literally, the source data screwed up a zip code and we had to patch on our end.\n \nAnyways… Query ran was:\n    update T set source_id = substr(sourceId, 2, 10);\nTook about 10h and created 100’s of millions of dead tuples, causing another couple of hours of vacuum.\n \nThis was done during a maintenance window, and that table is read-only except when we ETL data to it on a weekly basis, and so I was just wondering why I should pay the “bloat” penalty for this type of transaction. Is there a trick that\n could be use here?\n \nMore generally, I suspect that the MVCC architecture is so deep that something like LOCK TABLE, which would guarantee that there won’t be contentions, couldn’t be used as a heuristic to not create dead tuples? That would make quite a performance\n improvement for this type of work though.\n \n \nThank you,\nLaurent.", "msg_date": "Fri, 23 Feb 2018 23:27:36 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Updating large tables without dead tuples" }, { "msg_contents": "Greetings,\n\n* [email protected] ([email protected]) wrote:\n> This was done during a maintenance window, and that table is read-only except when we ETL data to it on a weekly basis, and so I was just wondering why I should pay the \"bloat\" penalty for this type of transaction. Is there a trick that could be use here?\n\nYes, create a new table and INSERT the data into that table, then swap\nthe new table into place as the old table. 
Another option, if you don't\nmind the exclusive lock taken on the table, is to dump the data to\nanother table, then TRUNCATE the current one and then INSERT into it.\n\nThere's other options too, involving triggers and such to allow updates\nand other changes to be captured during this process, avoiding the need\nto lock the table, but that gets a bit complicated.\n\n> More generally, I suspect that the MVCC architecture is so deep that something like LOCK TABLE, which would guarantee that there won't be contentions, couldn't be used as a heuristic to not create dead tuples? That would make quite a performance improvement for this type of work though.\n\nI'm afraid it wouldn't be quite that simple, particularly you have to\nthink about what happens when you issue a rollback...\n\nThanks!\n\nStephen", "msg_date": "Fri, 23 Feb 2018 19:09:40 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating large tables without dead tuples" }, { "msg_contents": "> -----Original Message-----\n> From: Stephen Frost [mailto:[email protected]]\n> Sent: Friday, February 23, 2018 19:10\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: Updating large tables without dead tuples\n> \n> Greetings,\n> \n> * [email protected] ([email protected]) wrote:\n> > This was done during a maintenance window, and that table is read-only\n> except when we ETL data to it on a weekly basis, and so I was just wondering\n> why I should pay the \"bloat\" penalty for this type of transaction. Is there a trick\n> that could be use here?\n> \n> Yes, create a new table and INSERT the data into that table, then swap the new\n> table into place as the old table. Another option, if you don't mind the\n> exclusive lock taken on the table, is to dump the data to another table, then\n> TRUNCATE the current one and then INSERT into it.\n> \n> There's other options too, involving triggers and such to allow updates and\n> other changes to be captured during this process, avoiding the need to lock the\n> table, but that gets a bit complicated.\n> \n> > More generally, I suspect that the MVCC architecture is so deep that\n> something like LOCK TABLE, which would guarantee that there won't be\n> contentions, couldn't be used as a heuristic to not create dead tuples? That\n> would make quite a performance improvement for this type of work though.\n> \n> I'm afraid it wouldn't be quite that simple, particularly you have to think about\n> what happens when you issue a rollback...\n> \n> Thanks!\n> \n> Stephen\n\n[Laurent Hasson] \n[Laurent Hasson] \nThis table several other tables with foreign keys into it... So any physical replacement of the table wouldn't work I believe. I'd have to disable/remove the foreign keys across the other tables, do this work, and then re-set the foreign keys. Overall time in aggregate may not be much shorter than the current implementation.\n\nThis table represents Hospital visits, off of which hang a lot of other information. The updated column in that Visits table is not part of the key.\n\nAs for the rollback, I didn't think about it because in our case, short of a db/hardware failure, this operation wouldn't fail... But the risk is there and I understand the engine must be prepared for anything and fulfill the ACID principles.\n\nWith respect to that, I read in many places that an UPDATE is effectively a DELETE + INSERT. 
Does that mean in the rollback logs, there are 2 entries for each row updated as a result?\n\nThank you,\nLaurent.\n\n", "msg_date": "Sat, 24 Feb 2018 17:19:21 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Updating large tables without dead tuples" }, { "msg_contents": "Greetings,\n\n* [email protected] ([email protected]) wrote:\n> > * [email protected] ([email protected]) wrote:\n> > > This was done during a maintenance window, and that table is read-only\n> > except when we ETL data to it on a weekly basis, and so I was just wondering\n> > why I should pay the \"bloat\" penalty for this type of transaction. Is there a trick\n> > that could be use here?\n> > \n> > Yes, create a new table and INSERT the data into that table, then swap the new\n> > table into place as the old table. Another option, if you don't mind the\n> > exclusive lock taken on the table, is to dump the data to another table, then\n> > TRUNCATE the current one and then INSERT into it.\n> > \n> > There's other options too, involving triggers and such to allow updates and\n> > other changes to be captured during this process, avoiding the need to lock the\n> > table, but that gets a bit complicated.\n> > \n> > > More generally, I suspect that the MVCC architecture is so deep that\n> > something like LOCK TABLE, which would guarantee that there won't be\n> > contentions, couldn't be used as a heuristic to not create dead tuples? That\n> > would make quite a performance improvement for this type of work though.\n> > \n> > I'm afraid it wouldn't be quite that simple, particularly you have to think about\n> > what happens when you issue a rollback...\n> \n> [Laurent Hasson] \n> This table several other tables with foreign keys into it... So any physical replacement of the table wouldn't work I believe. I'd have to disable/remove the foreign keys across the other tables, do this work, and then re-set the foreign keys. Overall time in aggregate may not be much shorter than the current implementation.\n\nThat would depend on the FKs, of course, but certainly having them does\nadd to the level of effort required.\n\n> This table represents Hospital visits, off of which hang a lot of other information. The updated column in that Visits table is not part of the key.\n> \n> As for the rollback, I didn't think about it because in our case, short of a db/hardware failure, this operation wouldn't fail... But the risk is there and I understand the engine must be prepared for anything and fulfill the ACID principles.\n\nRight, PG still needs to be able to provide the ability to perform a\nrollback.\n\n> With respect to that, I read in many places that an UPDATE is effectively a DELETE + INSERT. Does that mean in the rollback logs, there are 2 entries for each row updated as a result?\n\nThe short answer is yes. The existing row is updated with a marker\nsaying \"not valid as of this transaction\" and a new row is added with a\nmarker saying \"valid as of this transaction.\" Each of those changes\nalso ends up in WAL (possibly as a full-page image, if that was the\nfirst time that page was changed during that checkpoint, or possibly as\njust a partial page change if the page had already been modified during\nthat checkpoint and a prior full-page image written out). 
Indexes also\nmay need to be updated, depending on if the new row ended up on the same\npage or not and depending on which columns were indexed and which were\nbeing changed.\n\nThere has been discussion around having an undo-log type of approach,\nwhere the page is modified in-place and a log of what existed previously\nstored off to the side, to allow for rollback, but it doesn't seem\nlikely that we'll have that any time soon, and that space to store the\nundo log would have to be accounted for as well.\n\nThanks!\n\nStephen", "msg_date": "Sat, 24 Feb 2018 15:56:50 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating large tables without dead tuples" }, { "msg_contents": "On 02/24/2018 12:27 AM, [email protected] wrote:\n> Hello\n> \n> �\n> \n> I work with a large and wide table (about 300 million rows, about 50\n> columns), and from time to time, we get business requirements to make\n> some modifications. But sometimes, it�s just some plain mistake. This\n> has happened to us a few weeks ago where someone made a mistake and we\n> had to update a single column of a large and wide table. Literally, the\n> source data screwed up a zip code and we had to patch on our end.\n> \n> �\n> \n> Anyways� Query ran was:\n> \n> � ��update T set source_id = substr(sourceId, 2, 10);\n> \n> Took about 10h and created 100�s of millions of dead tuples, causing\n> another couple of hours of vacuum.\n> \n> �\n> \n> This was done during a maintenance window, and that table is read-only\n> except when we ETL data to it on a weekly basis, and so I was just\n> wondering why I should pay the �bloat� penalty for this type of\n> transaction. Is there a trick that could be use here?\nYes, there is a trick I like to use here, as long as you don't mind\nlocking the table (even against reads).\n\nI'll assume T.source_id is of type text. If it's not, use whatever the\nactual type is.\n\nALTER TABLE T\n ALTER COLUMN source_id TYPE text USING substr(sourceId, 2, 10);\n\nI copied what you had verbatim, I earnestly hope you don't have two\ncolumns source_id and sourceId in your table.\n\nThis will rewrite the entire table just the same as a VACUUM FULL after\nyour UPDATE would.\n\nDon't forget to VACUUM ANALYZE this table after the operation. Even\nthough there will be no dead rows, you still need to VACUUM it to\ngenerate the visibility map and you need to ANALYZE it for statistics on\nyour \"new\" column.\n\nForeign keys remain intact with this solution and you don't have double\nwal logging like for an UPDATE.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n", "msg_date": "Sat, 3 Mar 2018 02:55:52 +0100", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Updating large tables without dead tuples" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Vik Fearing [mailto:[email protected]]\n> Sent: Friday, March 02, 2018 20:56\n> To: [email protected]; [email protected]\n> Cc: Stephen Frost <[email protected]>\n> Subject: Re: Updating large tables without dead tuples\n> \n> On 02/24/2018 12:27 AM, [email protected] wrote:\n> > Hello\n> >\n> >\n> >\n> > I work with a large and wide table (about 300 million rows, about 50\n> > columns), and from time to time, we get business requirements to make\n> > some modifications. But sometimes, it's just some plain mistake. 
This\n> > has happened to us a few weeks ago where someone made a mistake and we\n> > had to update a single column of a large and wide table. Literally,\n> > the source data screwed up a zip code and we had to patch on our end.\n> >\n> >\n> >\n> > Anyways. Query ran was:\n> >\n> >     update T set source_id = substr(sourceId, 2, 10);\n> >\n> > Took about 10h and created 100's of millions of dead tuples, causing\n> > another couple of hours of vacuum.\n> >\n> >\n> >\n> > This was done during a maintenance window, and that table is read-only\n> > except when we ETL data to it on a weekly basis, and so I was just\n> > wondering why I should pay the \"bloat\" penalty for this type of\n> > transaction. Is there a trick that could be use here?\n> Yes, there is a trick I like to use here, as long as you don't mind locking the\n> table (even against reads).\n> \n> I'll assume T.source_id is of type text. If it's not, use whatever the actual type\n> is.\n> \n> ALTER TABLE T\n> ALTER COLUMN source_id TYPE text USING substr(sourceId, 2, 10);\n> \n> I copied what you had verbatim, I earnestly hope you don't have two columns\n> source_id and sourceId in your table.\n> \n> This will rewrite the entire table just the same as a VACUUM FULL after your\n> UPDATE would.\n> \n> Don't forget to VACUUM ANALYZE this table after the operation. Even though\n> there will be no dead rows, you still need to VACUUM it to generate the\n> visibility map and you need to ANALYZE it for statistics on your \"new\" column.\n> \n> Foreign keys remain intact with this solution and you don't have double wal\n> logging like for an UPDATE.\n> --\n> Vik Fearing +33 6 46 75 15 36\n> http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n[Laurent Hasson] \nYes, sorry... only a single column source_id. I understand your idea... Is that because a TEXT field (vs a varchar) would be considered TOAST and be treated differently?\n\nThanks,\nLaurent.\n\n", "msg_date": "Sat, 10 Mar 2018 23:42:37 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Updating large tables without dead tuples" } ]
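To make the table-rewrite route Stephen Frost describes earlier in this thread concrete, here is a minimal sketch only. Every name in it is invented for illustration: a visits table with columns id, source_id and zip_code, one referencing table visit_details, and one foreign-key constraint. A real schema with many referencing tables, grants, and a serial id needs correspondingly more steps.

    BEGIN;

    -- Copies column definitions, defaults and indexes; LIKE never copies
    -- foreign keys, and grants have to be re-issued separately.
    CREATE TABLE visits_new (LIKE visits INCLUDING ALL);

    -- One sequential rewrite instead of an in-place UPDATE: list every
    -- column, transforming only the one that needs fixing.
    INSERT INTO visits_new (id, source_id, zip_code)
    SELECT id, substr(source_id, 2, 10), zip_code
    FROM visits;

    -- Repoint the assumed foreign key, then swap the tables.
    -- (If id were a serial column, its sequence would also need moving; omitted here.)
    ALTER TABLE visit_details DROP CONSTRAINT visit_details_visit_id_fkey;
    DROP TABLE visits;
    ALTER TABLE visits_new RENAME TO visits;
    ALTER TABLE visit_details
        ADD CONSTRAINT visit_details_visit_id_fkey
        FOREIGN KEY (visit_id) REFERENCES visits (id);

    COMMIT;

    -- No dead tuples to clean up, but the new table still needs statistics
    -- and a visibility map, exactly as noted for the ALTER TABLE ... USING variant.
    VACUUM ANALYZE visits;

The DROP and ALTER steps take exclusive locks that are held until COMMIT, so this still belongs in a maintenance window, just like the original UPDATE.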
[ { "msg_contents": "Hello team,\n\n I need help how & what we can monitor the Postgres database via Nagios.\n\n\n\nI came to know about the check_postgres.pl script but we are using free\nware option of PostgreSQL. If its Ok with freeware then please let me know\nthe steps how I can I use check_postgres <http://check_postgres.pl/> via\nNagios.\n\n\n\nRegards,\n\nDaulat\n\n\n Hello team,\n I need help how &\nwhat we can monitor the Postgres database via Nagios.\n \nI came to know about\nthe check_postgres.pl script but we are using\nfree ware option of PostgreSQL. If its Ok with freeware then please let me know\nthe steps how I can I use \n\ncheck_postgres\n\nvia Nagios.\n \nRegards,\nDaulat", "msg_date": "Mon, 26 Feb 2018 10:57:19 +0530", "msg_from": "daulat sagar <[email protected]>", "msg_from_op": true, "msg_subject": "check_postgres via Nagios" } ]
[ { "msg_contents": "I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The\ntable contains multiple form data differentiated by ID range. Hence a\ncolumn contains more than one form data. To achieve Unique Constraint and\nIndexing per form, I chose PostgreSQL Partial Indexes which suits my\nrequirement. I have created Partial Indexes with ID Range as criteria and\nit provides Uniqueness and Indexing per form basis as expected. But DML\noperations on a particular form scans all the Indexes created for the\nentire table instead of scanning the Indexes created for that particular\nform ID Range. This degrades Planner Performance and Query Time more than\n10 times as below,\n\nQuery Result for the table with 3000 Partial Indexes(15 Indexes per form) :\n\nexplain analyse select id from form_data_copy where id between 3001 and\n4000 and bigint50=789;\nQUERY PLAN\n------------------------------------------------------------\n------------------------------------------------------------\n------------------\nIndex Scan using form_data_1_bigint50_3000 on form_data_copy\n(cost=0.28..8.29 rows=1 width=8) (actual time=0.057..0.057 rows=0 loops=1)\nIndex Cond: (bigint50 = 789)\n*Planning time: 99.287 ms*\nExecution time: 0.112 ms\n(4 rows)\n\n*Time: 103.967 ms*\n\nQuery Result for the table with no Indexes(with same record count as above\ntable) :\n\nexplain analyse select id from form_data_copy1 where id between 3001 and\n4000 and bigint50=789; QUERY PLAN\n------------------------------------------------------------\n------------------------------------------------------------\n-------------------\nIndex Scan using form_data_copy1_fk1_idx on form_data_copy1\n(cost=0.42..208.62 rows=1 width=8) (actual time=1.576..1.576 rows=0\nloops=1)\nIndex Cond: ((id >= 3001) AND (id <= 4000))\nFilter: (bigint50 = 789)\nRows Removed by Filter: 859\nPlanning time: 1.243 ms\nExecution time: 1.701 ms\n(6 rows)\n\nTime: *5.891 ms*\n\n\nTo ensure that the Planning Time 99.287 ms is not the time taken for\nscanning 15 Indexes for the form, I have created only 15 Indexes for the\ntable and got the result as below,\n\nexplain analyse select id from form_data_copy1 where id between 3001 and\n4000 and bigint50=789;\nQUERY PLAN\n------------------------------------------------------------\n------------------------------------------------------------\n-----------------------\nIndex Scan using form_data_copy1_bigint50_3000 on form_data_copy1\n(cost=0.28..8.29 rows=1 width=8) (actual time=0.025..0.025 rows=0 loops=1)\nIndex Cond: (bigint50 = 789)\nPlanning time: 3.017 ms\nExecution time: 0.086 ms\n(4 rows)\n\nTime: 7.291 ms\n\nIt seems PGSQL scans all 3000 Indexes even though I provided the ID Range\nin the query. Please clarify whether my assumption is correct or the reason\nfor this more Planning Time. Also, suggest me the way to reduce this\nplanning time.\n\nI have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains multiple form data differentiated by ID range. Hence a column contains more than one form data. To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes which suits my requirement. I have created Partial Indexes with ID Range as criteria and it provides Uniqueness and Indexing per form basis as expected. But DML operations on a particular form scans all the Indexes created for the entire table instead of scanning the Indexes created for that particular form ID Range. 
This degrades Planner Performance and Query Time more than 10 times as below, Query Result for the table with 3000 Partial Indexes(15 Indexes per form) : explain analyse select id from form_data_copy where id between 3001 and 4000 and bigint50=789; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------ Index Scan using form_data_1_bigint50_3000 on form_data_copy (cost=0.28..8.29 rows=1 width=8) (actual time=0.057..0.057 rows=0 loops=1) Index Cond: (bigint50 = 789) Planning time: 99.287 ms Execution time: 0.112 ms (4 rows) Time: 103.967 ms Query Result for the table with no Indexes(with same record count as above table) : explain analyse select id from form_data_copy1 where id between 3001 and 4000 and bigint50=789; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using form_data_copy1_fk1_idx on form_data_copy1 (cost=0.42..208.62 rows=1 width=8) (actual time=1.576..1.576 rows=0 loops=1) Index Cond: ((id >= 3001) AND (id <= 4000)) Filter: (bigint50 = 789) Rows Removed by Filter: 859 Planning time: 1.243 ms Execution time: 1.701 ms (6 rows) Time: 5.891 ms To ensure that the Planning Time 99.287 ms is not the time taken for scanning 15 Indexes for the form, I have created only 15 Indexes for the table and got the result as below, explain analyse select id from form_data_copy1 where id between 3001 and 4000 and bigint50=789; QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------- Index Scan using form_data_copy1_bigint50_3000 on form_data_copy1 (cost=0.28..8.29 rows=1 width=8) (actual time=0.025..0.025 rows=0 loops=1) Index Cond: (bigint50 = 789) Planning time: 3.017 ms Execution time: 0.086 ms (4 rows) Time: 7.291 ms It seems PGSQL scans all 3000 Indexes even though I provided the ID Range in the query. Please clarify whether my assumption is correct or the reason for this more Planning Time. Also, suggest me the way to reduce this planning time.", "msg_date": "Thu, 1 Mar 2018 16:39:36 +0530", "msg_from": "Meenatchi Sandanam <[email protected]>", "msg_from_op": true, "msg_subject": "Performance degrade in Planning Time to find appropriate Partial\n Index" }, { "msg_contents": "Meenatchi Sandanam wrote:\n> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains\n> multiple form data differentiated by ID range. Hence a column contains more than one form data.\n> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes\n> which suits my requirement. I have created Partial Indexes with ID Range as criteria and\n> it provides Uniqueness and Indexing per form basis as expected. But DML operations on a\n> particular form scans all the Indexes created for the entire table instead of scanning\n> the Indexes created for that particular form ID Range. 
This degrades Planner Performance\n> and Query Time more than 10 times as below, \n> \n> Query Result for the table with 3000 Partial Indexes(15 Indexes per form) : \n\nIt is crazy to create 3000 partial indexes on one table.\n\nNo wonder planning and DML statements take very long, they have to consider all the\nindexes.\n\n> explain analyse select id from form_data_copy where id between 3001 and 4000 and bigint50=789;\n\nUse a single index on (bigint50, id) for best performance.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Thu, 01 Mar 2018 14:03:28 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate\n Partial Index" }, { "msg_contents": "On Thu, Mar 1, 2018 at 03:10 Meenatchi Sandanam <[email protected]> wrote:\n\n> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The\n> table contains multiple form data differentiated by ID range. Hence a\n> column contains more than one form data. To achieve Unique Constraint and\n> Indexing per form, I chose PostgreSQL Partial Indexes which suits my\n> requirement. I have created Partial Indexes with ID Range as criteria and\n> it provides Uniqueness and Indexing per form basis as expected. But DML\n> operations on a particular form scans all the Indexes created for the\n> entire table instead of scanning the Indexes created for that particular\n> form ID Range. This degrades Planner Performance and Query Time more than\n> 10 times as below,\n>\n> Query Result for the table with 3000 Partial Indexes(15 Indexes per form)\n> :\n>\n\nThis smells like you’ve failed to normalize your data correctly. 3k indexes\nto ensure uniqueness ? It sounds a lot more like you need 15 tables for 15\nforms ... perhaps with a view for reading or maybe 1/15th of the columns to\nbegin with by having a form_type column...or perhaps like an index function\nfor the unique constraint....such that the output of the function is the\nnormalized portion of data that’s required to be unique....\n\nIf you’ve really got 3k different uniqueness criteria differing by “id”\nranges then it sounds like an expression index with a function spitting out\nthe hash of uniqueness but that’d still be hairy, at least you wouldn’t eat\nthe time on every read though. But I’d reduce that id range based problem\nto include a unique_type indicator column instead.\n\n\n\n> --\n\n\"Genius might be described as a supreme capacity for getting its possessors\ninto trouble of all kinds.\"\n-- Samuel Butler\n\nOn Thu, Mar 1, 2018 at 03:10 Meenatchi Sandanam <[email protected]> wrote:I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains multiple form data differentiated by ID range. Hence a column contains more than one form data. To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes which suits my requirement. I have created Partial Indexes with ID Range as criteria and it provides Uniqueness and Indexing per form basis as expected. But DML operations on a particular form scans all the Indexes created for the entire table instead of scanning the Indexes created for that particular form ID Range. This degrades Planner Performance and Query Time more than 10 times as below, Query Result for the table with 3000 Partial Indexes(15 Indexes per form) : This smells like you’ve failed to normalize your data correctly. 3k indexes to ensure uniqueness ? 
It sounds a lot more like you need 15 tables for 15 forms ... perhaps with a view for reading or maybe 1/15th of the columns to begin with by having a form_type column...or perhaps like an index function for the unique constraint....such that the output of the function is the normalized portion of data that’s required to be unique....If you’ve really got 3k different uniqueness criteria differing by “id” ranges then it sounds like an expression index with a function spitting out the hash of uniqueness but that’d still be hairy, at least you wouldn’t eat the time on every read though. But I’d reduce that id range based problem to include a unique_type indicator column instead.\n-- \"Genius might be described as a supreme capacity for getting its possessorsinto trouble of all kinds.\"-- Samuel Butler", "msg_date": "Thu, 01 Mar 2018 14:16:09 +0000", "msg_from": "Michael Loftis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate Partial\n Index" }, { "msg_contents": "\n\n\n\n\nIl 01/03/2018 15:16, Michael Loftis ha\n scritto:\n\n\n\n\nOn Thu, Mar 1, 2018 at 03:10 Meenatchi\n Sandanam <[email protected]> wrote:\n\n\nI\n have created a table with 301 columns(ID, 150 BIGINT,\n 150 TEXT). The table contains multiple form data\n differentiated by ID range. Hence a column contains more\n than one form data. To achieve Unique Constraint and\n Indexing per form, I chose PostgreSQL Partial Indexes\n which suits my requirement. I have created Partial\n Indexes with ID Range as criteria and it provides\n Uniqueness and Indexing per form basis as expected. But\n DML operations on a particular form scans all the\n Indexes created for the entire table instead of scanning\n the Indexes created for that particular form ID Range.\n This degrades Planner Performance and Query Time more\n than 10 times as below, \n\nQuery\n Result for the table with 3000 Partial Indexes(15\n Indexes per form) : \n\n\n\nThis smells like you’ve failed to normalize\n your data correctly. 3k indexes to ensure uniqueness ? It\n sounds a lot more like you need 15 tables for 15 forms ... \n\n\n\n\n ... or a column that specifies, e.g., the form ID. If all form has\n not the same number of BIGINT and TEXT, keep the maximum value and\n fill only the requested ones.\n\n You can also use the EAV schema, where the Entity is the form, the\n Attribute is the field, and the Value... is the value.\n\n CREATE TABLE tbl(\n id bigint,\n entity integer,\n attribute integer, --(or string, as you need)\n value_int bigint,\n value_string text\n );\n\n This way you'll get more rows, but very thin, and with not more than\n 3 or 4 indexes (based on the querues you need to perform) you can\n retrieve values quickly.\n\n My 2 cent\n Moreno.-\n\n\n", "msg_date": "Thu, 1 Mar 2018 15:39:46 +0100", "msg_from": "Moreno Andreo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate Partial\n Index" }, { "msg_contents": "Hi,\n\nhttps://heapanalytics.com/blog/engineering/running-10-million-postgresql-indexes-in-production\n\n From the link shared above, it looks like what Meenatchi has done should work.\n\nDo the conditions on the partial index and query match exactly? 
(\ngreater than / greater than equals mismatch maybe?)\n\nIf conditions for those partial indexes are mutually exclusive and the\nquery has a matching condition then Postgres can use that index alone.\nAre we missing something here?\n\nRegards,\nNanda\n\nOn Thu, Mar 1, 2018 at 6:33 PM, Laurenz Albe <[email protected]> wrote:\n> Meenatchi Sandanam wrote:\n>> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains\n>> multiple form data differentiated by ID range. Hence a column contains more than one form data.\n>> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes\n>> which suits my requirement. I have created Partial Indexes with ID Range as criteria and\n>> it provides Uniqueness and Indexing per form basis as expected. But DML operations on a\n>> particular form scans all the Indexes created for the entire table instead of scanning\n>> the Indexes created for that particular form ID Range. This degrades Planner Performance\n>> and Query Time more than 10 times as below,\n>>\n>> Query Result for the table with 3000 Partial Indexes(15 Indexes per form) :\n>\n> It is crazy to create 3000 partial indexes on one table.\n>\n> No wonder planning and DML statements take very long, they have to consider all the\n> indexes.\n>\n>> explain analyse select id from form_data_copy where id between 3001 and 4000 and bigint50=789;\n>\n> Use a single index on (bigint50, id) for best performance.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n\n", "msg_date": "Fri, 2 Mar 2018 19:19:28 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate Partial\n Index" }, { "msg_contents": "2018-03-02 14:49 GMT+01:00 Nandakumar M <[email protected]>:\n\n> Hi,\n>\n> https://heapanalytics.com/blog/engineering/running-10-\n> million-postgresql-indexes-in-production\n>\n> From the link shared above, it looks like what Meenatchi has done should\n> work.\n>\n\nIt can be different situation, there are not specified indexes per table.\nAnd if some projects works, it doesn't mean, so they are well designed.\n\nPostgreSQL has not column storage. Look on column databases. They are\ndesigned for extra wide tables.\n\nRegards\n\nPavel\n\n\n>\n> Do the conditions on the partial index and query match exactly? (\n> greater than / greater than equals mismatch maybe?)\n>\n> If conditions for those partial indexes are mutually exclusive and the\n> query has a matching condition then Postgres can use that index alone.\n> Are we missing something here?\n>\n> Regards,\n> Nanda\n>\n> On Thu, Mar 1, 2018 at 6:33 PM, Laurenz Albe <[email protected]>\n> wrote:\n> > Meenatchi Sandanam wrote:\n> >> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The\n> table contains\n> >> multiple form data differentiated by ID range. Hence a column contains\n> more than one form data.\n> >> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL\n> Partial Indexes\n> >> which suits my requirement. I have created Partial Indexes with ID\n> Range as criteria and\n> >> it provides Uniqueness and Indexing per form basis as expected. But DML\n> operations on a\n> >> particular form scans all the Indexes created for the entire table\n> instead of scanning\n> >> the Indexes created for that particular form ID Range. 
This degrades\n> Planner Performance\n> >> and Query Time more than 10 times as below,\n> >>\n> >> Query Result for the table with 3000 Partial Indexes(15 Indexes per\n> form) :\n> >\n> > It is crazy to create 3000 partial indexes on one table.\n> >\n> > No wonder planning and DML statements take very long, they have to\n> consider all the\n> > indexes.\n> >\n> >> explain analyse select id from form_data_copy where id between 3001 and\n> 4000 and bigint50=789;\n> >\n> > Use a single index on (bigint50, id) for best performance.\n> >\n> > Yours,\n> > Laurenz Albe\n> > --\n> > Cybertec | https://www.cybertec-postgresql.com\n> >\n>\n>\n\n2018-03-02 14:49 GMT+01:00 Nandakumar M <[email protected]>:Hi,\n\nhttps://heapanalytics.com/blog/engineering/running-10-million-postgresql-indexes-in-production\n\n From the link shared above, it looks like what Meenatchi has done should work.It can be different situation, there are not specified indexes per table. And if some projects works, it doesn't mean, so they are well designed.PostgreSQL has not column storage. Look on column databases. They are designed for extra wide tables.RegardsPavel \n\nDo the conditions on the partial index and query match exactly? (\ngreater than / greater than equals mismatch maybe?)\n\nIf conditions for those partial indexes are mutually exclusive and the\nquery has a matching condition then Postgres can use that index alone.\nAre we missing something here?\n\nRegards,\nNanda\n\nOn Thu, Mar 1, 2018 at 6:33 PM, Laurenz Albe <[email protected]> wrote:\n> Meenatchi Sandanam wrote:\n>> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains\n>> multiple form data differentiated by ID range. Hence a column contains more than one form data.\n>> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes\n>> which suits my requirement. I have created Partial Indexes with ID Range as criteria and\n>> it provides Uniqueness and Indexing per form basis as expected. But DML operations on a\n>> particular form scans all the Indexes created for the entire table instead of scanning\n>> the Indexes created for that particular form ID Range. This degrades Planner Performance\n>> and Query Time more than 10 times as below,\n>>\n>> Query Result for the table with 3000 Partial Indexes(15 Indexes per form) :\n>\n> It is crazy to create 3000 partial indexes on one table.\n>\n> No wonder planning and DML statements take very long, they have to consider all the\n> indexes.\n>\n>> explain analyse select id from form_data_copy where id between 3001 and 4000 and bigint50=789;\n>\n> Use a single index on (bigint50, id) for best performance.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>", "msg_date": "Fri, 2 Mar 2018 15:29:52 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate Partial\n Index" }, { "msg_contents": "2018-03-02 15:29 GMT+01:00 Pavel Stehule <[email protected]>:\n\n>\n>\n> 2018-03-02 14:49 GMT+01:00 Nandakumar M <[email protected]>:\n>\n>> Hi,\n>>\n>> https://heapanalytics.com/blog/engineering/running-10-millio\n>> n-postgresql-indexes-in-production\n>>\n>> From the link shared above, it looks like what Meenatchi has done should\n>> work.\n>>\n>\n> It can be different situation, there are not specified indexes per table.\n> And if some projects works, it doesn't mean, so they are well designed.\n>\n> PostgreSQL has not column storage. 
Look on column databases. They are\n> designed for extra wide tables.\n>\n\nread the article:\n\n1. Probably they use Citus\n\n2. Since partial indexes are so easy to create and work with, we’ve wound\nup with over 10 million partial indexes across our entire cluster.\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Do the conditions on the partial index and query match exactly? (\n>> greater than / greater than equals mismatch maybe?)\n>>\n>> If conditions for those partial indexes are mutually exclusive and the\n>> query has a matching condition then Postgres can use that index alone.\n>> Are we missing something here?\n>>\n>> Regards,\n>> Nanda\n>>\n>> On Thu, Mar 1, 2018 at 6:33 PM, Laurenz Albe <[email protected]>\n>> wrote:\n>> > Meenatchi Sandanam wrote:\n>> >> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The\n>> table contains\n>> >> multiple form data differentiated by ID range. Hence a column contains\n>> more than one form data.\n>> >> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL\n>> Partial Indexes\n>> >> which suits my requirement. I have created Partial Indexes with ID\n>> Range as criteria and\n>> >> it provides Uniqueness and Indexing per form basis as expected. But\n>> DML operations on a\n>> >> particular form scans all the Indexes created for the entire table\n>> instead of scanning\n>> >> the Indexes created for that particular form ID Range. This degrades\n>> Planner Performance\n>> >> and Query Time more than 10 times as below,\n>> >>\n>> >> Query Result for the table with 3000 Partial Indexes(15 Indexes per\n>> form) :\n>> >\n>> > It is crazy to create 3000 partial indexes on one table.\n>> >\n>> > No wonder planning and DML statements take very long, they have to\n>> consider all the\n>> > indexes.\n>> >\n>> >> explain analyse select id from form_data_copy where id between 3001\n>> and 4000 and bigint50=789;\n>> >\n>> > Use a single index on (bigint50, id) for best performance.\n>> >\n>> > Yours,\n>> > Laurenz Albe\n>> > --\n>> > Cybertec | https://www.cybertec-postgresql.com\n>> >\n>>\n>>\n>\n\n2018-03-02 15:29 GMT+01:00 Pavel Stehule <[email protected]>:2018-03-02 14:49 GMT+01:00 Nandakumar M <[email protected]>:Hi,\n\nhttps://heapanalytics.com/blog/engineering/running-10-million-postgresql-indexes-in-production\n\n From the link shared above, it looks like what Meenatchi has done should work.It can be different situation, there are not specified indexes per table. And if some projects works, it doesn't mean, so they are well designed.PostgreSQL has not column storage. Look on column databases. They are designed for extra wide tables.read the article:1. Probably they use Citus2. Since partial indexes are so easy to create and work with, we’ve wound \nup with over 10 million partial indexes across our entire cluster. RegardsPavel \n\nDo the conditions on the partial index and query match exactly? (\ngreater than / greater than equals mismatch maybe?)\n\nIf conditions for those partial indexes are mutually exclusive and the\nquery has a matching condition then Postgres can use that index alone.\nAre we missing something here?\n\nRegards,\nNanda\n\nOn Thu, Mar 1, 2018 at 6:33 PM, Laurenz Albe <[email protected]> wrote:\n> Meenatchi Sandanam wrote:\n>> I have created a table with 301 columns(ID, 150 BIGINT, 150 TEXT). The table contains\n>> multiple form data differentiated by ID range. 
Hence a column contains more than one form data.\n>> To achieve Unique Constraint and Indexing per form, I chose PostgreSQL Partial Indexes\n>> which suits my requirement. I have created Partial Indexes with ID Range as criteria and\n>> it provides Uniqueness and Indexing per form basis as expected. But DML operations on a\n>> particular form scans all the Indexes created for the entire table instead of scanning\n>> the Indexes created for that particular form ID Range. This degrades Planner Performance\n>> and Query Time more than 10 times as below,\n>>\n>> Query Result for the table with 3000 Partial Indexes(15 Indexes per form) :\n>\n> It is crazy to create 3000 partial indexes on one table.\n>\n> No wonder planning and DML statements take very long, they have to consider all the\n> indexes.\n>\n>> explain analyse select id from form_data_copy where id between 3001 and 4000 and bigint50=789;\n>\n> Use a single index on (bigint50, id) for best performance.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>", "msg_date": "Fri, 2 Mar 2018 15:32:36 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade in Planning Time to find appropriate Partial\n Index" } ]
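To put Laurenz Albe's advice from this thread into SQL: one ordinary composite index, with the filtered value as the leading column and id as the second, covers the query shown for every form at once, and the per-form uniqueness that motivated the 3000 partial indexes is easier to express once the form is a real column, along the lines Michael Loftis suggests. This is a sketch only; the names form_data and form_id are invented, while bigint50 and id come from the original query.

    -- Serves "WHERE bigint50 = ? AND id BETWEEN ? AND ?" for every form
    -- with a single index, instead of one partial index per id range.
    CREATE INDEX idx_form_data_bigint50_id
        ON form_data (bigint50, id);

    -- With an explicit form discriminator (backfilled from the id ranges),
    -- per-form uniqueness becomes one constraint rather than thousands.
    -- Rows where bigint50 is NULL (fields unused by other forms) do not conflict,
    -- because a b-tree unique index treats NULLs as distinct.
    ALTER TABLE form_data ADD COLUMN form_id integer;

    CREATE UNIQUE INDEX uq_form_data_form_id_bigint50
        ON form_data (form_id, bigint50);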
[ { "msg_contents": "Have a query:\n\nexplain analyze SELECT minion_id FROM mob_player_mob_118 WHERE player_id =\n55351078;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using mob_player_mob_118_pkey on mob_player_mob_118\n(cost=0.44..117887.06 rows=4623076 width=4) (actual time=0.062..3716.105\nrows=4625123 loops=1)\n Index Cond: (player_id = 55351078)\n Heap Fetches: 1152408\n Planning time: 0.241 ms\n Execution time: 5272.171 ms\n\nIf I just get the count it will use a parallel query\n\nexplain analyze SELECT count(minion_id) FROM mob_player_mob_118 WHERE\nplayer_id = 55351078;\n\nThanks\n\nDave Cramer\n\nHave a query:explain analyze SELECT minion_id FROM mob_player_mob_118 WHERE player_id = 55351078;                                                                             QUERY PLAN--------------------------------------------------------------------------------------------------------------------------------------------------------------------- Index Only Scan using mob_player_mob_118_pkey on mob_player_mob_118  (cost=0.44..117887.06 rows=4623076 width=4) (actual time=0.062..3716.105 rows=4625123 loops=1)   Index Cond: (player_id = 55351078)   Heap Fetches: 1152408 Planning time: 0.241 ms Execution time: 5272.171 msIf I just get the count it will use a parallel queryexplain analyze SELECT count(minion_id) FROM mob_player_mob_118 WHERE player_id = 55351078;ThanksDave Cramer", "msg_date": "Fri, 2 Mar 2018 11:29:29 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "why does this query not use a parallel query" }, { "msg_contents": "Dave Cramer <[email protected]> writes:\n> Have a query:\n> explain analyze SELECT minion_id FROM mob_player_mob_118 WHERE player_id =\n> 55351078;\n\n> Index Only Scan using mob_player_mob_118_pkey on mob_player_mob_118\n> (cost=0.44..117887.06 rows=4623076 width=4) (actual time=0.062..3716.105\n> rows=4625123 loops=1)\n\nI don't think we have parallel IOS yet (I might be wrong). If so,\nit probably thinks this is cheaper than the best available parallel plan.\n\n> If I just get the count it will use a parallel query\n\nLikely a parallelized aggregation.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 02 Mar 2018 11:44:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why does this query not use a parallel query" } ]
[ { "msg_contents": "I have a single table with 45 columns and 6.5 million records with a roughly\nrandom distribution of data of a variety of types. I am trying to implement\na data table with pagination in a web user interface, where the data table\ncan be filtered in a very flexible way on pretty much any combination of\ncolumns. I have indexes covering the most frequently filtered columns. The\ndata table shows 30 records at a time, sorted to put the most recent records\nfirst. The database version is 9.3.5.\n\n \n\nThe problem occurs if I have a filter which results in less than 30 records,\nor where the 30 records that are returned are distributed through the\ndataset (it's OK if the page of 30 records are found in the relatively\nrecent records). Basically, because of the ORDER BY id DESC LIMIT 30 the\nquery planner is opting to use an index scan backward on the primary key,\napplying a filter to each record, until it finds 30 records. If it finds\nthese relatively quickly then all is good. However sometimes the filter\nresults in < 30 records in the final result set, in which case the index\nscan runs through the whole table and takes several minutes. A better plan\nin these cases would be to use the indexes available on the other fields to\nlimit the results set, then filter, sort and limit. But, the planner is\npresumably not able to work this out because the statistics aren't detailed\nenough. \n\n \n\nHere's the table schema:\n\n \n\n-- Table: cache_occurrences_functional\n\n \n\n-- DROP TABLE cache_occurrences_functional;\n\n \n\nCREATE TABLE cache_occurrences_functional\n\n(\n\n id integer NOT NULL,\n\n sample_id integer,\n\n website_id integer,\n\n survey_id integer,\n\n input_form character varying,\n\n location_id integer,\n\n location_name character varying,\n\n public_geom geometry(Geometry,900913),\n\n map_sq_1km_id integer,\n\n map_sq_2km_id integer,\n\n map_sq_10km_id integer,\n\n date_start date,\n\n date_end date,\n\n date_type character varying(2),\n\n created_on timestamp without time zone,\n\n updated_on timestamp without time zone,\n\n verified_on timestamp without time zone,\n\n created_by_id integer,\n\n group_id integer,\n\n taxa_taxon_list_id integer,\n\n preferred_taxa_taxon_list_id integer,\n\n taxon_meaning_id integer,\n\n taxa_taxon_list_external_key character varying(50),\n\n family_taxa_taxon_list_id integer,\n\n taxon_group_id integer,\n\n taxon_rank_sort_order integer,\n\n record_status character(1),\n\n record_substatus smallint,\n\n certainty character(1),\n\n query character(1),\n\n sensitive boolean,\n\n release_status character(1),\n\n marine_flag boolean,\n\n data_cleaner_result boolean,\n\n media_count integer DEFAULT 0,\n\n training boolean NOT NULL DEFAULT false,\n\n zero_abundance boolean,\n\n licence_id integer,\n\n location_id_vice_county integer,\n\n location_id_lrc_boundary integer,\n\n location_id_country integer,\n\n identification_difficulty integer, -- Identification difficulty assigned\nby the data_cleaner module, on a scale from 1 (easy) to 5 (difficult)\n\n import_guid character varying, -- Globally unique identifier of the import\nbatch.\n\n confidential boolean DEFAULT false,\n\n external_key character varying,\n\n CONSTRAINT pk_cache_occurrences_functional PRIMARY KEY (id)\n\n)\n\nWITH (\n\n OIDS=FALSE\n\n);\n\nALTER TABLE cache_occurrences_functional\n\n OWNER TO indicia_user;\n\nGRANT ALL ON TABLE cache_occurrences_functional TO indicia_user;\n\nGRANT SELECT ON TABLE cache_occurrences_functional TO 
indicia_report_user;\n\nGRANT SELECT ON TABLE cache_occurrences_functional TO naturespot;\n\nGRANT SELECT ON TABLE cache_occurrences_functional TO brc_read_only;\n\nCOMMENT ON COLUMN cache_occurrences_functional.identification_difficulty IS\n'Identification difficulty assigned by the data_cleaner module, on a scale\nfrom 1 (easy) to 5 (difficult)';\n\nCOMMENT ON COLUMN cache_occurrences_functional.import_guid IS 'Globally\nunique identifier of the import batch.';\n\n \n\n \n\n-- Index: ix_cache_occurrences_functional_created_by_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_created_by_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_created_by_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (created_by_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_date_end\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_date_end;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_date_end\n\n ON cache_occurrences_functional\n\n USING btree\n\n (date_end);\n\n \n\n-- Index: ix_cache_occurrences_functional_date_start\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_date_start;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_date_start\n\n ON cache_occurrences_functional\n\n USING btree\n\n (date_start);\n\n \n\n-- Index: ix_cache_occurrences_functional_family_taxa_taxon_list_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_family_taxa_taxon_list_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_family_taxa_taxon_list_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (family_taxa_taxon_list_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_group_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_group_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_group_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (group_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_location_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_location_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_location_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (location_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_location_id_country\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_location_id_country;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_location_id_country\n\n ON cache_occurrences_functional\n\n USING btree\n\n (location_id_country);\n\n \n\n-- Index: ix_cache_occurrences_functional_location_id_lrc_boundary\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_location_id_lrc_boundary;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_location_id_lrc_boundary\n\n ON cache_occurrences_functional\n\n USING btree\n\n (location_id_lrc_boundary);\n\n \n\n-- Index: ix_cache_occurrences_functional_location_id_vice_county\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_location_id_vice_county;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_location_id_vice_county\n\n ON cache_occurrences_functional\n\n USING btree\n\n (location_id_vice_county);\n\n \n\n-- Index: ix_cache_occurrences_functional_map_sq_10km_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_map_sq_10km_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_map_sq_10km_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (map_sq_10km_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_map_sq_1km_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_map_sq_1km_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_map_sq_1km_id\n\n ON cache_occurrences_functional\n\n USING 
btree\n\n (map_sq_1km_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_map_sq_2km_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_map_sq_2km_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_map_sq_2km_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (map_sq_2km_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_public_geom\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_public_geom;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_public_geom\n\n ON cache_occurrences_functional\n\n USING gist\n\n (public_geom);\n\n \n\n-- Index: ix_cache_occurrences_functional_status\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_status;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_status\n\n ON cache_occurrences_functional\n\n USING btree\n\n (record_status COLLATE pg_catalog.\"default\", record_substatus);\n\n \n\n-- Index: ix_cache_occurrences_functional_submission\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_submission;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_submission\n\n ON cache_occurrences_functional\n\n USING btree\n\n (website_id, survey_id, sample_id);\n\nALTER TABLE cache_occurrences_functional CLUSTER ON\nix_cache_occurrences_functional_submission;\n\n \n\n-- Index: ix_cache_occurrences_functional_taxa_taxon_list_external_key\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_taxa_taxon_list_external_key;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_taxa_taxon_list_external_key\n\n ON cache_occurrences_functional\n\n USING btree\n\n (taxa_taxon_list_external_key COLLATE pg_catalog.\"default\");\n\n \n\n-- Index: ix_cache_occurrences_functional_taxon_group_id\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_taxon_group_id;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_taxon_group_id\n\n ON cache_occurrences_functional\n\n USING btree\n\n (taxon_group_id);\n\n \n\n-- Index: ix_cache_occurrences_functional_updated_on\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_updated_on;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_updated_on\n\n ON cache_occurrences_functional\n\n USING btree\n\n (updated_on);\n\n \n\n-- Index: ix_cache_occurrences_functional_verified_on\n\n \n\n-- DROP INDEX ix_cache_occurrences_functional_verified_on;\n\n \n\nCREATE INDEX ix_cache_occurrences_functional_verified_on\n\n ON cache_occurrences_functional\n\n USING btree\n\n (verified_on);\n\n \n\nHere's an example query:\n\n \n\nSELECT o.id\n\n FROM cache_occurrences_functional o\n\n WHERE o.website_id in\n(101,12,24,14,8,6,17,25,11,3,7,30,40,16,27,34,5,43,13,41,29,33,44,32,42,47,5\n4,28,51,49,59,65,68,73,75,9,71,83,87,72,97,69,23,10)\n\nAND o.record_status='C' and o.record_substatus is null and (o.query<>'Q' or\no.query is null)\n\nAND o.taxa_taxon_list_external_key in ('NBNSYS0000008324')\n\nAND o.media_count>0\n\nORDER BY o.id DESC LIMIT 30\n\n \n\nand a link to a query plan:\n\nhttps://explain.depesz.com/s/LuK7\n\n \n\nInterestingly if I deliberately prevent the index being scanned by sorting\nby o.id+0, then I get good performance because the planner uses the column\nindexes to filter first:\n\nSELECT o.id\n\n FROM cache_occurrences_functional o\n\n WHERE o.website_id in\n(101,12,24,14,8,6,17,25,11,3,7,30,40,16,27,34,5,43,13,41,29,33,44,32,42,47,5\n4,28,51,49,59,65,68,73,75,9,71,83,87,72,97,69,23,10)\n\nAND o.record_status='C' and o.record_substatus is null and (o.query<>'Q' or\no.query is null)\n\nAND o.taxa_taxon_list_external_key in ('NBNSYS0000008324')\n\nAND o.media_count>0\n\nORDER BY o.id+0 
DESC LIMIT 30\n\n \n\nThe \"fixed\" plan:\n\nhttps://explain.depesz.com/s/7KAy\n\n \n\nUnfortunately this way of hacking the query to prevent the index scan\nbackward makes other filters with more than 30 records in the results set\nmuch slower so it is not an option.\n\n \n\nAny ideas on indexing strategies or ways of restructuring the database\nschema to cope with this scenario would be much appreciated.\n\n \n\nRegards\n\nJohn\n\n\nI have a single table with 45 columns and 6.5 million records with a roughly random distribution of data of a variety of types. I am trying to implement a data table with pagination in a web user interface, where the data table can be filtered in a very flexible way on pretty much any combination of columns. I have indexes covering the most frequently filtered columns. The data table shows 30 records at a time, sorted to put the most recent records first. The database version is 9.3.5. The problem occurs if I have a filter which results in less than 30 records, or where the 30 records that are returned are distributed through the dataset (it's OK if the page of 30 records are found in the relatively recent records). Basically, because of the ORDER BY id DESC LIMIT 30 the query planner is opting to use an index scan backward on the primary key, applying a filter to each record, until it finds 30 records. If it finds these relatively quickly then all is good. However sometimes the filter results in < 30 records in the final result set, in which case the index scan runs through the whole table and takes several minutes. A better plan in these cases would be to use the indexes available on the other fields to limit the results set, then filter, sort and limit. But, the planner is presumably not able to work this out because the statistics aren't detailed enough.  Here's the table schema: -- Table: cache_occurrences_functional -- DROP TABLE cache_occurrences_functional; CREATE TABLE cache_occurrences_functional(  id integer NOT NULL,  sample_id integer,  website_id integer,  survey_id integer,  input_form character varying,  location_id integer,  location_name character varying,  public_geom geometry(Geometry,900913),  map_sq_1km_id integer,  map_sq_2km_id integer,  map_sq_10km_id integer,  date_start date,  date_end date,  date_type character varying(2),  created_on timestamp without time zone,  updated_on timestamp without time zone,  verified_on timestamp without time zone,  created_by_id integer,  group_id integer,  taxa_taxon_list_id integer,  preferred_taxa_taxon_list_id integer,  taxon_meaning_id integer,  taxa_taxon_list_external_key character varying(50),  family_taxa_taxon_list_id integer,  taxon_group_id integer,  taxon_rank_sort_order integer,  record_status character(1),  record_substatus smallint,  certainty character(1),  query character(1),  sensitive boolean,  release_status character(1),  marine_flag boolean,  data_cleaner_result boolean,  media_count integer DEFAULT 0,  training boolean NOT NULL DEFAULT false,  zero_abundance boolean,  licence_id integer,  location_id_vice_county integer,  location_id_lrc_boundary integer,  location_id_country integer,  identification_difficulty integer, -- Identification difficulty assigned by the data_cleaner module, on a scale from 1 (easy) to 5 (difficult)  import_guid character varying, -- Globally unique identifier of the import batch.  
confidential boolean DEFAULT false,  external_key character varying,  CONSTRAINT pk_cache_occurrences_functional PRIMARY KEY (id))WITH (  OIDS=FALSE);ALTER TABLE cache_occurrences_functional  OWNER TO indicia_user;GRANT ALL ON TABLE cache_occurrences_functional TO indicia_user;GRANT SELECT ON TABLE cache_occurrences_functional TO indicia_report_user;GRANT SELECT ON TABLE cache_occurrences_functional TO naturespot;GRANT SELECT ON TABLE cache_occurrences_functional TO brc_read_only;COMMENT ON COLUMN cache_occurrences_functional.identification_difficulty IS 'Identification difficulty assigned by the data_cleaner module, on a scale from 1 (easy) to 5 (difficult)';COMMENT ON COLUMN cache_occurrences_functional.import_guid IS 'Globally unique identifier of the import batch.';  -- Index: ix_cache_occurrences_functional_created_by_id -- DROP INDEX ix_cache_occurrences_functional_created_by_id; CREATE INDEX ix_cache_occurrences_functional_created_by_id  ON cache_occurrences_functional  USING btree  (created_by_id); -- Index: ix_cache_occurrences_functional_date_end -- DROP INDEX ix_cache_occurrences_functional_date_end; CREATE INDEX ix_cache_occurrences_functional_date_end  ON cache_occurrences_functional  USING btree  (date_end); -- Index: ix_cache_occurrences_functional_date_start -- DROP INDEX ix_cache_occurrences_functional_date_start; CREATE INDEX ix_cache_occurrences_functional_date_start  ON cache_occurrences_functional  USING btree  (date_start); -- Index: ix_cache_occurrences_functional_family_taxa_taxon_list_id -- DROP INDEX ix_cache_occurrences_functional_family_taxa_taxon_list_id; CREATE INDEX ix_cache_occurrences_functional_family_taxa_taxon_list_id  ON cache_occurrences_functional  USING btree  (family_taxa_taxon_list_id); -- Index: ix_cache_occurrences_functional_group_id -- DROP INDEX ix_cache_occurrences_functional_group_id; CREATE INDEX ix_cache_occurrences_functional_group_id  ON cache_occurrences_functional  USING btree  (group_id); -- Index: ix_cache_occurrences_functional_location_id -- DROP INDEX ix_cache_occurrences_functional_location_id; CREATE INDEX ix_cache_occurrences_functional_location_id  ON cache_occurrences_functional  USING btree  (location_id); -- Index: ix_cache_occurrences_functional_location_id_country -- DROP INDEX ix_cache_occurrences_functional_location_id_country; CREATE INDEX ix_cache_occurrences_functional_location_id_country  ON cache_occurrences_functional  USING btree  (location_id_country); -- Index: ix_cache_occurrences_functional_location_id_lrc_boundary -- DROP INDEX ix_cache_occurrences_functional_location_id_lrc_boundary; CREATE INDEX ix_cache_occurrences_functional_location_id_lrc_boundary  ON cache_occurrences_functional  USING btree  (location_id_lrc_boundary); -- Index: ix_cache_occurrences_functional_location_id_vice_county -- DROP INDEX ix_cache_occurrences_functional_location_id_vice_county; CREATE INDEX ix_cache_occurrences_functional_location_id_vice_county  ON cache_occurrences_functional  USING btree  (location_id_vice_county); -- Index: ix_cache_occurrences_functional_map_sq_10km_id -- DROP INDEX ix_cache_occurrences_functional_map_sq_10km_id; CREATE INDEX ix_cache_occurrences_functional_map_sq_10km_id  ON cache_occurrences_functional  USING btree  (map_sq_10km_id); -- Index: ix_cache_occurrences_functional_map_sq_1km_id -- DROP INDEX ix_cache_occurrences_functional_map_sq_1km_id; CREATE INDEX ix_cache_occurrences_functional_map_sq_1km_id  ON cache_occurrences_functional  USING btree  (map_sq_1km_id); -- Index: 
ix_cache_occurrences_functional_map_sq_2km_id -- DROP INDEX ix_cache_occurrences_functional_map_sq_2km_id; CREATE INDEX ix_cache_occurrences_functional_map_sq_2km_id  ON cache_occurrences_functional  USING btree  (map_sq_2km_id); -- Index: ix_cache_occurrences_functional_public_geom -- DROP INDEX ix_cache_occurrences_functional_public_geom; CREATE INDEX ix_cache_occurrences_functional_public_geom  ON cache_occurrences_functional  USING gist  (public_geom); -- Index: ix_cache_occurrences_functional_status -- DROP INDEX ix_cache_occurrences_functional_status; CREATE INDEX ix_cache_occurrences_functional_status  ON cache_occurrences_functional  USING btree  (record_status COLLATE pg_catalog.\"default\", record_substatus); -- Index: ix_cache_occurrences_functional_submission -- DROP INDEX ix_cache_occurrences_functional_submission; CREATE INDEX ix_cache_occurrences_functional_submission  ON cache_occurrences_functional  USING btree  (website_id, survey_id, sample_id);ALTER TABLE cache_occurrences_functional CLUSTER ON ix_cache_occurrences_functional_submission; -- Index: ix_cache_occurrences_functional_taxa_taxon_list_external_key -- DROP INDEX ix_cache_occurrences_functional_taxa_taxon_list_external_key; CREATE INDEX ix_cache_occurrences_functional_taxa_taxon_list_external_key  ON cache_occurrences_functional  USING btree  (taxa_taxon_list_external_key COLLATE pg_catalog.\"default\"); -- Index: ix_cache_occurrences_functional_taxon_group_id -- DROP INDEX ix_cache_occurrences_functional_taxon_group_id; CREATE INDEX ix_cache_occurrences_functional_taxon_group_id  ON cache_occurrences_functional  USING btree  (taxon_group_id); -- Index: ix_cache_occurrences_functional_updated_on -- DROP INDEX ix_cache_occurrences_functional_updated_on; CREATE INDEX ix_cache_occurrences_functional_updated_on  ON cache_occurrences_functional  USING btree  (updated_on); -- Index: ix_cache_occurrences_functional_verified_on -- DROP INDEX ix_cache_occurrences_functional_verified_on; CREATE INDEX ix_cache_occurrences_functional_verified_on  ON cache_occurrences_functional  USING btree  (verified_on); Here's an example query: SELECT o.id  FROM cache_occurrences_functional o  WHERE o.website_id in (101,12,24,14,8,6,17,25,11,3,7,30,40,16,27,34,5,43,13,41,29,33,44,32,42,47,54,28,51,49,59,65,68,73,75,9,71,83,87,72,97,69,23,10)AND o.record_status='C' and o.record_substatus is null and (o.query<>'Q' or o.query is null)AND o.taxa_taxon_list_external_key in ('NBNSYS0000008324')AND o.media_count>0 ORDER BY o.id DESC LIMIT 30 and a link to a query plan:https://explain.depesz.com/s/LuK7 Interestingly if I deliberately prevent the index being scanned by sorting by o.id+0, then I get good performance because the planner uses the column indexes to filter first:SELECT o.id  FROM cache_occurrences_functional o  WHERE o.website_id in (101,12,24,14,8,6,17,25,11,3,7,30,40,16,27,34,5,43,13,41,29,33,44,32,42,47,54,28,51,49,59,65,68,73,75,9,71,83,87,72,97,69,23,10)AND o.record_status='C' and o.record_substatus is null and (o.query<>'Q' or o.query is null)AND o.taxa_taxon_list_external_key in ('NBNSYS0000008324')AND o.media_count>0 ORDER BY o.id+0 DESC LIMIT 30 The \"fixed\" plan:https://explain.depesz.com/s/7KAy Unfortunately this way of hacking the query to prevent the index scan backward makes other filters with more than 30 records in the results set much slower so it is not an option. Any ideas on indexing strategies or ways of restructuring the database schema to cope with this scenario would be much appreciated. 
RegardsJohn", "msg_date": "Mon, 5 Mar 2018 15:45:09 -0000", "msg_from": "\"John van Breda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow index scan backward." } ]
[ { "msg_contents": "Hi List,\n\nI have a short description bellow from Dev team regarding the behaviour \nof gist index on the polygon column, looking to get some  feedback  from \nyou:\n\n\".... I was expecting the <@(point,polygon) and @>(polygon,point) to be \nindexable but they are not. see bellow query output ,\nthe column is a polygon and the index is a gist index on the polygon \ncolumn; my understanding of the above query is that it says which \noperators would cause that index to be used\n\nThis SQL shows which operators are indexable:SELECT\n  pg_get_indexdef(ss.indexrelid, (ss.iopc).n, TRUE) AS index_col,\n  amop.amopopr::regoperator AS indexable_operator\nFROM pg_opclass opc, pg_amop amop,\n  (SELECT indexrelid, information_schema._pg_expandarray(indclass) AS iopc\n   FROM pg_index\n   WHERE indexrelid = 'caom2.Plane_energy_ib'::regclass) ss\nWHERE amop.amopfamily = opc.opcfamily AND opc.oid = (ss.iopc).x\nORDER BY (ss.iopc).n, indexable_operator;\n\nWe run  the SQL  in PG 9.5.3 and PG 10.2 we  the same result: only \npolygon vs polygon is indexable (except the last entry which is distance \noperator).\n\nThe work around for us was to change interval-contains-value from \npolygon-contains-point (@> or <@ operator) to \npolygn-intersects-really-small-polygon (&&) in order to use the index, \nbut I was quite surprised that contains operators are not indexable!\n\nNote that this is using the built in polygon and not pgsphere (spoly)\"\n\n\nthank you\n\nIsabella\n\n\n\n\n\n\n\n\n\nHi List,\n\nI have a short description bellow from Dev team regarding the\n behaviour of gist index on the polygon column, looking to get\n some  feedback  from you: \n\n\".... I was expecting the\n <@(point,polygon) and @>(polygon,point) to be indexable\n but they are not. see bellow query output , \n the column is a polygon and the index is a gist index on the\n polygon column; my understanding of the above query is that it\n says which operators would cause that index to be used\n\nThis SQL shows which operators are\n indexable:SELECT\n  pg_get_indexdef(ss.indexrelid, (ss.iopc).n, TRUE) AS index_col,\n  amop.amopopr::regoperator AS indexable_operator\n FROM pg_opclass opc, pg_amop amop,\n  (SELECT indexrelid,\n information_schema._pg_expandarray(indclass) AS iopc\n   FROM pg_index\n   WHERE indexrelid = 'caom2.Plane_energy_ib'::regclass) ss\n WHERE amop.amopfamily = opc.opcfamily AND opc.oid = (ss.iopc).x\n ORDER BY (ss.iopc).n, indexable_operator;\n We\n run  the SQL  in PG 9.5.3 and PG 10.2 we  the same result: only\n polygon vs polygon is indexable (except the last entry which is\n distance operator). \n\nThe work around for us was to change\n interval-contains-value from polygon-contains-point (@> or\n <@ operator) to polygn-intersects-really-small-polygon\n (&&) in order to use the index, but I was quite\n surprised that contains operators are not indexable!\n\n Note that this is using the built in\n polygon and not pgsphere (spoly)\"\n\n\nthank you\nIsabella", "msg_date": "Mon, 5 Mar 2018 08:18:22 -0800", "msg_from": "ghiureai <[email protected]>", "msg_from_op": true, "msg_subject": "GIST index (polygon, point)" }, { "msg_contents": "ghiureai wrote:\n> I have a short description bellow from Dev team regarding the behaviour of gist index on the polygon column, looking to get some feedback from you:\n>\n> \".... I was expecting the <@(point,polygon) and @>(polygon,point) to be indexable but they are not. 
see bellow query output ,\n> the column is a polygon and the index is a gist index on the polygon column; my understanding of the above query is that it says which operators would cause that index to be used\n>\n> This SQL shows which operators are indexable:SELECT\n> pg_get_indexdef(ss.indexrelid, (ss.iopc).n, TRUE) AS index_col,\n> amop.amopopr::regoperator AS indexable_operator\n> FROM pg_opclass opc, pg_amop amop,\n> (SELECT indexrelid, information_schema._pg_expandarray(indclass) AS iopc\n> FROM pg_index\n> WHERE indexrelid = 'caom2.Plane_energy_ib'::regclass) ss\n> WHERE amop.amopfamily = opc.opcfamily AND opc.oid = (ss.iopc).x\n> ORDER BY (ss.iopc).n, indexable_operator;\n>\n> We run the SQL in PG 9.5.3 and PG 10.2 we the same result: only polygon vs polygon is indexable (except the last entry which is distance operator).\n> The work around for us was to change interval-contains-value from polygon-contains-point (@> or <@ operator) to\n> polygn-intersects-really-small-polygon (&&) in order to use the index, but I was quite surprised that contains operators are not indexable!\n> Note that this is using the built in polygon and not pgsphere (spoly)\"\n\nThat sounds about right.\n\nYou could use a single-point polygon like '((1,1))'::polygon\nand the <@ or && operator.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Tue, 06 Mar 2018 10:59:04 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GIST index (polygon, point)" } ]
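A minimal sketch of the workaround described in the reply above, using placeholder table and column names rather than the caom2 schema from the thread: wrapping the point in a one-vertex polygon keeps the predicate expressed with polygon-versus-polygon operators, which are the ones bound to the GiST operator class.

SELECT *
FROM   plane_energy t                           -- hypothetical table with a gist index on poly_col
WHERE  t.poly_col && '((1.5,2.5))'::polygon;    -- overlap with a degenerate one-point polygon
-- or, keeping containment semantics:
-- WHERE t.poly_col @> '((1.5,2.5))'::polygon;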
[ { "msg_contents": "Hi Team,\n\nby mistake one physical file dropped for one of our table, as we do-not\nhave backup for this table we are getting below error.\n\nERROR: could not open file \"base/12669/16394\": No such file or directory\n\n\nplease help us to recover the table.\n\n\nRegards,\n\nRambabu Vakada,\n\nPostgreSQL DBA.\n\nHi Team, by mistake one physical file dropped for one of our table, as we do-not have backup for this table we are getting below error.\n\nERROR:  could not open file \"base/12669/16394\": No such file or directoryplease help us to recover the table.Regards,Rambabu Vakada,PostgreSQL DBA.", "msg_date": "Tue, 6 Mar 2018 17:05:43 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "by mistake dropped physical file dropped for one table." }, { "msg_contents": "Greetings,\n\n* Rambabu V ([email protected]) wrote:\n> by mistake one physical file dropped for one of our table, as we do-not\n> have backup for this table we are getting below error.\n> \n> ERROR: could not open file \"base/12669/16394\": No such file or directory\n> \n> please help us to recover the table.\n\nYou're not likely able to recover that table. To do so would require\ncompletely stopping the system immediately and attempting to perform\nfilesystem maniuplation to \"undelete\" the file, or pull back chunks from\nthe filesystem which contain pieces of the file and attempting to\nreconstruct it.\n\nIf you've been keeping all WAL since the beginning of the cluster, it's\npossible you could recover that way, but you claim to not have any\nbackups, so I'm guessing that's pretty unlikely.\n\nThanks!\n\nStephen", "msg_date": "Tue, 6 Mar 2018 08:21:24 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: by mistake dropped physical file dropped for one table." }, { "msg_contents": "Ok, thanks.\n\nOn Mar 6, 2018 6:51 PM, \"Stephen Frost\" <[email protected]> wrote:\n\n> Greetings,\n>\n> * Rambabu V ([email protected]) wrote:\n> > by mistake one physical file dropped for one of our table, as we do-not\n> > have backup for this table we are getting below error.\n> >\n> > ERROR: could not open file \"base/12669/16394\": No such file or directory\n> >\n> > please help us to recover the table.\n>\n> You're not likely able to recover that table. To do so would require\n> completely stopping the system immediately and attempting to perform\n> filesystem maniuplation to \"undelete\" the file, or pull back chunks from\n> the filesystem which contain pieces of the file and attempting to\n> reconstruct it.\n>\n> If you've been keeping all WAL since the beginning of the cluster, it's\n> possible you could recover that way, but you claim to not have any\n> backups, so I'm guessing that's pretty unlikely.\n>\n> Thanks!\n>\n> Stephen\n>\n\nOk, thanks. On Mar 6, 2018 6:51 PM, \"Stephen Frost\" <[email protected]> wrote:Greetings,\n\n* Rambabu V ([email protected]) wrote:\n> by mistake one physical file dropped for one of our table, as we do-not\n> have backup for this table we are getting below error.\n>\n> ERROR:  could not open file \"base/12669/16394\": No such file or directory\n>\n> please help us to recover the table.\n\nYou're not likely able to recover that table.  
To do so would require\ncompletely stopping the system immediately and attempting to perform\nfilesystem maniuplation to \"undelete\" the file, or pull back chunks from\nthe filesystem which contain pieces of the file and attempting to\nreconstruct it.\n\nIf you've been keeping all WAL since the beginning of the cluster, it's\npossible you could recover that way, but you claim to not have any\nbackups, so I'm guessing that's pretty unlikely.\n\nThanks!\n\nStephen", "msg_date": "Tue, 6 Mar 2018 18:57:18 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "Re: by mistake dropped physical file dropped for one table." } ]
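For reference, a small sketch (not part of the thread) of how the file name in that error maps back to a relation: the last path component of base/12669/16394 is the relfilenode, and the lookup must be run while connected to the database whose OID is 12669. The table name in the second query is a placeholder.

SELECT relname, relkind
FROM   pg_class
WHERE  relfilenode = 16394;

-- and the reverse direction, for a table that still has its storage:
SELECT pg_relation_filepath('some_table');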
[ { "msg_contents": "Hi Team,\n\nPlease help us to get the query log details from meta data table/command in\npostgresql. aw we are not maintaining log files more than 2 days due to\nlack of space.\n\n\nAnd also please provide document or sop for database upgrade from 9.3 to\n9.6, as our database size was 4.5 tb and having table spaces as well. as it\nwas production database system we do-not want to take any risk, please help\nus on this as well.\n\n\nRegards,\n\nRambabu Vakada,\nPostgreSQL DBA.\n\nHi Team,Please help us to get the query log details from meta data table/command in postgresql. aw we are not maintaining log files more than 2 days due to lack of space.And also please provide document or sop for database upgrade from 9.3 to 9.6, as our database size was 4.5 tb and having table spaces as well. as it was production database system we do-not want to take any risk, please help us on this as well.Regards,Rambabu Vakada,PostgreSQL DBA.", "msg_date": "Tue, 6 Mar 2018 17:08:24 +0530", "msg_from": "Rambabu V <[email protected]>", "msg_from_op": true, "msg_subject": "need meta data table/command to find query log" }, { "msg_contents": "Greetings,\n\nThese questions are not appropriate for the 'performance' mailing list\nbut should be either on 'admin' or 'general'. Please use the\nappropriate list for asking questions in the future.\n\n* Rambabu V ([email protected]) wrote:\n> Please help us to get the query log details from meta data table/command in\n> postgresql. aw we are not maintaining log files more than 2 days due to\n> lack of space.\n\nIt's entirely unclear what you are asking for here when you say \"meta\ndata.\" Information about tables is stored in the system catalog,\nparticularly the \"pg_class\" and \"pg_attribute\" tables, but that's\nindependent from the WAL. To read the WAL files, you can use pg_waldump\n(or pg_xlogdump on older versions), though that's not 'meta' data.\n\n> And also please provide document or sop for database upgrade from 9.3 to\n> 9.6, as our database size was 4.5 tb and having table spaces as well. as it\n> was production database system we do-not want to take any risk, please help\n> us on this as well.\n\nYou'll likely want to use pg_upgrade to perform such an upgrade:\n\nhttps://www.postgresql.org/docs/10/static/pgupgrade.html\n\nThanks!\n\nStephen", "msg_date": "Tue, 6 Mar 2018 08:24:25 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need meta data table/command to find query log" } ]
[ { "msg_contents": "\nDear some consultation, I have a base of about 750 GB in size and we are\nhaving problem of slowness in certain views of the application, so I have\nbeen seeing it is apparently a memory problem because if I run again the\nview runs fast, the base is in a virtual server with 24 GB of RAM and 8 GB\nof shared buffer, with this information how much would you recommend to put\na memory in the server\nthank you very much\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 11 Mar 2018 05:48:41 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Memory size" }, { "msg_contents": "På søndag 11. mars 2018 kl. 13:48:41, skrev dangal <[email protected] \n<mailto:[email protected]>>:\n\n Dear some consultation, I have a base of about 750 GB in size and we are\n having problem of slowness in certain views of the application, so I have\n been seeing it is apparently a memory problem because if I run again the\n view runs fast, the base is in a virtual server with 24 GB of RAM and 8 GB\n of shared buffer, with this information how much would you recommend to put\n a memory in the server\n thank you very much\n \nWhat is effective_cache_size ?\n \n\nhttps://www.postgresql.org/docs/10/static/runtime-config-query.html#GUC-EFFECTIVE-CACHE-SIZE\n \n-- Andreas Joseph Krogh", "msg_date": "Sun, 11 Mar 2018 14:49:01 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Memory size" }, { "msg_contents": "The rest of the memory Andreas, 16 gb\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 11 Mar 2018 06:57:52 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sv: Memory size" }, { "msg_contents": "On Sun, Mar 11, 2018 at 5:48 AM, dangal <[email protected]> wrote:\n\n>\n> Dear some consultation, I have a base of about 750 GB in size and we are\n> having problem of slowness in certain views of the application, so I have\n> been seeing it is apparently a memory problem because if I run again the\n> view runs fast, the base is in a virtual server with 24 GB of RAM and 8 GB\n> of shared buffer, with this information how much would you recommend to put\n> a memory in the server\n>\n\nThere is no way to answer that with the information you provide.\n\nAre the \"certain views\" run with different supplied parameters on different\nexecutions, or are they run with no parameters or unchanging ones?\n\nHow long can you wait between the first run and the second run before the\nsecond run is no longer fast?\n\nCheers,\n\nJeff\n\nOn Sun, Mar 11, 2018 at 5:48 AM, dangal <[email protected]> wrote:\nDear some consultation, I have a base of about 750 GB in size and we are\nhaving problem of slowness in certain views of the application, so I have\nbeen seeing it is apparently a memory problem because if I run again the\nview runs fast, the base is in a virtual server with 24 GB of RAM and 8 GB\nof shared buffer, with this information how much would you recommend to put\na memory in the server There is no way to answer that with the information you provide.  
Are the \"certain views\" run with different supplied parameters on different executions, or are they run with no parameters or unchanging ones?How long can you wait between the first run and the second run before the second run is no longer fast?Cheers,Jeff", "msg_date": "Sun, 11 Mar 2018 09:59:09 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory size" }, { "msg_contents": "jeff thank you very much for your time, I tell you, they are the same queries\nwith the same parameters, I take 3 minutes for example, but I execute it and\nit takes me seconds, that's why I suspect it is the shared buffer\nThe server had 16 GB and we increased it to 24, but I really do not know if\nit should continue to increase since they are not our own resources, we have\nto ask for them and justify them\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 11 Mar 2018 10:33:42 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory size" }, { "msg_contents": "On 03/11/2018 06:33 PM, dangal wrote:\n> jeff thank you very much for your time, I tell you, they are the same queries\n> with the same parameters, I take 3 minutes for example, but I execute it and\n> it takes me seconds, that's why I suspect it is the shared buffer\n> The server had 16 GB and we increased it to 24, but I really do not know if\n> it should continue to increase since they are not our own resources, we have\n> to ask for them and justify them\n> \n\nIt's not very clear if your question is about shared_buffers or amount\nof RAM in general. In any case, it looks like the performance difference\nis due to having to do I/O on the first execution, while the second\nexecution gets served from RAM. If that's the case, increasing shared\nbuffers is not going to help, in fact it's going to make matters worse\n(due to double buffering etc.).\n\nYou should be able to confirm this by analyzing system metrics,\nparticularly I/O and CPU time. There should be a lot of I/O during the\nfirst execution, and almost nothing during the second one.\n\nSo it seems you need to add more RAM, but it's unclear how much because\nwe don't know what part of the data is regularly accessed (I really\ndoubt it's the whole 750GB). That is something you have to determine by\nanalyzing your workload. All we know is data needed by this query likely\nfit into RAM, but then get pushed out by other queries after a while.\n\nAn alternative would be to use better storage system, although that will\nnot give you the same performance, of course.\n\nFWIW it's also possible something is going wrong at the hypervisor level\n(e.g. contention for storage cache used by multiple VMs). It's hard to\nsay, considering you haven't even shared an explain analyze of the\nqueries. 
Try EXPLAIN (ANALYZE, BUFFERS) both for the slow and fast\nexecutions, and show us the results.\n\nFWIW you might also read this first:\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Sun, 11 Mar 2018 19:12:36 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory size" }, { "msg_contents": "thank you very much Tomas, tomorrow at work I will see to capture plans of\nejcucion to see if you can give me a hand, I am really helping me a lot with\ntheir advice\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 11 Mar 2018 12:11:18 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory size" }, { "msg_contents": "På søndag 11. mars 2018 kl. 14:57:52, skrev dangal <[email protected] \n<mailto:[email protected]>>:\nThe rest of the memory Andreas, 16 gb\n \nThen I'd blame it on the virtual environment. It's common at least in the \nVMWare-world to have a 8GB disk-cache and reads going beond that are slow. \nYou've not told us anything about table/index-size but I believe reading those \nfrom disk is the culprit here.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Sun, 11 Mar 2018 22:45:50 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Re: Sv: Memory size" }, { "msg_contents": "On Sun, Mar 11, 2018 at 10:33 AM, dangal <[email protected]> wrote:\n\n> jeff thank you very much for your time, I tell you, they are the same\n> queries\n> with the same parameters, I take 3 minutes for example, but I execute it\n> and\n> it takes me seconds, that's why I suspect it is the shared buffer\n> The server had 16 GB and we increased it to 24, but I really do not know if\n> it should continue to increase since they are not our own resources, we\n> have\n> to ask for them and justify them\n>\n\nIf that is the only query that you have trouble with, it might be easiest\njust to set up a cron job to run it periodically just to keep that data set\nin cache. Not very elegant, but it can be effective.\n\nCheers,\n\nJeff\n\nOn Sun, Mar 11, 2018 at 10:33 AM, dangal <[email protected]> wrote:jeff thank you very much for your time, I tell you, they are the same queries\nwith the same parameters, I take 3 minutes for example, but I execute it and\nit takes me seconds, that's why I suspect it is the shared buffer\nThe server had 16 GB and we increased it to 24, but I really do not know if\nit should continue to increase since they are not our own resources, we have\nto ask for them and justify themIf that is the only query that you have trouble with, it might be easiest just to set up a cron job to run it periodically just to keep that data set in cache.  Not very elegant, but it can be effective. 
Cheers,Jeff", "msg_date": "Sun, 11 Mar 2018 16:16:09 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory size" }, { "msg_contents": "With several views, Jeff is following us\nTomorrow I will see if I can provide more data to see if you can guide me a\nbit\nThank you so much everyone\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 11 Mar 2018 18:43:06 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory size" }, { "msg_contents": "\nI was seeing thanks to your recommendations and I found the following, to\nsee what you think\n\ncache hit rate 0.99637443599712620769 \n\n\nWe have the default values 5 minutes\n\ntotal checkpoint minutes beetween checkpoint \n26927 0.358545045634493\n\ntemp_files temp_size (in 10 days)\n16870 171 GB\n\nbelieve that the problem may come here?\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 12 Mar 2018 08:08:18 -0700 (MST)", "msg_from": "dangal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory size" } ]
[ { "msg_contents": "Hi team,\n\nMy postgre version is 9.4.9, and I face a space issue.\n\nEvery time I restart postgre server, it generates a new history file:\n0000156A.history => 0000156B.history\n\nNow it takes a lot of space about 800MB (5787 history file):\n-rw------- 1 pgsql pgsql 247K Mar 13 14:49 00001568.history\n-rw------- 1 pgsql pgsql 247K Mar 13 14:51 00001569.history\n-rw------- 1 pgsql pgsql 247K Mar 13 14:52 0000156A.history\n-rw------- 1 pgsql pgsql 247K Mar 13 15:03 0000156B.history\n-rw------- 1 pgsql pgsql 247K Mar 13 15:56 0000156C.history\n-rw------- 1 pgsql pgsql 247K Mar 13 16:06 0000156D.history\n-rw------- 1 pgsql pgsql 247K Mar 13 16:06 0000156E.history\n-rw------- 1 pgsql pgsql 248K Mar 13 17:13 0000156F.history\n-rw------- 1 pgsql pgsql 16M Mar 13 17:13 0000156F00000024000000F2\n-rw------- 1 pgsql pgsql 248K Mar 13 17:13 00001570.history\n-rw------- 1 pgsql pgsql 16M Mar 14 10:11 0000157000000024000000F2\n\n\nIs file 00001570.history important?\n\nWhen I do diff to these file I found new file just have a new line compared\nto previous history file :\ncommand:\n diff 0000156F.history 00001570.history\nresult\n 10971a10972,10973\n >\n > 5487 24/F2000090 reached consistency\n\n\n\nIs there a safety way to clean .history file?\n\n1. Just save latest one?\n2. Or remove history file which is marked as done\nin pg_xlog/archive_status like:\n-rw------- 1 pgsql pgsql 0 Mar 13 14:52 0000156A.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 15:03 0000156B.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 15:56 0000156C.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 16:06 0000156D.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 16:06 0000156E.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 17:13 0000156F.history.done\n-rw------- 1 pgsql pgsql 0 Mar 13 17:13 0000156F00000024000000F2.done\n-rw------- 1 pgsql pgsql 0 Mar 13 17:13 00001570.history.done\n\n\nThanks.\n\nBy Pin\n\nHi team,My postgre version is 9.4.9, and I face a space issue.Every time I restart postgre server, it generates a new history file:0000156A.history =>  0000156B.historyNow it takes a lot of space about 800MB (5787 history file):-rw-------  1 pgsql  pgsql   247K Mar 13 14:49 00001568.history-rw-------  1 pgsql  pgsql   247K Mar 13 14:51 00001569.history-rw-------  1 pgsql  pgsql   247K Mar 13 14:52 0000156A.history-rw-------  1 pgsql  pgsql   247K Mar 13 15:03 0000156B.history-rw-------  1 pgsql  pgsql   247K Mar 13 15:56 0000156C.history-rw-------  1 pgsql  pgsql   247K Mar 13 16:06 0000156D.history-rw-------  1 pgsql  pgsql   247K Mar 13 16:06 0000156E.history-rw-------  1 pgsql  pgsql   248K Mar 13 17:13 0000156F.history-rw-------  1 pgsql  pgsql    16M Mar 13 17:13 0000156F00000024000000F2-rw-------  1 pgsql  pgsql   248K Mar 13 17:13 00001570.history-rw-------  1 pgsql  pgsql    16M Mar 14 10:11 0000157000000024000000F2Is  file 00001570.history important?When I do diff to these file I found new file just have a new line compared to previous history file :command:    diff 0000156F.history 00001570.historyresult    10971a10972,10973    >     > 5487  24/F2000090     reached consistencyIs there a safety way to clean .history file? 1. Just save latest one?2. 
Or  remove  history file which is marked as done in pg_xlog/archive_status like:-rw-------  1 pgsql  pgsql  0 Mar 13 14:52 0000156A.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 15:03 0000156B.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 15:56 0000156C.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 16:06 0000156D.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 16:06 0000156E.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 17:13 0000156F.history.done-rw-------  1 pgsql  pgsql  0 Mar 13 17:13 0000156F00000024000000F2.done-rw-------  1 pgsql  pgsql  0 Mar 13 17:13 00001570.history.doneThanks.By Pin", "msg_date": "Wed, 14 Mar 2018 10:31:19 +0800", "msg_from": "=?UTF-8?B?5b2t5pix5YKR?= <[email protected]>", "msg_from_op": true, "msg_subject": "Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "彭昱傑 wrote:\n> My postgre version is 9.4.9, and I face a space issue.\n> \n> Every time I restart postgre server, it generates a new history file:\n> 0000156A.history => 0000156B.history\n> \n> Now it takes a lot of space about 800MB (5787 history file):\n> [...]\n> Is file 00001570.history important?\n\nA new history file is created when a new timeline is opened,\nwhich happens after point-in-time-recovery or promotion of\na physical standby server.\n\nThere must be something weird in the way you start PostgreSQL.\nExamine the start script, maybe you can fix the problem.\n\nThese files are only necessary for point-in-time-recovery,\nso you don't have to retain them any longer than you retain\nyour WAL archives.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Wed, 14 Mar 2018 06:56:52 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "Thank you.\n\nIt's useful information for me.\nI will examine my restart script, and study point-in-time-recovery.\nAlso remove unused history file.\n\n2018-03-14 13:56 GMT+08:00 Laurenz Albe <[email protected]>:\n\n> 彭昱傑 wrote:\n> > My postgre version is 9.4.9, and I face a space issue.\n> >\n> > Every time I restart postgre server, it generates a new history file:\n> > 0000156A.history => 0000156B.history\n> >\n> > Now it takes a lot of space about 800MB (5787 history file):\n> > [...]\n> > Is file 00001570.history important?\n>\n> A new history file is created when a new timeline is opened,\n> which happens after point-in-time-recovery or promotion of\n> a physical standby server.\n>\n> There must be something weird in the way you start PostgreSQL.\n> Examine the start script, maybe you can fix the problem.\n>\n> These files are only necessary for point-in-time-recovery,\n> so you don't have to retain them any longer than you retain\n> your WAL archives.\n>\n> Yours,\n> Laurenz Albe\n> --\n> Cybertec | https://www.cybertec-postgresql.com\n>\n\nThank you.It's useful information for me.I will  examine my restart script, and study \n\npoint-in-time-recovery.Also remove unused history file.2018-03-14 13:56 GMT+08:00 Laurenz Albe <[email protected]>:彭昱傑 wrote:\n> My postgre version is 9.4.9, and I face a space issue.\n>\n> Every time I restart postgre server, it generates a new history file:\n> 0000156A.history =>  0000156B.history\n>\n> Now it takes a lot of space about 800MB (5787 history file):\n> [...]\n> Is  file 00001570.history important?\n\nA new history file is created when a new timeline is opened,\nwhich happens after point-in-time-recovery or promotion of\na 
physical standby server.\n\nThere must be something weird in the way you start PostgreSQL.\nExamine the start script, maybe you can fix the problem.\n\nThese files are only necessary for point-in-time-recovery,\nso you don't have to retain them any longer than you retain\nyour WAL archives.\n\nYours,\nLaurenz Albe\n--\nCybertec | https://www.cybertec-postgresql.com", "msg_date": "Wed, 14 Mar 2018 14:12:47 +0800", "msg_from": "=?UTF-8?B?5b2t5pix5YKR?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "On Wed, Mar 14, 2018 at 02:12:47PM +0800, 彭昱傑 wrote:\n> It's useful information for me.\n\nOnce archived, there is no need to keep them in the data folder as if\nneeded at recovery the startup process would look for timeline history\nfiles where necessary if it needs to do a timeline jump.\n\n> I will examine my restart script, and study point-in-time-recovery.\n> Also remove unused history file.\n\nAt the same time, the backend makes little effort to remove past\ntimeline history files, and those are just a couple of bytes, which\naccumulate, so after a couple of hundreds of failovers you could bloat\nthe data folder. Why not making their removal more aggressive at each\nrestart point created? You don't need any history files older than the\ncurrent timeline recovery is processing, so we could make the removal\npolicy more aggressive.\n--\nMichael", "msg_date": "Wed, 14 Mar 2018 16:04:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "彭昱傑 wrote:\n\n> My postgre version is 9.4.9, and I face a space issue.\n\nLatest in 9.4 is 9.4.17, so you're missing about two years of bug fixes.\n\n> Every time I restart postgre server, it generates a new history file:\n\nThat's strange -- it shouldn't happen ... sounds like you're causing a\ncrash each time you restart. Are you using immediate mode in shutdown\nmaybe? If so, don't; use fast mode instead.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 14 Mar 2018 10:49:58 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> 彭昱傑 wrote:\n>> Every time I restart postgre server, it generates a new history file:\n\n> That's strange -- it shouldn't happen ... sounds like you're causing a\n> crash each time you restart. Are you using immediate mode in shutdown\n> maybe? If so, don't; use fast mode instead.\n\nI'm confused by this report too. Plain crashes shouldn't result in\nforking a new timeline. To check, I tried \"-m immediate\", as well as\n\"kill -9 postmaster\", and neither of those resulted in a new .history file\non restart. 
I wonder if the OP's restart process involves calling\npg_resetxlog or something like that (which would be risky as heck).\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 14 Mar 2018 11:29:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" }, { "msg_contents": "Hi Michael, Alvaro, Tom:\n\nReally appreciate yours help, this is an invalid report, and I'm sorry for\nthat.\n\nAfter I examine restart script, I found we generate recovery.conf every\ntime, and this cause lost of timeline.\n\nThanks.\n\n2018-03-14 23:29 GMT+08:00 Tom Lane <[email protected]>:\n\n> Alvaro Herrera <[email protected]> writes:\n> > 彭昱傑 wrote:\n> >> Every time I restart postgre server, it generates a new history file:\n>\n> > That's strange -- it shouldn't happen ... sounds like you're causing a\n> > crash each time you restart. Are you using immediate mode in shutdown\n> > maybe? If so, don't; use fast mode instead.\n>\n> I'm confused by this report too. Plain crashes shouldn't result in\n> forking a new timeline. To check, I tried \"-m immediate\", as well as\n> \"kill -9 postmaster\", and neither of those resulted in a new .history file\n> on restart. I wonder if the OP's restart process involves calling\n> pg_resetxlog or something like that (which would be risky as heck).\n>\n> regards, tom lane\n>\n\nHi \n\nMichael, Alvaro, Tom:Really appreciate yours help, this is an invalid report, and I'm sorry for that.After I  examine restart script, I found we generate recovery.conf every time, and this cause lost of timeline.Thanks.2018-03-14 23:29 GMT+08:00 Tom Lane <[email protected]>:Alvaro Herrera <[email protected]> writes:\n> 彭昱傑 wrote:\n>> Every time I restart postgre server, it generates a new history file:\n\n> That's strange -- it shouldn't happen ... sounds like you're causing a\n> crash each time you restart.  Are you using immediate mode in shutdown\n> maybe?  If so, don't; use fast mode instead.\n\nI'm confused by this report too.  Plain crashes shouldn't result in\nforking a new timeline.  To check, I tried \"-m immediate\", as well as\n\"kill -9 postmaster\", and neither of those resulted in a new .history file\non restart.  I wonder if the OP's restart process involves calling\npg_resetxlog or something like that (which would be risky as heck).\n\n                        regards, tom lane", "msg_date": "Thu, 15 Mar 2018 18:22:18 +0800", "msg_from": "=?UTF-8?B?5b2t5pix5YKR?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Too many .history file in pg_xlog takes lots of space" } ]
[ { "msg_contents": "Here's a weird one I can't figure out: the definitions of several columns\nof a view, which are not used in a query at all, have a massive effect on\nthe query planner, causing it to choose a seqscan over the largest table in\nour database when it should be using the primary key for the join.\nBackground: We've redesigned the tables that hold our primary data, but\nneed to create views that mimic the old design so that our applications\nwill continue working. The largest table (\"chemaxon.sdf\") holds the bulk of\nthe data, and we've changed it from raw text to gzipped bytea. To mimic the\nold schema, I created a short Perl function that does a simple gunzip\noperation, and used that in the definition of the view \"str_conntab\". (This\ngzip reduces our total database size to about a third of the original --\nit's very effective).\n\nHere are two query plans. The first is horrible. For the second, I removed\nthe gunzip functions and replaced them with constant values. But notice\nthat these pseudo columns are not used anywhere in the query. (Even if they\nwere, I don't understand why this should affect the planner.)\n\nThe tables VERSION and VERSION_PROPERTIES are also views; I've included\ntheir definitions and the underlying actual tables below.\n\nPostgres 9.6.7 running on Ubuntu 16.04.\n\nemolecules=> drop view str_conntab;\nDROP VIEW\nemolecules=> create view str_conntab as\nemolecules-> (select\nemolecules(> id,\nemolecules(> length(gunzip(sdf_gzip)) as contab_len,\nemolecules(> gunzip(sdf_gzip) as contab_data,\nemolecules(> ''::text as normalized\nemolecules(> from chemaxon.sdf);\nCREATE VIEW\n\nemolecules=> explain analyze\nselect VERSION.VERSION_ID, VERSION.ISOSMILES,\nVERSION_PROPERTIES.MOLECULAR_WEIGHT, VERSION_PROPERTIES.MOLECULAR_FORMULA\n from VERSION\n join VERSION_PROPERTIES on (VERSION.VERSION_ID =\nVERSION_PROPERTIES.VERSION_ID)\n join STR_CONNTAB on (VERSION.VERSION_ID = STR_CONNTAB.ID)\n where VERSION.VERSION_ID in\n(1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884);\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=62.99..162425.77 rows=5 width=60) (actual\ntime=34.718..152828.351 rows=10 loops=1)\n Join Filter: (s.id = p_1.id)\n -> Nested Loop (cost=62.56..162422.84 rows=6 width=55) (actual\ntime=34.701..152828.289 rows=10 loops=1)\n Join Filter: (s.id = parent.id)\n -> Nested Loop (cost=62.14..162419.48 rows=7 width=51) (actual\ntime=34.694..152828.250 rows=10 loops=1)\n Join Filter: (s.id = p.id)\n -> Hash Join (cost=61.72..162415.16 rows=9 width=47)\n(actual time=34.663..152828.110 rows=10 loops=1)\n Hash Cond: (sdf.id = s.id)\n -> Seq Scan on sdf (cost=0.00..158488.50 rows=281080\nwidth=72) (actual time=33.623..152630.514 rows=281080 loops=1)\n -> Hash (cost=61.59..61.59 rows=10 width=43) (actual\ntime=0.028..0.028 rows=10 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Index Scan using smiles_pkey on smiles s\n(cost=0.42..61.59 rows=10 width=43) (actual time=0.010..0.022 rows=10\nloops=1)\n Index Cond: (id = ANY\n('{1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884}'::integer[]))\n -> Index Only Scan using parent_pkey on parent p\n(cost=0.42..0.47 rows=1 width=4) (actual time=0.011..0.011 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Heap Fetches: 10\n -> Index Only Scan using parent_pkey on parent (cost=0.42..0.47\nrows=1 width=4) 
(actual time=0.002..0.002 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Heap Fetches: 10\n -> Index Scan using properties_pkey on properties p_1 (cost=0.42..0.48\nrows=1 width=21) (actual time=0.003..0.004 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Planning time: 1.330 ms\n Execution time: 152828.506 ms\n(23 rows)\n\nemolecules=> drop view str_conntab;\nDROP VIEW\nemolecules=> create view str_conntab as\nemolecules-> (select\nemolecules(> id,\nemolecules(> 0::integer contab_len,\nemolecules(> null::text as contab_data,\nemolecules(> ''::text as normalized\nemolecules(> from chemaxon.sdf);\nCREATE VIEW\nemolecules=> explain analyze\nselect VERSION.VERSION_ID, VERSION.ISOSMILES,\nVERSION_PROPERTIES.MOLECULAR_WEIGHT, VERSION_PROPERTIES.MOLECULAR_FORMULA\n from VERSION\n join VERSION_PROPERTIES on (VERSION.VERSION_ID =\nVERSION_PROPERTIES.VERSION_ID)\n join STR_CONNTAB on (VERSION.VERSION_ID = STR_CONNTAB.ID)\n where VERSION.VERSION_ID in\n(1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884);\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.11..156.71 rows=5 width=60) (actual time=0.018..0.096\nrows=10 loops=1)\n Join Filter: (s.id = p_1.id)\n -> Nested Loop (cost=1.69..153.77 rows=6 width=55) (actual\ntime=0.015..0.076 rows=10 loops=1)\n Join Filter: (s.id = parent.id)\n -> Nested Loop (cost=1.27..150.41 rows=7 width=51) (actual\ntime=0.012..0.059 rows=10 loops=1)\n Join Filter: (s.id = p.id)\n -> Nested Loop (cost=0.84..146.09 rows=9 width=47) (actual\ntime=0.008..0.037 rows=10 loops=1)\n -> Index Scan using smiles_pkey on smiles s\n(cost=0.42..61.59 rows=10 width=43) (actual time=0.003..0.016 rows=10\nloops=1)\n Index Cond: (id = ANY\n('{1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884}'::integer[]))\n -> Index Only Scan using sdf_pkey on sdf\n(cost=0.42..8.44 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=10)\n Index Cond: (id = s.id)\n Heap Fetches: 10\n -> Index Only Scan using parent_pkey on parent p\n(cost=0.42..0.47 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Heap Fetches: 10\n -> Index Only Scan using parent_pkey on parent (cost=0.42..0.47\nrows=1 width=4) (actual time=0.001..0.001 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Heap Fetches: 10\n -> Index Scan using properties_pkey on properties p_1 (cost=0.42..0.48\nrows=1 width=21) (actual time=0.001..0.002 rows=1 loops=10)\n Index Cond: (id = sdf.id)\n Planning time: 1.251 ms\n Execution time: 0.147 ms\n(22 rows)\n\nThe timing of the second query is excellent, and is what I expected. 
I\ndon't understand why including a function-defined column in the view would\nhave such a dramatic effect on the planner's ability to choose the sdf_pkey\nindex for the join.\n\nHere are the view and table definitions:\n\nemolecules=> \\d+ version\n View \"registry.version\"\n Column | Type | Collation | Nullable | Default | Storage |\nDescription\n------------+---------+-----------+----------+---------+----------+-------------\n version_id | integer | | | | plain |\n parent_id | integer | | | | plain |\n isosmiles | text | | | | extended |\n created | abstime | | | | plain |\nView definition:\n SELECT s.id AS version_id,\n p.parent_id,\n s.smiles AS isosmiles,\n timenow() AS created\n FROM chemaxon.smiles s\n JOIN chemaxon.parent p ON s.id = p.id;\n\nemolecules=> \\d+ version_properties\n View \"registry.version_properties\"\n Column | Type | Collation | Nullable | Default |\nStorage | Description\n-------------------+--------------+-----------+----------+---------+----------+-------------\n version_id | integer | | | | plain\n |\n molecular_weight | numeric(8,3) | | | | main\n |\n molecular_formula | text | | | |\nextended |\n mfcd | text | | | |\nextended |\n cas_number | text | | | |\nextended |\nView definition:\n SELECT p.id AS version_id,\n p.molecular_weight,\n p.molecular_formula,\n m.mfcd,\n c.cas_number\n FROM chemaxon.properties p\n LEFT JOIN chemaxon.mfcd m USING (id)\n LEFT JOIN chemaxon.cas_number c USING (id)\n JOIN chemaxon.parent USING (id);\n\n Table \"chemaxon.smiles\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n--------+---------+-----------+----------+---------+----------+--------------+-------------\n id | integer | | not null | | plain |\n |\n smiles | text | | not null | | extended |\n |\nIndexes:\n \"smiles_pkey\" PRIMARY KEY, btree (id)\n \"i_unique_smiles\" UNIQUE, btree (smiles)\n\n Table \"chemaxon.parent\"\n Column | Type | Collation | Nullable | Default | Storage | Stats\ntarget | Description\n-----------+---------+-----------+----------+---------+---------+--------------+-------------\n id | integer | | not null | | plain |\n |\n parent_id | integer | | not null | | plain |\n |\nIndexes:\n \"parent_pkey\" PRIMARY KEY, btree (id)\n \"i_parent_parent_id\" btree (parent_id)\n\nemolecules=> \\d chemaxon.cas_number\n Table \"chemaxon.cas_number\"\n Column | Type | Collation | Nullable | Default\n------------+---------+-----------+----------+---------\n id | integer | | not null |\n cas_number | text | | |\nIndexes:\n \"cas_number_pkey\" PRIMARY KEY, btree (id)\n \"i_cas_number_cas_number\" btree (cas_number)\n\nemolecules=> \\d chemaxon.cas_number\n Table \"chemaxon.cas_number\"\n Column | Type | Collation | Nullable | Default\n------------+---------+-----------+----------+---------\n id | integer | | not null |\n cas_number | text | | |\nIndexes:\n \"cas_number_pkey\" PRIMARY KEY, btree (id)\n \"i_cas_number_cas_number\" btree (cas_number)\n\nAnd the function \"gunzip\" is defined in perl (unsafe Perl) as:\n\ncreate or replace function gunzip(bytea) returns text as\n$gunzip$\n use IO::Uncompress::Gunzip qw(gunzip $GunzipError);\n my $compressed = decode_bytea($_[0]);\n my $uncompressed;\n if (!gunzip(\\$compressed, \\$uncompressed)) {\n return $GunzipError;\n }\n return $uncompressed;\n$gunzip$\nlanguage plperlu;\n\n\n\nThanks!\nCraig\n\n-- \n---------------------------------\nCraig A. 
James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nHere's a weird one I can't figure out: the definitions of several columns of a view, which are not used in a query at all, have a massive effect on the query planner, causing it to choose a seqscan over the largest table in our database when it should be using the primary key for the join.Background: We've redesigned the tables that hold our primary data, but need to create views that mimic the old design so that our applications will continue working. The largest table (\"chemaxon.sdf\") holds the bulk of the data, and we've changed it from raw text to gzipped bytea. To mimic the old schema, I created a short Perl function that does a simple gunzip operation, and used that in the definition of the view \"str_conntab\". (This gzip reduces our total database size to about a third of the original -- it's very effective).Here are two query plans. The first is horrible. For the second, I removed the gunzip functions and replaced them with constant values. But notice that these pseudo columns are not used anywhere in the query. (Even if they were, I don't understand why this should affect the planner.)The tables VERSION and VERSION_PROPERTIES are also views; I've included their definitions and the underlying actual tables below.Postgres 9.6.7 running on Ubuntu 16.04.emolecules=> drop view str_conntab;DROP VIEWemolecules=> create view str_conntab asemolecules->  (selectemolecules(>    id,emolecules(>    length(gunzip(sdf_gzip)) as contab_len,emolecules(>    gunzip(sdf_gzip) as contab_data,emolecules(>    ''::text as normalizedemolecules(>   from chemaxon.sdf);CREATE VIEWemolecules=> explain analyzeselect VERSION.VERSION_ID, VERSION.ISOSMILES, VERSION_PROPERTIES.MOLECULAR_WEIGHT, VERSION_PROPERTIES.MOLECULAR_FORMULA from VERSION join VERSION_PROPERTIES on (VERSION.VERSION_ID = VERSION_PROPERTIES.VERSION_ID) join STR_CONNTAB on (VERSION.VERSION_ID = STR_CONNTAB.ID) where VERSION.VERSION_ID in (1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884);                                                                       QUERY PLAN                                                                        --------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=62.99..162425.77 rows=5 width=60) (actual time=34.718..152828.351 rows=10 loops=1)   Join Filter: (s.id = p_1.id)   ->  Nested Loop  (cost=62.56..162422.84 rows=6 width=55) (actual time=34.701..152828.289 rows=10 loops=1)         Join Filter: (s.id = parent.id)         ->  Nested Loop  (cost=62.14..162419.48 rows=7 width=51) (actual time=34.694..152828.250 rows=10 loops=1)               Join Filter: (s.id = p.id)               ->  Hash Join  (cost=61.72..162415.16 rows=9 width=47) (actual time=34.663..152828.110 rows=10 loops=1)                     Hash Cond: (sdf.id = s.id)                     ->  Seq Scan on sdf  (cost=0.00..158488.50 rows=281080 width=72) (actual time=33.623..152630.514 rows=281080 loops=1)                     ->  Hash  (cost=61.59..61.59 rows=10 width=43) (actual time=0.028..0.028 rows=10 loops=1)                           Buckets: 1024  Batches: 1  Memory Usage: 9kB                           ->  Index Scan using smiles_pkey on smiles s  (cost=0.42..61.59 rows=10 width=43) (actual time=0.010..0.022 rows=10 loops=1)                                 Index Cond: (id = ANY 
('{1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884}'::integer[]))               ->  Index Only Scan using parent_pkey on parent p  (cost=0.42..0.47 rows=1 width=4) (actual time=0.011..0.011 rows=1 loops=10)                     Index Cond: (id = sdf.id)                     Heap Fetches: 10         ->  Index Only Scan using parent_pkey on parent  (cost=0.42..0.47 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=10)               Index Cond: (id = sdf.id)               Heap Fetches: 10   ->  Index Scan using properties_pkey on properties p_1  (cost=0.42..0.48 rows=1 width=21) (actual time=0.003..0.004 rows=1 loops=10)         Index Cond: (id = sdf.id) Planning time: 1.330 ms Execution time: 152828.506 ms(23 rows)emolecules=> drop view str_conntab;DROP VIEWemolecules=> create view str_conntab asemolecules->  (selectemolecules(>    id,emolecules(>    0::integer contab_len,emolecules(>    null::text as contab_data,emolecules(>    ''::text as normalizedemolecules(>   from chemaxon.sdf);CREATE VIEWemolecules=> explain analyzeselect VERSION.VERSION_ID, VERSION.ISOSMILES, VERSION_PROPERTIES.MOLECULAR_WEIGHT, VERSION_PROPERTIES.MOLECULAR_FORMULA from VERSION join VERSION_PROPERTIES on (VERSION.VERSION_ID = VERSION_PROPERTIES.VERSION_ID) join STR_CONNTAB on (VERSION.VERSION_ID = STR_CONNTAB.ID) where VERSION.VERSION_ID in (1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884);                                                                    QUERY PLAN                                                                     --------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=2.11..156.71 rows=5 width=60) (actual time=0.018..0.096 rows=10 loops=1)   Join Filter: (s.id = p_1.id)   ->  Nested Loop  (cost=1.69..153.77 rows=6 width=55) (actual time=0.015..0.076 rows=10 loops=1)         Join Filter: (s.id = parent.id)         ->  Nested Loop  (cost=1.27..150.41 rows=7 width=51) (actual time=0.012..0.059 rows=10 loops=1)               Join Filter: (s.id = p.id)               ->  Nested Loop  (cost=0.84..146.09 rows=9 width=47) (actual time=0.008..0.037 rows=10 loops=1)                     ->  Index Scan using smiles_pkey on smiles s  (cost=0.42..61.59 rows=10 width=43) (actual time=0.003..0.016 rows=10 loops=1)                           Index Cond: (id = ANY ('{1485909,1485889,1485903,1485887,1485892,1485900,1485895,1485898,1485906,1485884}'::integer[]))                     ->  Index Only Scan using sdf_pkey on sdf  (cost=0.42..8.44 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=10)                           Index Cond: (id = s.id)                           Heap Fetches: 10               ->  Index Only Scan using parent_pkey on parent p  (cost=0.42..0.47 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=10)                     Index Cond: (id = sdf.id)                     Heap Fetches: 10         ->  Index Only Scan using parent_pkey on parent  (cost=0.42..0.47 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=10)               Index Cond: (id = sdf.id)               Heap Fetches: 10   ->  Index Scan using properties_pkey on properties p_1  (cost=0.42..0.48 rows=1 width=21) (actual time=0.001..0.002 rows=1 loops=10)         Index Cond: (id = sdf.id) Planning time: 1.251 ms Execution time: 0.147 ms(22 rows)The timing of the second query is excellent, and is what I expected. 
I don't understand why including a function-defined column in the view would have such a dramatic effect on the planner's ability to choose the sdf_pkey index for the join.Here are the view and table definitions:emolecules=> \\d+ version                            View \"registry.version\"   Column   |  Type   | Collation | Nullable | Default | Storage  | Description ------------+---------+-----------+----------+---------+----------+------------- version_id | integer |           |          |         | plain    |  parent_id  | integer |           |          |         | plain    |  isosmiles  | text    |           |          |         | extended |  created    | abstime |           |          |         | plain    | View definition: SELECT s.id AS version_id,    p.parent_id,    s.smiles AS isosmiles,    timenow() AS created   FROM chemaxon.smiles s     JOIN chemaxon.parent p ON s.id = p.id;emolecules=> \\d+ version_properties                             View \"registry.version_properties\"      Column       |     Type     | Collation | Nullable | Default | Storage  | Description -------------------+--------------+-----------+----------+---------+----------+------------- version_id        | integer      |           |          |         | plain    |  molecular_weight  | numeric(8,3) |           |          |         | main     |  molecular_formula | text         |           |          |         | extended |  mfcd              | text         |           |          |         | extended |  cas_number        | text         |           |          |         | extended | View definition: SELECT p.id AS version_id,    p.molecular_weight,    p.molecular_formula,    m.mfcd,    c.cas_number   FROM chemaxon.properties p     LEFT JOIN chemaxon.mfcd m USING (id)     LEFT JOIN chemaxon.cas_number c USING (id)     JOIN chemaxon.parent USING (id);                                  Table \"chemaxon.smiles\" Column |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description --------+---------+-----------+----------+---------+----------+--------------+------------- id     | integer |           | not null |         | plain    |              |  smiles | text    |           | not null |         | extended |              | Indexes:    \"smiles_pkey\" PRIMARY KEY, btree (id)    \"i_unique_smiles\" UNIQUE, btree (smiles)                                   Table \"chemaxon.parent\"  Column   |  Type   | Collation | Nullable | Default | Storage | Stats target | Description -----------+---------+-----------+----------+---------+---------+--------------+------------- id        | integer |           | not null |         | plain   |              |  parent_id | integer |           | not null |         | plain   |              | Indexes:    \"parent_pkey\" PRIMARY KEY, btree (id)    \"i_parent_parent_id\" btree (parent_id)emolecules=> \\d chemaxon.cas_number              Table \"chemaxon.cas_number\"   Column   |  Type   | Collation | Nullable | Default ------------+---------+-----------+----------+--------- id         | integer |           | not null |  cas_number | text    |           |          | Indexes:    \"cas_number_pkey\" PRIMARY KEY, btree (id)    \"i_cas_number_cas_number\" btree (cas_number)emolecules=> \\d chemaxon.cas_number              Table \"chemaxon.cas_number\"   Column   |  Type   | Collation | Nullable | Default ------------+---------+-----------+----------+--------- id         | integer |           | not null |  cas_number | text    |           |          | Indexes:    
\"cas_number_pkey\" PRIMARY KEY, btree (id)    \"i_cas_number_cas_number\" btree (cas_number)And the function \"gunzip\" is defined in perl (unsafe Perl) as:create or replace function gunzip(bytea) returns text as$gunzip$  use IO::Uncompress::Gunzip qw(gunzip $GunzipError);  my $compressed = decode_bytea($_[0]);  my $uncompressed;  if (!gunzip(\\$compressed, \\$uncompressed)) {    return $GunzipError;  }  return $uncompressed;$gunzip$language plperlu;Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Fri, 16 Mar 2018 13:37:05 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Irrelevant columns cause massive performance change" }, { "msg_contents": "Hi,\n\nOn 2018-03-16 13:37:05 -0700, Craig James wrote:\n> The timing of the second query is excellent, and is what I expected. I\n> don't understand why including a function-defined column in the view would\n> have such a dramatic effect on the planner's ability to choose the sdf_pkey\n> index for the join.\n\n> create or replace function gunzip(bytea) returns text as\n> $gunzip$\n> use IO::Uncompress::Gunzip qw(gunzip $GunzipError);\n> my $compressed = decode_bytea($_[0]);\n> my $uncompressed;\n> if (!gunzip(\\$compressed, \\$uncompressed)) {\n> return $GunzipError;\n> }\n> return $uncompressed;\n> $gunzip$\n> language plperlu;\n\nI suspect at least part of the problem here is that the function is\ndeclared volatile (the default). That means it can have arbitrary\nsideeffects, which in turn means there's several places in the planner\nthat forgo optimizations if volatile functions are involved. If you\ndeclare the function as immutable, does the problem persist?\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 16 Mar 2018 13:50:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Irrelevant columns cause massive performance change" }, { "msg_contents": "On Fri, Mar 16, 2018 at 1:50 PM, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2018-03-16 13:37:05 -0700, Craig James wrote:\n> > The timing of the second query is excellent, and is what I expected. I\n> > don't understand why including a function-defined column in the view\n> would\n> > have such a dramatic effect on the planner's ability to choose the\n> sdf_pkey\n> > index for the join.\n>\n> > create or replace function gunzip(bytea) returns text as\n> > $gunzip$\n> > use IO::Uncompress::Gunzip qw(gunzip $GunzipError);\n> > my $compressed = decode_bytea($_[0]);\n> > my $uncompressed;\n> > if (!gunzip(\\$compressed, \\$uncompressed)) {\n> > return $GunzipError;\n> > }\n> > return $uncompressed;\n> > $gunzip$\n> > language plperlu;\n>\n> I suspect at least part of the problem here is that the function is\n> declared volatile (the default). That means it can have arbitrary\n> sideeffects, which in turn means there's several places in the planner\n> that forgo optimizations if volatile functions are involved. If you\n> declare the function as immutable, does the problem persist?\n>\n\nYes, perfect. That fixed the problem.\n\nThanks,\nCraig\n\n\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n\n-- \n---------------------------------\nCraig A. 
James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Fri, Mar 16, 2018 at 1:50 PM, Andres Freund <[email protected]> wrote:Hi,\n\nOn 2018-03-16 13:37:05 -0700, Craig James wrote:\n> The timing of the second query is excellent, and is what I expected. I\n> don't understand why including a function-defined column in the view would\n> have such a dramatic effect on the planner's ability to choose the sdf_pkey\n> index for the join.\n\n> create or replace function gunzip(bytea) returns text as\n> $gunzip$\n>   use IO::Uncompress::Gunzip qw(gunzip $GunzipError);\n>   my $compressed = decode_bytea($_[0]);\n>   my $uncompressed;\n>   if (!gunzip(\\$compressed, \\$uncompressed)) {\n>     return $GunzipError;\n>   }\n>   return $uncompressed;\n> $gunzip$\n> language plperlu;\n\nI suspect at least part of the problem here is that the function is\ndeclared volatile (the default). That means it can have arbitrary\nsideeffects, which in turn means there's several places in the planner\nthat forgo optimizations if volatile functions are involved.  If you\ndeclare the function as immutable, does the problem persist?Yes, perfect. That fixed the problem.Thanks,Craig \n\nGreetings,\n\nAndres Freund\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Fri, 16 Mar 2018 14:06:04 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Irrelevant columns cause massive performance change" } ]
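A minimal sketch of the fix Andres suggests and Craig confirms in the thread above: because gunzip() depends only on its input and has no side effects, it can be declared IMMUTABLE, which removes the volatility restrictions that were forcing the planner into the seqscan-plus-hash plan. This is only a sketch against the function signature posted above; the view itself does not need to be recreated, since volatility is looked up at plan time.

-- Mark the existing decompression helper IMMUTABLE; the function body is unchanged.
ALTER FUNCTION gunzip(bytea) IMMUTABLE;

-- Re-check the join: the plan should now use sdf_pkey, as in the second
-- EXPLAIN output above, instead of scanning chemaxon.sdf sequentially.
EXPLAIN ANALYZE
SELECT version.version_id, version.isosmiles,
       version_properties.molecular_weight, version_properties.molecular_formula
  FROM version
  JOIN version_properties ON (version.version_id = version_properties.version_id)
  JOIN str_conntab        ON (version.version_id = str_conntab.id)
 WHERE version.version_id IN (1485909, 1485889, 1485903);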
[ { "msg_contents": "Hi, I was wondering if someone could help us work out why this query is so\nslow.\n\nWe've just dumped a database (Postgresql 9.1) and restored it to a new\ninstance (AWS RDS 9.6) (via pg_dump, restored to psql)\n\nWe immediately see that the following fairly straightforward query is now\nextremely slow with a huge number of shared buffers hit.\nOn the new instance it takes 25 seconds. On the original system, 0.05\nseconds.\n\nThe database schemas are both the same (with the same columns indexed) so\nI'm guessing it must be a configuration issue to make the planner go down a\ndifferent route.\nI have run an analyse on the whole database since restoring.\n\nAlso, if on the new instance, I disable indexscans, the query will take\n0.047 seconds.\n\nCan someone point us in the right direction on what's going on here?\n\n\n*Query:*\n\nexplain (buffers,analyse) select\n trans_date\n from stock_trans s\n join account_trans a using(account_trans_id)\n where product_id=100\n and credit_stock_account_id=3\n order by trans_date desc\n limit 1;\n\n\n*Bad Performance on New Instance:*\n\nhttps://explain.depesz.com/s/0HXq\n\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.87..9086.72 rows=1 width=4) (actual\ntime=25829.287..25829.287 rows=0 loops=1)\n Buffers: shared hit=43944066\n -> Nested Loop (cost=0.87..6196547.28 rows=682 width=4) (actual\ntime=25829.286..25829.286 rows=0 loops=1)\n Buffers: shared hit=43944066\n -> Index Scan Backward using account_trans_date_idx on\naccount_trans a (cost=0.43..392996.60 rows=11455133 width=8) (actual\ntime=0.007..3401.027 rows=11455133 loops=1)\n Buffers: shared hit=251082\n -> Index Scan using stock_trans_account_trans_idx on stock_trans\ns (cost=0.43..0.50 rows=1 width=4) (actual time=0.001..0.001 rows=0\nloops=11455133)\n Index Cond: (account_trans_id = a.account_trans_id)\n Filter: ((product_id = 2466420) AND (credit_stock_account_id\n= 3))\n Rows Removed by Filter: 1\n Buffers: shared hit=43692984\n Planning time: 0.271 ms\n Execution time: 25829.316 ms\n(13 rows)\n\n\n*Disabled indexscan:*\n\n=> set enable_indexscan=off;\n\nhttps://explain.depesz.com/s/zTVn\n\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=11049.80..11049.81 rows=1 width=4) (actual time=0.018..0.018\nrows=0 loops=1)\n Buffers: shared hit=3\n -> Sort (cost=11049.80..11051.51 rows=682 width=4) (actual\ntime=0.017..0.017 rows=0 loops=1)\n Sort Key: a.trans_date DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=3\n -> Nested Loop (cost=35.99..11046.39 rows=682 width=4) (actual\ntime=0.011..0.011 rows=0 loops=1)\n Buffers: shared hit=3\n -> Bitmap Heap Scan on stock_trans s (cost=31.59..5301.09\nrows=682 width=4) (actual time=0.011..0.011 rows=0 loops=1)\n Recheck Cond: (product_id = 2466420)\n Filter: (credit_stock_account_id = 3)\n Buffers: shared hit=3\n -> Bitmap Index Scan on stock_trans_product_idx\n(cost=0.00..31.42 rows=1465 width=0) (actual time=0.009..0.009 rows=0\nloops=1)\n Index Cond: (product_id = 2466420)\n Buffers: shared hit=3\n -> Bitmap Heap Scan on account_trans a (cost=4.40..8.41\nrows=1 width=8) (never executed)\n Recheck Cond: (account_trans_id = s.account_trans_id)\n -> Bitmap Index Scan on account_trans_pkey\n(cost=0.00..4.40 rows=1 width=0) 
(never executed)\n Index Cond: (account_trans_id =\ns.account_trans_id)\n Planning time: 0.272 ms\n Execution time: 0.047 ms\n(21 rows)\n\n\n*Explain from the same query on the original database:*\n\nhttps://explain.depesz.com/s/WHKJ\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9811.51..9811.52 rows=1 width=4) (actual time=0.020..0.020\nrows=0 loops=1)\n Buffers: shared hit=3\n -> Sort (cost=9811.51..9813.23 rows=685 width=4) (actual\ntime=0.019..0.019 rows=0 loops=1)\n Sort Key: a.trans_date\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=3\n -> Nested Loop (cost=0.00..9808.09 rows=685 width=4) (actual\ntime=0.014..0.014 rows=0 loops=1)\n Buffers: shared hit=3\n -> Index Scan using stock_trans_product_idx on stock_trans\ns (cost=0.00..3300.01 rows=685 width=4) (actual time=0.014..0.014 rows=0\nloops=1)\n Index Cond: (product_id = 2466420)\n Filter: (credit_stock_account_id = 3)\n Buffers: shared hit=3\n -> Index Scan using account_trans_pkey on account_trans a\n(cost=0.00..9.49 rows=1 width=8) (never executed)\n Index Cond: (account_trans_id = s.account_trans_id)\n Total runtime: 0.050 ms\n(15 rows)\n\n\nRegards,\n-- \nDavid\n\n Hi, I was wondering if someone could help us work out why this query is so slow.We've just dumped a database (Postgresql 9.1) and restored it to a new instance (AWS RDS 9.6) (via pg_dump, restored to psql)We immediately see that the following fairly straightforward query is now extremely slow with a huge number of shared buffers hit.On the new instance it takes 25 seconds. On the original system, 0.05 seconds.The database schemas are both the same (with the same columns indexed) so I'm guessing it must be a configuration issue to make the planner go down a different route.I have run an analyse on the whole database since restoring.Also, if on the new instance, I disable indexscans, the query will take 0.047 seconds.Can someone point us in the right direction on what's going on here?Query:explain (buffers,analyse)  select            trans_date            from stock_trans s            join account_trans a using(account_trans_id)            where product_id=100            and credit_stock_account_id=3            order by trans_date desc            limit 1;Bad Performance on New Instance:https://explain.depesz.com/s/0HXq                                                                                    QUERY PLAN          ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.87..9086.72 rows=1 width=4) (actual time=25829.287..25829.287 rows=0 loops=1)   Buffers: shared hit=43944066   ->  Nested Loop  (cost=0.87..6196547.28 rows=682 width=4) (actual time=25829.286..25829.286 rows=0 loops=1)         Buffers: shared hit=43944066         ->  Index Scan Backward using account_trans_date_idx on account_trans a  (cost=0.43..392996.60 rows=11455133 width=8) (actual time=0.007..3401.027 rows=11455133 loops=1)               Buffers: shared hit=251082         ->  Index Scan using stock_trans_account_trans_idx on stock_trans s  (cost=0.43..0.50 rows=1 width=4) (actual time=0.001..0.001 rows=0 loops=11455133)               Index Cond: (account_trans_id = a.account_trans_id)               Filter: ((product_id = 2466420) AND (credit_stock_account_id = 3))               Rows Removed by 
Filter: 1               Buffers: shared hit=43692984 Planning time: 0.271 ms Execution time: 25829.316 ms(13 rows)Disabled indexscan:=> set enable_indexscan=off;https://explain.depesz.com/s/zTVn                                                                      QUERY PLAN                        ------------------------------------------------------------------------------------------------------------------------------------------------------ Limit  (cost=11049.80..11049.81 rows=1 width=4) (actual time=0.018..0.018 rows=0 loops=1)   Buffers: shared hit=3   ->  Sort  (cost=11049.80..11051.51 rows=682 width=4) (actual time=0.017..0.017 rows=0 loops=1)         Sort Key: a.trans_date DESC         Sort Method: quicksort  Memory: 25kB         Buffers: shared hit=3         ->  Nested Loop  (cost=35.99..11046.39 rows=682 width=4) (actual time=0.011..0.011 rows=0 loops=1)               Buffers: shared hit=3               ->  Bitmap Heap Scan on stock_trans s  (cost=31.59..5301.09 rows=682 width=4) (actual time=0.011..0.011 rows=0 loops=1)                     Recheck Cond: (product_id = 2466420)                     Filter: (credit_stock_account_id = 3)                     Buffers: shared hit=3                     ->  Bitmap Index Scan on stock_trans_product_idx  (cost=0.00..31.42 rows=1465 width=0) (actual time=0.009..0.009 rows=0 loops=1)                           Index Cond: (product_id = 2466420)                           Buffers: shared hit=3               ->  Bitmap Heap Scan on account_trans a  (cost=4.40..8.41 rows=1 width=8) (never executed)                     Recheck Cond: (account_trans_id = s.account_trans_id)                     ->  Bitmap Index Scan on account_trans_pkey  (cost=0.00..4.40 rows=1 width=0) (never executed)                           Index Cond: (account_trans_id = s.account_trans_id) Planning time: 0.272 ms Execution time: 0.047 ms(21 rows)Explain from the same query on the original database:https://explain.depesz.com/s/WHKJ                                                                          QUERY PLAN                                                  -------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=9811.51..9811.52 rows=1 width=4) (actual time=0.020..0.020 rows=0 loops=1)   Buffers: shared hit=3   ->  Sort  (cost=9811.51..9813.23 rows=685 width=4) (actual time=0.019..0.019 rows=0 loops=1)         Sort Key: a.trans_date         Sort Method: quicksort  Memory: 25kB         Buffers: shared hit=3         ->  Nested Loop  (cost=0.00..9808.09 rows=685 width=4) (actual time=0.014..0.014 rows=0 loops=1)               Buffers: shared hit=3               ->  Index Scan using stock_trans_product_idx on stock_trans s  (cost=0.00..3300.01 rows=685 width=4) (actual time=0.014..0.014 rows=0 loops=1)                     Index Cond: (product_id = 2466420)                     Filter: (credit_stock_account_id = 3)                     Buffers: shared hit=3               ->  Index Scan using account_trans_pkey on account_trans a  (cost=0.00..9.49 rows=1 width=8) (never executed)                     Index Cond: (account_trans_id = s.account_trans_id) Total runtime: 0.050 ms(15 rows)Regards,-- David", "msg_date": "Mon, 19 Mar 2018 15:13:37 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Slow performance after restoring a dump" }, { "msg_contents": "David Osborne <[email protected]> writes:\n> Hi, I was wondering if 
someone could help us work out why this query is so\n> slow.\n> We've just dumped a database (Postgresql 9.1) and restored it to a new\n> instance (AWS RDS 9.6) (via pg_dump, restored to psql)\n\nThe first question people will ask is did you re-ANALYZE the new\ndatabase? pg_dump doesn't take care of that for you, and auto-analyze\nmight not think it needs to process the smaller tables.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 19 Mar 2018 11:35:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance after restoring a dump" }, { "msg_contents": "Hi, yes I've run \"analyse\" against the newly restored database. Should that\nbe enough?\n\nOn 19 March 2018 at 15:35, Tom Lane <[email protected]> wrote:\n\n> David Osborne <[email protected]> writes:\n>\n> The first question people will ask is did you re-ANALYZE the new\n> database? pg_dump doesn't take care of that for you, and auto-analyze\n> might not think it needs to process the smaller tables.\n>\n>\n\nHi, yes I've run \"analyse\" against the newly restored database. Should that be enough?On 19 March 2018 at 15:35, Tom Lane <[email protected]> wrote:David Osborne <[email protected]> writes:\nThe first question people will ask is did you re-ANALYZE the new\ndatabase?  pg_dump doesn't take care of that for you, and auto-analyze\nmight not think it needs to process the smaller tables.", "msg_date": "Mon, 19 Mar 2018 15:43:47 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow performance after restoring a dump" }, { "msg_contents": "David Osborne <[email protected]> writes:\n> Hi, yes I've run \"analyse\" against the newly restored database. Should that\n> be enough?\n\nMy apologies, you did say that further down in the original message.\nIt looks like the core of the problem is the poor rowcount estimation\nhere:\n\n -> Bitmap Index Scan on stock_trans_product_idx (cost=0.00..31.42 rows=1465 width=0) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (product_id = 2466420)\n Buffers: shared hit=3\n\nYou might be able to improve that by raising the statistics target\nfor stock_trans.product_id. I'm not sure why you weren't getting\nbitten by the same issue in 9.1; but the cost estimates aren't\nthat far apart for the two plans, so maybe you were just lucky ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 19 Mar 2018 12:22:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow performance after restoring a dump" }, { "msg_contents": "That did the trick... 
thanks!\nyes perhaps a minor planner difference just tipped us over the edge\npreviously\n\n=> alter table stock_trans alter column product_id set STATISTICS 1000;\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3461.10..3461.10 rows=1 width=4) (actual time=0.014..0.014\nrows=0 loops=1)\n Buffers: shared hit=3\n -> Sort (cost=3461.10..3461.75 rows=260 width=4) (actual\ntime=0.013..0.013 rows=0 loops=1)\n Sort Key: a.trans_date DESC\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=3\n -> Nested Loop (cost=0.87..3459.80 rows=260 width=4) (actual\ntime=0.008..0.008 rows=0 loops=1)\n Buffers: shared hit=3\n -> Index Scan using stock_trans_product_idx on stock_trans\ns (cost=0.43..1263.55 rows=260 width=4) (actual time=0.007..0.007 rows=0\nloops=1)\n Index Cond: (product_id = 2466420)\n Filter: (credit_stock_account_id = 3)\n Buffers: shared hit=3\n -> Index Scan using account_trans_pkey on account_trans a\n(cost=0.43..8.44 rows=1 width=8) (never executed)\n Index Cond: (account_trans_id = s.account_trans_id)\n Planning time: 0.255 ms\n Execution time: 0.039 ms\n(16 rows)\n\n\n\n\nOn 19 March 2018 at 16:22, Tom Lane <[email protected]> wrote:\n\n> David Osborne <[email protected]> writes:\n> > Hi, yes I've run \"analyse\" against the newly restored database. Should\n> that\n> > be enough?\n>\n> My apologies, you did say that further down in the original message.\n> It looks like the core of the problem is the poor rowcount estimation\n> here:\n>\n> -> Bitmap Index Scan on stock_trans_product_idx\n> (cost=0.00..31.42 rows=1465 width=0) (actual time=0.009..0.009 rows=0\n> loops=1)\n> Index Cond: (product_id = 2466420)\n> Buffers: shared hit=3\n>\n> You might be able to improve that by raising the statistics target\n> for stock_trans.product_id. I'm not sure why you weren't getting\n> bitten by the same issue in 9.1; but the cost estimates aren't\n> that far apart for the two plans, so maybe you were just lucky ...\n>\n> regards, tom lane\n>\n\n\n\n-- \nDavid Osborne\nQcode Software Limited\nhttp://www.qcode.co.uk\nT: +44 (0)1463 896484\n\nThat did the trick... thanks!  
yes perhaps a minor planner difference just tipped us over the edge previously=> alter table stock_trans alter column product_id set STATISTICS 1000;                                                                          QUERY PLAN                    -------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=3461.10..3461.10 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)   Buffers: shared hit=3   ->  Sort  (cost=3461.10..3461.75 rows=260 width=4) (actual time=0.013..0.013 rows=0 loops=1)         Sort Key: a.trans_date DESC         Sort Method: quicksort  Memory: 25kB         Buffers: shared hit=3         ->  Nested Loop  (cost=0.87..3459.80 rows=260 width=4) (actual time=0.008..0.008 rows=0 loops=1)               Buffers: shared hit=3               ->  Index Scan using stock_trans_product_idx on stock_trans s  (cost=0.43..1263.55 rows=260 width=4) (actual time=0.007..0.007 rows=0 loops=1)                     Index Cond: (product_id = 2466420)                     Filter: (credit_stock_account_id = 3)                     Buffers: shared hit=3               ->  Index Scan using account_trans_pkey on account_trans a  (cost=0.43..8.44 rows=1 width=8) (never executed)                     Index Cond: (account_trans_id = s.account_trans_id) Planning time: 0.255 ms Execution time: 0.039 ms(16 rows)On 19 March 2018 at 16:22, Tom Lane <[email protected]> wrote:David Osborne <[email protected]> writes:\n> Hi, yes I've run \"analyse\" against the newly restored database. Should that\n> be enough?\n\nMy apologies, you did say that further down in the original message.\nIt looks like the core of the problem is the poor rowcount estimation\nhere:\n\n                     ->  Bitmap Index Scan on stock_trans_product_idx (cost=0.00..31.42 rows=1465 width=0) (actual time=0.009..0.009 rows=0 loops=1)\n                           Index Cond: (product_id = 2466420)\n                           Buffers: shared hit=3\n\nYou might be able to improve that by raising the statistics target\nfor stock_trans.product_id.  I'm not sure why you weren't getting\nbitten by the same issue in 9.1; but the cost estimates aren't\nthat far apart for the two plans, so maybe you were just lucky ...\n\n                        regards, tom lane\n-- David OsborneQcode Software Limitedhttp://www.qcode.co.uk\nT: +44 (0)1463 896484", "msg_date": "Mon, 19 Mar 2018 16:33:26 +0000", "msg_from": "David Osborne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow performance after restoring a dump" } ]
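For completeness, a sketch of the full fix sequence from the thread above: raise the per-column statistics target for the skewed column and then re-ANALYZE, since SET STATISTICS only takes effect at the next ANALYZE. The value 1000 is the one used above (the default target is 100); the right value depends on how skewed product_id actually is.

-- Collect a larger sample for the skewed join column, then refresh statistics.
ALTER TABLE stock_trans ALTER COLUMN product_id SET STATISTICS 1000;
ANALYZE stock_trans;

-- Re-check the plan: the estimate for a rare product_id should now be far
-- below the earlier 1465-row guess, so the backward scan of
-- account_trans_date_idx is no longer attractive.
EXPLAIN (ANALYZE, BUFFERS)
  SELECT trans_date
    FROM stock_trans s
    JOIN account_trans a USING (account_trans_id)
   WHERE product_id = 100
     AND credit_stock_account_id = 3
   ORDER BY trans_date DESC
   LIMIT 1;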
[ { "msg_contents": "We are trying to implement postgresql code to load a large object into\na postgresql bytea in chunks to avoid loading the file into memory in\nthe client.\n\nFirst attempt was to do\n\nupdate build_attachment set chunk = chunk || newdata ;\n\nthis did not scale and got significantly slower after 4000-5000 updates.\n\nThe chunks are 4K in size, and I'm testing with a 128MB input file,\nrequiring 32,774 chunk updates.\n\nNext, I tried creating an aggregate, thus:\n\n(taken from stackoverflow)\n\nCREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n\nchanged the code to insert the chunks to a temporary table :\n\ncreate temporary table build_attachment (seq bigserial primary key,\nchunk bytea ) on commit drop;\n\nwe then insert our 4K chunks to this, which takes very little time (20\nseconds for the 32,774 inserts)\n\nHere's an example though of trying to select the aggregate:\n\ngary=> \\timing\nTiming is on.\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 4000 \\g output\nTime: 13372.843 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 8000 \\g output\nTime: 54447.541 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 16000 \\g output\nTime: 582219.773 ms\n\nSo those partial aggregates completed in somewhat acceptable times but ...\n\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 32000 \\g output\nthis one hadn't completed in an hour - the PostgreSQL connection\nprocess for my connection on the server goes to 100% CPU and stays\nthere, not using much RAM, not doing much IO, oddly\n\nEXPLAINing these aggregate selects doesn't show anything useful.\n\nAm I doomed to not be able to update a bytea this way? Is there some\nway I can tune this?\n\n", "msg_date": "Wed, 21 Mar 2018 12:03:17 +0000", "msg_from": "Gary Cowell <[email protected]>", "msg_from_op": true, "msg_subject": "badly scaling performance with appending to bytea" }, { "msg_contents": "Can you use a materialized view to do the bytea_agg() and then refresh\nconcurrently whenever you need updated data?\nThe refresh concurrently might take a few hours or days to run to keep the\nmatview up to date, but your queries would be pretty fast.\n\nA possible problem is that you are running out of memory, so the larger\nqueries are going to disk. 
If you can set up temp space on a faster\nvolume, or bump up your memory configuration it might help.\nie, work_mem, shared_buffers, and file system cache could all play into\nlarger aggregations running faster.\n\n\nOn Wed, Mar 21, 2018 at 8:03 AM, Gary Cowell <[email protected]> wrote:\n\n> We are trying to implement postgresql code to load a large object into\n> a postgresql bytea in chunks to avoid loading the file into memory in\n> the client.\n>\n> First attempt was to do\n>\n> update build_attachment set chunk = chunk || newdata ;\n>\n> this did not scale and got significantly slower after 4000-5000 updates.\n>\n> The chunks are 4K in size, and I'm testing with a 128MB input file,\n> requiring 32,774 chunk updates.\n>\n> Next, I tried creating an aggregate, thus:\n>\n> (taken from stackoverflow)\n>\n> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n>\n> changed the code to insert the chunks to a temporary table :\n>\n> create temporary table build_attachment (seq bigserial primary key,\n> chunk bytea ) on commit drop;\n>\n> we then insert our 4K chunks to this, which takes very little time (20\n> seconds for the 32,774 inserts)\n>\n> Here's an example though of trying to select the aggregate:\n>\n> gary=> \\timing\n> Timing is on.\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 4000 \\g output\n> Time: 13372.843 ms\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 8000 \\g output\n> Time: 54447.541 ms\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 16000 \\g output\n> Time: 582219.773 ms\n>\n> So those partial aggregates completed in somewhat acceptable times but ...\n>\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 32000 \\g output\n> this one hadn't completed in an hour - the PostgreSQL connection\n> process for my connection on the server goes to 100% CPU and stays\n> there, not using much RAM, not doing much IO, oddly\n>\n> EXPLAINing these aggregate selects doesn't show anything useful.\n>\n> Am I doomed to not be able to update a bytea this way? Is there some\n> way I can tune this?\n>\n>\n\nCan you use a materialized view to do the bytea_agg() and then refresh concurrently whenever you need updated data?The refresh concurrently might take a few hours or days to run to keep the matview up to date, but your queries would be pretty fast.A possible problem is  that you are running out of memory, so the larger queries are going to disk.  
If you can set up temp space on a faster volume, or bump up your memory configuration it might help.ie, work_mem, shared_buffers, and file system cache could all play into larger aggregations running faster.On Wed, Mar 21, 2018 at 8:03 AM, Gary Cowell <[email protected]> wrote:We are trying to implement postgresql code to load a large object into\na postgresql bytea in chunks to avoid loading the file into memory in\nthe client.\n\nFirst attempt was to do\n\nupdate build_attachment set chunk = chunk || newdata ;\n\nthis did not scale and got significantly slower after 4000-5000 updates.\n\nThe chunks are 4K in size, and I'm testing with a 128MB input file,\nrequiring 32,774 chunk updates.\n\nNext, I tried creating an aggregate, thus:\n\n(taken from stackoverflow)\n\nCREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n\nchanged the code to insert the chunks to a temporary table :\n\ncreate temporary table build_attachment (seq bigserial primary key,\nchunk bytea ) on commit drop;\n\nwe then insert our 4K chunks to this, which takes very little time (20\nseconds for the 32,774 inserts)\n\nHere's an example though of trying to select the aggregate:\n\ngary=> \\timing\nTiming is on.\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 4000 \\g output\nTime: 13372.843 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 8000 \\g output\nTime: 54447.541 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 16000 \\g output\nTime: 582219.773 ms\n\nSo those partial aggregates completed in somewhat acceptable times but ...\n\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 32000 \\g output\nthis one hadn't completed in an hour - the PostgreSQL connection\nprocess for my connection on the server goes to 100% CPU and stays\nthere, not using much RAM, not doing much IO, oddly\n\nEXPLAINing these aggregate selects doesn't show anything useful.\n\nAm I doomed to not be able to update a bytea this way? 
Is there some\nway I can tune this?", "msg_date": "Wed, 21 Mar 2018 08:12:31 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: badly scaling performance with appending to bytea" }, { "msg_contents": "2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:\n\n> We are trying to implement postgresql code to load a large object into\n> a postgresql bytea in chunks to avoid loading the file into memory in\n> the client.\n>\n> First attempt was to do\n>\n> update build_attachment set chunk = chunk || newdata ;\n>\n> this did not scale and got significantly slower after 4000-5000 updates.\n>\n> The chunks are 4K in size, and I'm testing with a 128MB input file,\n> requiring 32,774 chunk updates.\n>\n> Next, I tried creating an aggregate, thus:\n>\n> (taken from stackoverflow)\n>\n> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n>\n> changed the code to insert the chunks to a temporary table :\n>\n> create temporary table build_attachment (seq bigserial primary key,\n> chunk bytea ) on commit drop;\n>\n> we then insert our 4K chunks to this, which takes very little time (20\n> seconds for the 32,774 inserts)\n>\n> Here's an example though of trying to select the aggregate:\n>\n> gary=> \\timing\n> Timing is on.\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 4000 \\g output\n> Time: 13372.843 ms\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 8000 \\g output\n> Time: 54447.541 ms\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 16000 \\g output\n> Time: 582219.773 ms\n>\n> So those partial aggregates completed in somewhat acceptable times but ...\n>\n> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> where seq < 32000 \\g output\n> this one hadn't completed in an hour - the PostgreSQL connection\n> process for my connection on the server goes to 100% CPU and stays\n> there, not using much RAM, not doing much IO, oddly\n>\n> EXPLAINing these aggregate selects doesn't show anything useful.\n>\n> Am I doomed to not be able to update a bytea this way? 
Is there some\n> way I can tune this?\n>\n>\nbytea is immutable object without preallocation - so update of big tasks is\nvery expensive.\n\nI am thinking so using LO API and then transformation to bytea will be much\nmore effective\n\n\\lo_import path\n\nyou can use\n\n CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid)\n RETURNS bytea AS $$\n DECLARE\n fd integer;\n size integer;\n BEGIN\n fd := lo_open(attachment, 262144);\n size := lo_lseek(fd, 0, 2);\n PERFORM lo_lseek(fd, 0, 0);\n RETURN loread(fd, size);\n EXCEPTION WHEN undefined_object THEN\n PERFORM lo_close(fd);\n RETURN NULL;\n END;\n $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';\n\nfunction\n\nimport cca 44MB was in few seconds\n\nRegards\n\nPavel\n\n2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:We are trying to implement postgresql code to load a large object into\na postgresql bytea in chunks to avoid loading the file into memory in\nthe client.\n\nFirst attempt was to do\n\nupdate build_attachment set chunk = chunk || newdata ;\n\nthis did not scale and got significantly slower after 4000-5000 updates.\n\nThe chunks are 4K in size, and I'm testing with a 128MB input file,\nrequiring 32,774 chunk updates.\n\nNext, I tried creating an aggregate, thus:\n\n(taken from stackoverflow)\n\nCREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n\nchanged the code to insert the chunks to a temporary table :\n\ncreate temporary table build_attachment (seq bigserial primary key,\nchunk bytea ) on commit drop;\n\nwe then insert our 4K chunks to this, which takes very little time (20\nseconds for the 32,774 inserts)\n\nHere's an example though of trying to select the aggregate:\n\ngary=> \\timing\nTiming is on.\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 4000 \\g output\nTime: 13372.843 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 8000 \\g output\nTime: 54447.541 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 16000 \\g output\nTime: 582219.773 ms\n\nSo those partial aggregates completed in somewhat acceptable times but ...\n\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 32000 \\g output\nthis one hadn't completed in an hour - the PostgreSQL connection\nprocess for my connection on the server goes to 100% CPU and stays\nthere, not using much RAM, not doing much IO, oddly\n\nEXPLAINing these aggregate selects doesn't show anything useful.\n\nAm I doomed to not be able to update a bytea this way? Is there some\nway I can tune this?\nbytea is immutable object without preallocation - so update of big tasks is very expensive. 
I am thinking so using LO API and then transformation to bytea will be much more effective\\lo_import pathyou can use  CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid) RETURNS bytea AS $$ DECLARE  fd        integer;  size      integer; BEGIN  fd   := lo_open(attachment, 262144);  size := lo_lseek(fd, 0, 2);  PERFORM lo_lseek(fd, 0, 0);  RETURN loread(fd, size); EXCEPTION WHEN undefined_object THEN   PERFORM lo_close(fd);   RETURN NULL; END; $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';functionimport cca 44MB was in few secondsRegardsPavel", "msg_date": "Wed, 21 Mar 2018 13:56:24 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: badly scaling performance with appending to bytea" }, { "msg_contents": "2018-03-21 13:56 GMT+01:00 Pavel Stehule <[email protected]>:\n\n>\n>\n> 2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:\n>\n>> We are trying to implement postgresql code to load a large object into\n>> a postgresql bytea in chunks to avoid loading the file into memory in\n>> the client.\n>>\n>> First attempt was to do\n>>\n>> update build_attachment set chunk = chunk || newdata ;\n>>\n>> this did not scale and got significantly slower after 4000-5000 updates.\n>>\n>> The chunks are 4K in size, and I'm testing with a 128MB input file,\n>> requiring 32,774 chunk updates.\n>>\n>> Next, I tried creating an aggregate, thus:\n>>\n>> (taken from stackoverflow)\n>>\n>> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n>>\n>> changed the code to insert the chunks to a temporary table :\n>>\n>> create temporary table build_attachment (seq bigserial primary key,\n>> chunk bytea ) on commit drop;\n>>\n>> we then insert our 4K chunks to this, which takes very little time (20\n>> seconds for the 32,774 inserts)\n>>\n>> Here's an example though of trying to select the aggregate:\n>>\n>> gary=> \\timing\n>> Timing is on.\n>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>> where seq < 4000 \\g output\n>> Time: 13372.843 ms\n>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>> where seq < 8000 \\g output\n>> Time: 54447.541 ms\n>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>> where seq < 16000 \\g output\n>> Time: 582219.773 ms\n>>\n>> So those partial aggregates completed in somewhat acceptable times but ...\n>>\n>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>> where seq < 32000 \\g output\n>> this one hadn't completed in an hour - the PostgreSQL connection\n>> process for my connection on the server goes to 100% CPU and stays\n>> there, not using much RAM, not doing much IO, oddly\n>>\n>> EXPLAINing these aggregate selects doesn't show anything useful.\n>>\n>> Am I doomed to not be able to update a bytea this way? 
Is there some\n>> way I can tune this?\n>>\n>>\n> bytea is immutable object without preallocation - so update of big tasks\n> is very expensive.\n>\n> I am thinking so using LO API and then transformation to bytea will be\n> much more effective\n>\n> \\lo_import path\n>\n> you can use\n>\n> CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid)\n> RETURNS bytea AS $$\n> DECLARE\n> fd integer;\n> size integer;\n> BEGIN\n> fd := lo_open(attachment, 262144);\n> size := lo_lseek(fd, 0, 2);\n> PERFORM lo_lseek(fd, 0, 0);\n> RETURN loread(fd, size);\n> EXCEPTION WHEN undefined_object THEN\n> PERFORM lo_close(fd);\n> RETURN NULL;\n> END;\n> $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =\n> 'pg_catalog';\n>\n> function\n>\n> import cca 44MB was in few seconds\n>\n\nthere is native function lo_get\n\n https://www.postgresql.org/docs/current/static/lo-funcs.html\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n\n2018-03-21 13:56 GMT+01:00 Pavel Stehule <[email protected]>:2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:We are trying to implement postgresql code to load a large object into\na postgresql bytea in chunks to avoid loading the file into memory in\nthe client.\n\nFirst attempt was to do\n\nupdate build_attachment set chunk = chunk || newdata ;\n\nthis did not scale and got significantly slower after 4000-5000 updates.\n\nThe chunks are 4K in size, and I'm testing with a 128MB input file,\nrequiring 32,774 chunk updates.\n\nNext, I tried creating an aggregate, thus:\n\n(taken from stackoverflow)\n\nCREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n\nchanged the code to insert the chunks to a temporary table :\n\ncreate temporary table build_attachment (seq bigserial primary key,\nchunk bytea ) on commit drop;\n\nwe then insert our 4K chunks to this, which takes very little time (20\nseconds for the 32,774 inserts)\n\nHere's an example though of trying to select the aggregate:\n\ngary=> \\timing\nTiming is on.\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 4000 \\g output\nTime: 13372.843 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 8000 \\g output\nTime: 54447.541 ms\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 16000 \\g output\nTime: 582219.773 ms\n\nSo those partial aggregates completed in somewhat acceptable times but ...\n\ngary=> select bytea_agg(chunk order by seq) from build_attachment\nwhere seq < 32000 \\g output\nthis one hadn't completed in an hour - the PostgreSQL connection\nprocess for my connection on the server goes to 100% CPU and stays\nthere, not using much RAM, not doing much IO, oddly\n\nEXPLAINing these aggregate selects doesn't show anything useful.\n\nAm I doomed to not be able to update a bytea this way? Is there some\nway I can tune this?\nbytea is immutable object without preallocation - so update of big tasks is very expensive. 
I am thinking so using LO API and then transformation to bytea will be much more effective\\lo_import pathyou can use  CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid) RETURNS bytea AS $$ DECLARE  fd        integer;  size      integer; BEGIN  fd   := lo_open(attachment, 262144);  size := lo_lseek(fd, 0, 2);  PERFORM lo_lseek(fd, 0, 0);  RETURN loread(fd, size); EXCEPTION WHEN undefined_object THEN   PERFORM lo_close(fd);   RETURN NULL; END; $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path = 'pg_catalog';functionimport cca 44MB was in few secondsthere is native function lo_get https://www.postgresql.org/docs/current/static/lo-funcs.htmlRegardsPavel", "msg_date": "Wed, 21 Mar 2018 13:59:47 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: badly scaling performance with appending to bytea" }, { "msg_contents": "Thank you Pavel for those ideas.\n\nI should probably have mentioned we don't have access to the file\nsystem on the PostgreSQL server, as it's provided by Amazon AWS RDS\nservice.\n\nThese functions look good when you can push the file to be loaded into\nthe database file system.\n\nI'll see if it's possible to do this on AWS PostgreSQL RDS service but\nthis sort of thing is usually not\n\nOn 21 March 2018 at 12:59, Pavel Stehule <[email protected]> wrote:\n>\n>\n> 2018-03-21 13:56 GMT+01:00 Pavel Stehule <[email protected]>:\n>>\n>>\n>>\n>> 2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:\n>>>\n>>> We are trying to implement postgresql code to load a large object into\n>>> a postgresql bytea in chunks to avoid loading the file into memory in\n>>> the client.\n>>>\n>>> First attempt was to do\n>>>\n>>> update build_attachment set chunk = chunk || newdata ;\n>>>\n>>> this did not scale and got significantly slower after 4000-5000 updates.\n>>>\n>>> The chunks are 4K in size, and I'm testing with a 128MB input file,\n>>> requiring 32,774 chunk updates.\n>>>\n>>> Next, I tried creating an aggregate, thus:\n>>>\n>>> (taken from stackoverflow)\n>>>\n>>> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n>>>\n>>> changed the code to insert the chunks to a temporary table :\n>>>\n>>> create temporary table build_attachment (seq bigserial primary key,\n>>> chunk bytea ) on commit drop;\n>>>\n>>> we then insert our 4K chunks to this, which takes very little time (20\n>>> seconds for the 32,774 inserts)\n>>>\n>>> Here's an example though of trying to select the aggregate:\n>>>\n>>> gary=> \\timing\n>>> Timing is on.\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 4000 \\g output\n>>> Time: 13372.843 ms\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 8000 \\g output\n>>> Time: 54447.541 ms\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 16000 \\g output\n>>> Time: 582219.773 ms\n>>>\n>>> So those partial aggregates completed in somewhat acceptable times but\n>>> ...\n>>>\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 32000 \\g output\n>>> this one hadn't completed in an hour - the PostgreSQL connection\n>>> process for my connection on the server goes to 100% CPU and stays\n>>> there, not using much RAM, not doing much IO, oddly\n>>>\n>>> EXPLAINing these aggregate selects doesn't show anything useful.\n>>>\n>>> Am I doomed to not be able to update a bytea this way? 
Is there some\n>>> way I can tune this?\n>>>\n>>\n>> bytea is immutable object without preallocation - so update of big tasks\n>> is very expensive.\n>>\n>> I am thinking so using LO API and then transformation to bytea will be\n>> much more effective\n>>\n>> \\lo_import path\n>>\n>> you can use\n>>\n>> CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid)\n>> RETURNS bytea AS $$\n>> DECLARE\n>> fd integer;\n>> size integer;\n>> BEGIN\n>> fd := lo_open(attachment, 262144);\n>> size := lo_lseek(fd, 0, 2);\n>> PERFORM lo_lseek(fd, 0, 0);\n>> RETURN loread(fd, size);\n>> EXCEPTION WHEN undefined_object THEN\n>> PERFORM lo_close(fd);\n>> RETURN NULL;\n>> END;\n>> $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =\n>> 'pg_catalog';\n>>\n>> function\n>>\n>> import cca 44MB was in few seconds\n>\n>\n> there is native function lo_get\n>\n> https://www.postgresql.org/docs/current/static/lo-funcs.html\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n\n", "msg_date": "Wed, 21 Mar 2018 13:04:55 +0000", "msg_from": "Gary Cowell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: badly scaling performance with appending to bytea" }, { "msg_contents": "2018-03-21 14:04 GMT+01:00 Gary Cowell <[email protected]>:\n\n> Thank you Pavel for those ideas.\n>\n> I should probably have mentioned we don't have access to the file\n> system on the PostgreSQL server, as it's provided by Amazon AWS RDS\n> service.\n>\n> These functions look good when you can push the file to be loaded into\n> the database file system.\n>\n> I'll see if it's possible to do this on AWS PostgreSQL RDS service but\n> this sort of thing is usually not\n>\n\nlo API doesn't need file access\n\n https://www.postgresql.org/docs/9.2/static/lo-interfaces.html\n\nyou can use lo_write function\n\n\n\n> On 21 March 2018 at 12:59, Pavel Stehule <[email protected]> wrote:\n> >\n> >\n> > 2018-03-21 13:56 GMT+01:00 Pavel Stehule <[email protected]>:\n> >>\n> >>\n> >>\n> >> 2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:\n> >>>\n> >>> We are trying to implement postgresql code to load a large object into\n> >>> a postgresql bytea in chunks to avoid loading the file into memory in\n> >>> the client.\n> >>>\n> >>> First attempt was to do\n> >>>\n> >>> update build_attachment set chunk = chunk || newdata ;\n> >>>\n> >>> this did not scale and got significantly slower after 4000-5000\n> updates.\n> >>>\n> >>> The chunks are 4K in size, and I'm testing with a 128MB input file,\n> >>> requiring 32,774 chunk updates.\n> >>>\n> >>> Next, I tried creating an aggregate, thus:\n> >>>\n> >>> (taken from stackoverflow)\n> >>>\n> >>> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n> >>>\n> >>> changed the code to insert the chunks to a temporary table :\n> >>>\n> >>> create temporary table build_attachment (seq bigserial primary key,\n> >>> chunk bytea ) on commit drop;\n> >>>\n> >>> we then insert our 4K chunks to this, which takes very little time (20\n> >>> seconds for the 32,774 inserts)\n> >>>\n> >>> Here's an example though of trying to select the aggregate:\n> >>>\n> >>> gary=> \\timing\n> >>> Timing is on.\n> >>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> >>> where seq < 4000 \\g output\n> >>> Time: 13372.843 ms\n> >>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> >>> where seq < 8000 \\g output\n> >>> Time: 54447.541 ms\n> >>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> >>> where seq < 16000 \\g output\n> >>> Time: 582219.773 ms\n> 
>>>\n> >>> So those partial aggregates completed in somewhat acceptable times but\n> >>> ...\n> >>>\n> >>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n> >>> where seq < 32000 \\g output\n> >>> this one hadn't completed in an hour - the PostgreSQL connection\n> >>> process for my connection on the server goes to 100% CPU and stays\n> >>> there, not using much RAM, not doing much IO, oddly\n> >>>\n> >>> EXPLAINing these aggregate selects doesn't show anything useful.\n> >>>\n> >>> Am I doomed to not be able to update a bytea this way? Is there some\n> >>> way I can tune this?\n> >>>\n> >>\n> >> bytea is immutable object without preallocation - so update of big tasks\n> >> is very expensive.\n> >>\n> >> I am thinking so using LO API and then transformation to bytea will be\n> >> much more effective\n> >>\n> >> \\lo_import path\n> >>\n> >> you can use\n> >>\n> >> CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid)\n> >> RETURNS bytea AS $$\n> >> DECLARE\n> >> fd integer;\n> >> size integer;\n> >> BEGIN\n> >> fd := lo_open(attachment, 262144);\n> >> size := lo_lseek(fd, 0, 2);\n> >> PERFORM lo_lseek(fd, 0, 0);\n> >> RETURN loread(fd, size);\n> >> EXCEPTION WHEN undefined_object THEN\n> >> PERFORM lo_close(fd);\n> >> RETURN NULL;\n> >> END;\n> >> $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =\n> >> 'pg_catalog';\n> >>\n> >> function\n> >>\n> >> import cca 44MB was in few seconds\n> >\n> >\n> > there is native function lo_get\n> >\n> > https://www.postgresql.org/docs/current/static/lo-funcs.html\n> >\n> >\n> >>\n> >> Regards\n> >>\n> >> Pavel\n> >>\n> >\n>\n>\n\n2018-03-21 14:04 GMT+01:00 Gary Cowell <[email protected]>:Thank you Pavel for those ideas.\n\nI should probably have mentioned we don't have access to the file\nsystem on the PostgreSQL server, as it's provided by Amazon AWS RDS\nservice.\n\nThese functions look good when you can push the file to be loaded into\nthe database file system.\n\nI'll see if it's possible to do this on AWS PostgreSQL RDS service but\nthis sort of thing is usually notlo API doesn't need file access https://www.postgresql.org/docs/9.2/static/lo-interfaces.htmlyou can use lo_write function\n\nOn 21 March 2018 at 12:59, Pavel Stehule <[email protected]> wrote:\n>\n>\n> 2018-03-21 13:56 GMT+01:00 Pavel Stehule <[email protected]>:\n>>\n>>\n>>\n>> 2018-03-21 13:03 GMT+01:00 Gary Cowell <[email protected]>:\n>>>\n>>> We are trying to implement postgresql code to load a large object into\n>>> a postgresql bytea in chunks to avoid loading the file into memory in\n>>> the client.\n>>>\n>>> First attempt was to do\n>>>\n>>> update build_attachment set chunk = chunk || newdata ;\n>>>\n>>> this did not scale and got significantly slower after 4000-5000 updates.\n>>>\n>>> The chunks are 4K in size, and I'm testing with a 128MB input file,\n>>> requiring 32,774 chunk updates.\n>>>\n>>> Next, I tried creating an aggregate, thus:\n>>>\n>>> (taken from stackoverflow)\n>>>\n>>> CREATE AGGREGATE bytea_agg(bytea) (SFUNC=byteacat,STYPE=bytea);\n>>>\n>>> changed the code to insert the chunks to a temporary table :\n>>>\n>>> create temporary table build_attachment (seq bigserial primary key,\n>>> chunk bytea ) on commit drop;\n>>>\n>>> we then insert our 4K chunks to this, which takes very little time (20\n>>> seconds for the 32,774 inserts)\n>>>\n>>> Here's an example though of trying to select the aggregate:\n>>>\n>>> gary=> \\timing\n>>> Timing is on.\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where 
seq < 4000 \\g output\n>>> Time: 13372.843 ms\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 8000 \\g output\n>>> Time: 54447.541 ms\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 16000 \\g output\n>>> Time: 582219.773 ms\n>>>\n>>> So those partial aggregates completed in somewhat acceptable times but\n>>> ...\n>>>\n>>> gary=> select bytea_agg(chunk order by seq) from build_attachment\n>>> where seq < 32000 \\g output\n>>> this one hadn't completed in an hour - the PostgreSQL connection\n>>> process for my connection on the server goes to 100% CPU and stays\n>>> there, not using much RAM, not doing much IO, oddly\n>>>\n>>> EXPLAINing these aggregate selects doesn't show anything useful.\n>>>\n>>> Am I doomed to not be able to update a bytea this way? Is there some\n>>> way I can tune this?\n>>>\n>>\n>> bytea is immutable object without preallocation - so update of big tasks\n>> is very expensive.\n>>\n>> I am thinking so using LO API and then transformation to bytea will be\n>> much more effective\n>>\n>> \\lo_import path\n>>\n>> you can use\n>>\n>>  CREATE OR REPLACE FUNCTION attachment_to_bytea(attachment oid)\n>>  RETURNS bytea AS $$\n>>  DECLARE\n>>   fd        integer;\n>>   size      integer;\n>>  BEGIN\n>>   fd   := lo_open(attachment, 262144);\n>>   size := lo_lseek(fd, 0, 2);\n>>   PERFORM lo_lseek(fd, 0, 0);\n>>   RETURN loread(fd, size);\n>>  EXCEPTION WHEN undefined_object THEN\n>>    PERFORM lo_close(fd);\n>>    RETURN NULL;\n>>  END;\n>>  $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =\n>> 'pg_catalog';\n>>\n>> function\n>>\n>> import cca 44MB was in few seconds\n>\n>\n> there is native function lo_get\n>\n>  https://www.postgresql.org/docs/current/static/lo-funcs.html\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>", "msg_date": "Wed, 21 Mar 2018 14:09:27 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: badly scaling performance with appending to bytea" } ]
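To make the large-object route from the thread above concrete, here is a minimal sketch using only server-side functions, so it also works where there is no server filesystem access (e.g. RDS): stream the chunks into a large object with lowrite(), then materialise it as bytea in one call with lo_get() (available since 9.4) and drop the temporary object. The OID 16453, the descriptor 0 and the target table build(id, attachment) are placeholders for illustration; large-object descriptors are only valid inside the surrounding transaction.

BEGIN;

-- 1. Create an empty large object and open it for writing (131072 = INV_WRITE).
SELECT lo_creat(-1);              -- suppose this returns OID 16453
SELECT lo_open(16453, 131072);    -- suppose this returns descriptor 0

-- 2. Append each 4K chunk as it arrives from the client; repeat per chunk.
--    Each write only appends, unlike bytea concatenation, which recopies
--    the whole value on every update.
SELECT lowrite(0, '\x0102030405'::bytea);

-- 3. Close the object, copy its full contents into the bytea column in one
--    call, and remove the now-redundant large object.
SELECT lo_close(0);
UPDATE build SET attachment = lo_get(16453) WHERE id = 1;
SELECT lo_unlink(16453);

COMMIT;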
[ { "msg_contents": "Hi,\nI have a query on DB corruption. Is there any way to recover from it \nwithout losing data ?\n\nStarting postgresql service: [ OK ]\npsql: FATAL: index \"pg_authid_rolname_index\" contains unexpected zero page \nat block 0\nHINT: Please REINDEX it.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb. \n\nWith Best Regards\nAkshay\n\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHi,\nI have a query on DB corruption. Is there\nany way to recover from it without losing data ?\n\nStarting postgresql service:\n[ OK ]\npsql: FATAL: index \"pg_authid_rolname_index\" contains unexpected\nzero page at block 0\nHINT: Please REINDEX it.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb.\npsql: FATAL: \"base/11564\" is not a valid data directory\nDETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\nHINT: You might need to initdb. \n\nWith Best Regards\nAkshay\n\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Fri, 23 Mar 2018 13:29:35 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "DB corruption" }, { "msg_contents": "On Fri, Mar 23, 2018 at 01:29:35PM +0530, Akshay Ballarpure wrote:\n> I have a query on DB corruption. 
Is there any way to recover from it \n> without losing data ?\n\nCorrupted pages which need to be zeroed in order to recover the rest is\ndata lost forever, except if you have a backup you can rollback to.\nPlease see here for some global instructions about how to deal with such\nsituations:\nhttps://wiki.postgresql.org/wiki/Corruption\n\nFirst take a deep breath, and take the time to read and understand it.\n\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain \n> confidential or privileged information. If you are \n> not the intended recipient, any dissemination, use, \n> review, distribution, printing or copying of the \n> information contained in this e-mail message \n> and/or attachments to it are strictly prohibited. If \n> you have received this communication in error, \n> please notify us by reply e-mail or telephone and \n> immediately and permanently delete the message \n> and any attachments. Thank you\n\nThis is a public mailing list.\n--\nMichael", "msg_date": "Fri, 23 Mar 2018 17:04:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB corruption" }, { "msg_contents": "Akshay Ballarpure <[email protected]> writes:\n> I have a query on DB corruption. Is there any way to recover from it \n> without losing data ?\n\nYou've already lost data, evidently.\n\n> Starting postgresql service: [ OK ]\n> psql: FATAL: index \"pg_authid_rolname_index\" contains unexpected zero page \n> at block 0\n> HINT: Please REINDEX it.\n\nThis is not good. It'd be possible to reindex that index, certainly,\nbut the question is what other files have also been clobbered.\n\n> psql: FATAL: \"base/11564\" is not a valid data directory\n> DETAIL: File \"base/11564/PG_VERSION\" does not contain valid data.\n> HINT: You might need to initdb.\n\nBased on the OID I'm going to guess that this is from an attempt to\nconnect to the \"postgres\" database. (I'm also going to guess that\nyou're running 8.4.x, because any later PG version would have a higher\nOID for \"postgres\".) Can you connect to any other databases? If so,\ndo their contents seem intact? If you're really lucky, meaning (a) the\ndamage is confined to that DB and (b) you didn't keep any important\ndata in it, then dropping and recreating the \"postgres\" DB might be\nenough to get you out of trouble. But pg_authid_rolname_index is\na cluster-global index, not specific to the \"postgres\" DB, so the\nfact that it too seems to be damaged is not promising.\n\nTBH your best bet, if the data in this installation is valuable and\nyou don't have adequate backups, is to hire a professional data\nrecovery service --- there are several companies that specialize in\ngetting as much out of a corrupted PG installation as possible.\n(See https://www.postgresql.org/support/professional_support/ for\nsome links.) You should then plan on updating to some newer PG\nrelease; 8.4.x has been out of support for years, and there are lots\nof known-and-unfixed bugs in it.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 23 Mar 2018 11:08:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB corruption" } ]
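For the index symptom alone, the usual follow-up to the REINDEX hint is to work on a copy of the data directory and rebuild the shared catalog index from a standalone backend started with -P, which ignores system indexes while reading catalogs (paths are placeholders; this does nothing for the clobbered base/11564/PG_VERSION file, which is why a restore or professional recovery remains necessary):

postgres --single -P -D /path/to/copied/datadir postgres

backend> REINDEX INDEX pg_authid_rolname_index;

Only attempt this after taking a complete filesystem-level backup of the data directory, as the corruption wiki page linked above stresses.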
[ { "msg_contents": "\nMy queries get up to 10 times faster when I disable from_collapse\n(setting from_collapse_limit=1).\n\nAfter this finding, The pramatic solution is easy: it needs to be\nswitched off.\n\nBUT:\nI found this perchance, accidentally (after the queries had been\nrunning for years). And this gives me some questions about\ndocumentation and best practices.\n\nI could not find any documentation or evaluation that would say\nthat from_collapse can have detrimental effects. Even less, which\ntype of queries may suffer from that.\n\nSince we cannot experimentally for all of our queries try out all\nkinds of options, if they might have significant (negative) effects,\nmy understanding now is that, as a best practice, from_collapse\nshould be switched off by default. And only after development it\nshould be tested if activating it gives a positive improvement.\n\nSadly, my knowledge does not reach into the internals. I can\nunderstand which *logical* result I should expect from an SQL\nstatement. But I do not know how this is achieved internally.\nSo, I have a very hard time when trying to understand output from\nEXPLAIN, or to make an educated guess on how the design of a\nquery may influence execution strategy. I am usually happy when\nI found some SQL that would correctly produce the results I need.\nIn short: I lack the experience to do manual optimization, or to\nsee where manual optimization might be feasible.\n\nThe manual section \"Controlling the Planner with Explicit JOIN\nClauses\" gives a little discussion on the issue. But it seems only\nconcerned about an increasing amount of cycles used for the\nplanning activity, not about bad results from the optimization.\nWorse, it creates the impression that giving the planner maximum\nfreedom is usually a good thing (at least until it takes too much\ncycles for the planner to evaluate all possibilities).\n\nIn my case, planning uses 1 or 2% of the cycles needed for\nexecution; that seems alright to me. \nAnd, as said above, I cannot see why my queries might be an\natypical case (I don't think they are).\n\nIf somebody would like to get a hands-on look onto the actual\ncase, I'd be happy to put it online.\n\nrgds,\nPMc\n\n", "msg_date": "Fri, 23 Mar 2018 11:03:08 +0100", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Peter schrieb am 23.03.2018 um 11:03:\n> My queries get up to 10 times faster when I disable from_collapse\n> (setting from_collapse_limit=1).\n> \n> After this finding, The pramatic solution is easy: it needs to be\n> switched off.\n\nYou should post some example queries together with the slow and fast plans. \nIdeally generated using \"explain(analyze, buffers)\" instead of a simple \"explain\" to see details on the execution \n\nThomas\n\n\n\n", "msg_date": "Fri, 23 Mar 2018 12:39:07 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Peter wrote:\n> My queries get up to 10 times faster when I disable from_collapse\n> (setting from_collapse_limit=1).\n> \n> After this finding, The pramatic solution is easy: it needs to be\n> switched off.\n> \n> BUT:\n> I found this perchance, accidentally (after the queries had been\n> running for years). 
And this gives me some questions about\n> documentation and best practices.\n> \n> I could not find any documentation or evaluation that would say\n> that from_collapse can have detrimental effects. Even less, which\n> type of queries may suffer from that.\n\nhttps://www.postgresql.org/docs/current/static/explicit-joins.html\nstates towards the end of the page that the search tree grows\nexponentially with the number of relations, and from_collapse_limit\ncan be set to control that.\n\n> In my case, planning uses 1 or 2% of the cycles needed for\n> execution; that seems alright to me. \n> And, as said above, I cannot see why my queries might be an\n> atypical case (I don't think they are).\n> \n> If somebody would like to get a hands-on look onto the actual\n> case, I'd be happy to put it online.\n\nIt seems like you are barking up the wrong tree.\n\nYour query does not take long because of the many relations in the\nFROM list, but because the optimizer makes a wrong choice.\n\nIf you set from_collapse_limit to 1, you force the optimizer to\njoin the tables in the order in which they appear in the query, and\nby accident this yields a better plan than the one generated if the\noptimizer is free to do what it thinks is best.\n\nThe correct solution is *not* to set from_collapse_limit = 1, but\nto find and fix the problem that causes the optimizer to make a\nwrong choice.\n\nIf you send the query and the output of\nEXPLAIN (ANALYZE, BUFFERS) SELECT ...\nwe have a chance of telling you what's wrong.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Fri, 23 Mar 2018 12:41:35 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> Peter wrote:\n>> I could not find any documentation or evaluation that would say\n>> that from_collapse can have detrimental effects. Even less, which\n>> type of queries may suffer from that.\n\n> https://www.postgresql.org/docs/current/static/explicit-joins.html\n> states towards the end of the page that the search tree grows\n> exponentially with the number of relations, and from_collapse_limit\n> can be set to control that.\n\nIt's conceivable that the OP's problem is actually planning time\n(if the query joins sufficiently many tables) and that restricting\nthe cost of the join plan search is really what he needs to do.\nLacking any further information about the problem, we can't say.\n\nWe can, however, point to\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nconcerning how to ask this type of question effectively.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 23 Mar 2018 10:14:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "\nThe problem appeared when I found the queries suddenly taking longer\nthan usual. Investigation showed that execution time greatly depends\non the way the queries are invoked.\nConsider fn(x) simply a macro containing a plain SQL SELECT statement\nreturning SETOF (further detail follows below):\n\n# SELECT fn(x);\n-> 6.3 sec.\n\n# SELECT a from fn(x) as a;\n-> 1.3 sec.\n\nFurther investigation with auto_explain shows different plans being\nchosen. The slower one uses an Index Only Scan, which seems to perform\nbad. 
Slightly increasing random_page_cost solves this, but this seems\nthe wrong way, because we are on SSD+ZFS, where random_page_cost\nactually should be DEcreased, as there is no difference if random or\nsequential.\n\nDuring this effort I accidentally came upon from_collapse_limit,\nand setting it off significantly changed things:\n\n# SET from_collapse_limit = 1;\n\n# SELECT fn(x);\n-> 0.6 sec.\n\n# SELECT a from fn(x) as a;\n-> 1.2 sec.\n\nThe plans look different now (obviousely), and again the difference\nbetween the two invocations comes from an an Index Only Scan, but\nthis time the Index Only Scan is faster. So now we can reduce\nrandom_page_cost in order to better reflect physical circumstances,\nand then both invocations will be fast.\n\n From here it looks like from_collapse is the problem.\n\n\nNow for the details:\n\nVACUUM ANALYZE is up to date, and all respective configurations are as\ndefault.\n\nThe query itself contains three nested SELECTS working all on the same\ntable. The table is 400'000 rows, 36 MB. (The machine is a pentium-3,\nwhich is my router - so don't be surprized about the comparatively long\nexecution times.)\n\nThis is the (critical part of the) query - let $1 be something like\n'2017-03-03':\n\n SELECT MAX(quotes.datum) AS ratedate, aktkurs.*\n FROM quotes, wpnames, places,\n (SELECT quotes.datum, close, quotes.wpname_id, places.waehrung\n FROM quotes, wpnames, places,\n (SELECT MAX(datum) AS datum, wpname_id\n FROM quotes\n WHERE datum <= $1\n GROUP BY wpname_id) AS newest\n WHERE newest.datum = quotes.datum\n AND newest.wpname_id = quotes.wpname_id\n AND quotes.wpname_id = wpnames.id\n AND wpnames.place_id = places.id) AS aktkurs\n WHERE quotes.wpname_id = wpnames.id\n AND wpnames.place_id = places.id AND places.platz = 'WAEHR'\n AND wpnames.nummer = aktkurs.waehrung\n AND quotes.datum <= aktkurs.datum\n GROUP BY aktkurs.datum, aktkurs.close, aktkurs.wpname_id,\n aktkurs.waehrung\n\nHere are the (respective parts of the) tables:\n\nCREATE TABLE public.quotes -- rows = 405466, 36 MB\n(\n id integer NOT NULL DEFAULT nextval('quotes_id_seq'::regclass),\n wpname_id integer NOT NULL,\n datum date NOT NULL,\n close double precision NOT NULL,\n CONSTRAINT quotes_pkey PRIMARY KEY (id),\n CONSTRAINT fk_rails_626c320689 FOREIGN KEY (wpname_id)\n REFERENCES public.wpnames (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n) \nCREATE INDEX quotes_wd_idx -- 8912 kB\n ON public.quotes\n USING btree\n (wpname_id, datum);\n\nCREATE TABLE public.wpnames -- rows = 357, 40 kB\n(\n id integer NOT NULL DEFAULT nextval('wpnames_id_seq'::regclass),\n place_id integer NOT NULL,\n nummer text NOT NULL,\n name text NOT NULL,\n CONSTRAINT wpnames_pkey PRIMARY KEY (id),\n CONSTRAINT fk_rails_18eae07552 FOREIGN KEY (place_id)\n REFERENCES public.places (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n \nCREATE TABLE public.places -- rows = 11, 8192 b\n(\n id integer NOT NULL DEFAULT nextval('places_id_seq'::regclass),\n platz text NOT NULL,\n text text,\n waehrung character varying(3) NOT NULL,\n CONSTRAINT places_pkey PRIMARY KEY (id)\n)\n\nHint: the quotes table contains daily stock quotes AND forex quotes,\nand what the thing does is fetch the newest quotes before a given\ndate (inmost SELECT), fetch the respective currency (\"waehrung\") from\nwpnames+places (next SELECT), and fetch the (date of the) respective\nnewest forex quote (last SELECT). 
(A final outermost fourth select\nwill then put it all together, but thats not part of the problem.)\n\nFinally, the execution plans:\n\n6 sec. index only scan with from_collapse:\nhttps://explain.depesz.com/s/IPaT\n\n1.3 sec. seq scan with from_collapse:\nhttps://explain.depesz.com/s/Bxys\n\n1.2 sec. seq scan w/o from_collapse:\nhttps://explain.depesz.com/s/V02L\n\n0.6 sec. index only scan w/o from_collapse:\nhttps://explain.depesz.com/s/8Xh\n\n\nAddendum: from the Guides for the mailing list, supplemental\ninformation as requested. As this concerns planner strategy, which is\ninfluenced by statistics, it appears difficult to me to create a\nproper test-case, because I would need to know from where the planner\nfetches the decision-relevant information - which is exactly my\nquestion: how does it get the clue to choose the bad plans?\n\n CPU: Intel Pentium III (945.02-MHz 686-class CPU)\n avail memory = 2089263104 (1992 MB)\n FreeBSD 11.1-RELEASE-p7\n PostgreSQL 9.5.7 on i386-portbld-freebsd11.1, compiled by FreeBSD clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on LLVM 4.0.0), 32-bit\n\n name | current_setting | source \n------------------------------+----------------------------------------+--------------------\n application_name | psql | client\n archive_command | ~pgsql/autojobs/RedoLog.copy \"%f\" \"%p\" | configuration file\n archive_mode | on | configuration file\n autovacuum | off | configuration file\n autovacuum_naptime | 5min | configuration file\n checkpoint_completion_target | 0 | configuration file\n checkpoint_timeout | 10min | configuration file\n client_encoding | UTF8 | client\n DateStyle | German, DMY | configuration file\n default_text_search_config | pg_catalog.german | configuration file\n dynamic_shared_memory_type | posix | configuration file\n effective_cache_size | 1GB | configuration file\n effective_io_concurrency | 2 | configuration file\n full_page_writes | off | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | de_DE.UTF-8 | configuration file\n listen_addresses | 192.168.97.9,192.168.97.17 | configuration file\n log_checkpoints | on | configuration file\n log_connections | on | configuration file\n log_destination | syslog | configuration file\n log_disconnections | on | configuration file\n log_error_verbosity | terse | configuration file\n log_line_prefix | %u:%d[%r] | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_statement | 1min | configuration file\n log_min_messages | info | configuration file\n log_temp_files | 10000kB | configuration file\n maintenance_work_mem | 350MB | configuration file\n max_connections | 60 | configuration file\n max_files_per_process | 200 | configuration file\n max_stack_depth | 60MB | configuration file\n max_wal_size | 1GB | configuration file\n min_wal_size | 80MB | configuration file\n shared_buffers | 180MB | configuration file\n synchronous_commit | on | configuration file\n temp_buffers | 80MB | configuration file\n unix_socket_permissions | 0777 | configuration file\n wal_buffers | 256kB | configuration file\n wal_level | archive | configuration file\n wal_writer_delay | 2s | configuration file\n work_mem | 350MB | configuration file\n\n\n", "msg_date": "Fri, 23 Mar 2018 15:30:21 +0100", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should from_collapse be switched off? 
(queries 10 times faster)" }, { "msg_contents": "On Fri, Mar 23, 2018 at 12:41:35PM +0100, Laurenz Albe wrote:\n\n! https://www.postgresql.org/docs/current/static/explicit-joins.html\n! states towards the end of the page that the search tree grows\n! exponentially with the number of relations, and from_collapse_limit\n! can be set to control that.\n\nYes, I read that page.\n\n! > In my case, planning uses 1 or 2% of the cycles needed for\n! > execution; that seems alright to me. \n! > And, as said above, I cannot see why my queries might be an\n! > atypical case (I don't think they are).\n! > \n! > If somebody would like to get a hands-on look onto the actual\n! > case, I'd be happy to put it online.\n! \n! It seems like you are barking up the wrong tree.\n! \n! Your query does not take long because of the many relations in the\n! FROM list, but because the optimizer makes a wrong choice.\n\nExactly! \nAnd I am working hard in order to understand WHY this happens.\n\n! The correct solution is *not* to set from_collapse_limit = 1, but\n! to find and fix the problem that causes the optimizer to make a\n! wrong choice.\n! \n! If you send the query and the output of\n! EXPLAIN (ANALYZE, BUFFERS) SELECT ...\n! we have a chance of telling you what's wrong.\n\nYour viewpoint would be preferrable, only I am lacking any idea on\nwhere there could be such a problem that would make up a root cause.\n\nI will gladly follow Your suggestion; data is underway. \n\nP.\n\n", "msg_date": "Fri, 23 Mar 2018 15:30:52 +0100", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "On Fri, Mar 23, 2018 at 10:14:19AM -0400, Tom Lane wrote:\n\n! It's conceivable that the OP's problem is actually planning time\n! (if the query joins sufficiently many tables) and that restricting\n! the cost of the join plan search is really what he needs to do.\n\nNegative. Plnning time 10 to 27 ms. Execution time 600 to 6300 ms.\n\n! Lacking any further information about the problem, we can't say.\n! We can, however, point to\n! https://wiki.postgresql.org/wiki/Slow_Query_Questions\n! concerning how to ask this type of question effectively.\n\nI strongly hope the data that I sent as followup will now \nsuffice Your expectations.\n\nrgds,\nPMc\n\n", "msg_date": "Fri, 23 Mar 2018 18:36:58 +0100", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Peter wrote:\n> On Fri, Mar 23, 2018 at 10:14:19AM -0400, Tom Lane wrote:\n> \n> ! It's conceivable that the OP's problem is actually planning time\n> ! (if the query joins sufficiently many tables) and that restricting\n> ! the cost of the join plan search is really what he needs to do.\n> \n> Negative. Plnning time 10 to 27 ms. Execution time 600 to 6300 ms.\n> \n> ! Lacking any further information about the problem, we can't say.\n> ! We can, however, point to\n> ! https://wiki.postgresql.org/wiki/Slow_Query_Questions\n> ! 
concerning how to ask this type of question effectively.\n> \n> I strongly hope the data that I sent as followup will now \n> suffice Your expectations.\n\nYour reported execution times don't match the time reported in the\nEXPLAIN output...\n\nThe cause of the long execution time is clear:\n\nThe row count of the join between \"places\" (WHERE platz = 'WAEHR'),\n\"wpnames\" and \"places AS places_1\" is underestimated by a factor of 10\n(1 row instead of 10).\n\nThe nested loop join that is chosen as a consequence is now executed\n10 times instead of the estimated 1 time, which is where almost all the\nexecution time is spent.\n\nThe question how to fix that is more complicated, and I cannot solve\nit off-hand with a complicated query like that.\n\nSetting \"enable_nestloop = off\" is as coarse as forcing \"from_collapse = 1\"\nand will negatively impact other queries - if it helps at all.\n\nYou'll probably have to rewrite the query.\nSorry that I cannot be of more help.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Sun, 25 Mar 2018 07:12:08 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Laurenz,\n\nthank You very much for Your comments!\n\nOn Sun, Mar 25, 2018 at 07:12:08AM +0200, Laurenz Albe wrote:\n\n! Your reported execution times don't match the time reported in the\n! EXPLAIN output...\n\nShould these match? \nIt seems the EXPLAIN (ANALYZE, BUFFERS) does additional things, not \njust execute the query. \n\n! The cause of the long execution time is clear:\n! \n! The row count of the join between \"places\" (WHERE platz = 'WAEHR'),\n! \"wpnames\" and \"places AS places_1\" is underestimated by a factor of 10\n! (1 row instead of 10).\n! \n! The nested loop join that is chosen as a consequence is now executed\n! 10 times instead of the estimated 1 time, which is where almost all the\n! execution time is spent.\n\nI've seen this, but do not fully understand it yet.\n \n! Setting \"enable_nestloop = off\" is as coarse as forcing \"from_collapse = 1\"\n! and will negatively impact other queries - if it helps at all.\n\nSince this query is already put into a function, I found I can easily\nset from_collapse=1 only for this function, by means of \"ALTER\nFUNCTION ... SET ...\", so it does only influence this query. \nIt seems this is the most straight-forward solution here.\n \nrgds,\nP.\n\n", "msg_date": "Mon, 26 Mar 2018 12:36:13 +0200", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" }, { "msg_contents": "Peter wrote:\n> ! Your reported execution times don't match the time reported in the\n> ! EXPLAIN output...\n> \n> Should these match? \n> It seems the EXPLAIN (ANALYZE, BUFFERS) does additional things, not \n> just execute the query.\n\nTrue.\nI had assumed you were speaking about the duration of the EXPLAIN (ANALYZE).\n \n> ! Setting \"enable_nestloop = off\" is as coarse as forcing \"from_collapse = 1\"\n> ! and will negatively impact other queries - if it helps at all.\n> \n> Since this query is already put into a function, I found I can easily\n> set from_collapse=1 only for this function, by means of \"ALTER\n> FUNCTION ... SET ...\", so it does only influence this query. 
\n> It seems this is the most straight-forward solution here.\n\nIt is an option, although not one that makes one happy.\n\nYou might have to revisit the decision if the data distribution changes\nand the chosen query plan becomes inefficient.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Mon, 26 Mar 2018 19:13:21 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should from_collapse be switched off? (queries 10 times faster)" } ]
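For reference, the per-function override discussed at the end of the thread is spelled like this (the function name and signature are placeholders for whatever wraps the posted query):

ALTER FUNCTION newest_quotes(date) SET from_collapse_limit = 1;
-- and to undo it later:
ALTER FUNCTION newest_quotes(date) RESET from_collapse_limit;

The setting applies only while that function runs, so other queries keep the default join-order search, which limits the blast radius compared to lowering from_collapse_limit globally.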
[ { "msg_contents": "Hi,\n\nI have a table api.issues that has a text column \"body\" with long texts (1000+ chars). I also wrote a custom function \"normalizeBody\" with plv8 that is a simple Text -> Text conversion. Now I created an index applying the function to the body column, so I can quickly run\n\nSELECT * FROM api.issues WHERE normalizeBody(body) = normalizeBody($1)\n\nThe issue is, that the planning time is very slow (1.8 seconds). When I replace \"normalizeBody\" with \"md5\", however, I get a planning time of 0.5ms.\n\nPlease note that row level security is enabled on the api.issues and most other tables.\n\nThanks for your help,\nBen\n\n\nDetails below:\n- Managed AWS Postgres with default settings, no replication\n- PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n- Table api.issues has approx. 40 000 rows.\n\n```\nexplain (analyze, buffers) select 1 from api.issues\n where normalizeunidiff(body) = normalizeunidiff('');\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using rejectedissues_normalized on issues (cost=0.00..218.80 rows=217 width=4) (actual time=0.160..0.204 rows=3 loops=1)\n Index Cond: (normalizeunidiff(body) = ''::text)\n Buffers: shared hit=5\n Planning time: 1878.783 ms\n Execution time: 0.230 ms\n(5 rows)\n```\n\n```\nexplain (analyze, buffers) select 1 from api.issues\n where md5(body) = md5('');\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Index Scan using rejectedissues_md5 on issues (cost=0.00..218.80 rows=217 width=4) (actual time=0.016..0.016 rows=0 loops=1)\n Index Cond: (md5(body) = 'd41d8cd98f00b204e9800998ecf8427e'::text)\n Buffers: shared hit=2\n Planning time: 0.565 ms\n Execution time: 0.043 ms\n(5 rows)\n```\n\n\n```\nCREATE OR REPLACE FUNCTION public.normalizeunidiff(\n\tunidiff text)\n RETURNS text\n LANGUAGE 'plv8'\n\n COST 100\n IMMUTABLE STRICT PARALLEL SAFE\nAS $BODY$\n\n return unidiff\n .replace(/[\\s\\S]*@@/m, '') // remove header\n .split('\\n')\n .map(function (line) { return line.trim() })\n .filter(function (line) { return line.search(/^[+-]/) >= 0 })\n .join('\\n')\n .trim()\n\n$BODY$;\n```\n\nThe indices are created this way where md5 is normalizeunidiff for the second one:\n```\nCREATE INDEX \"rejectedissues_md5\"\n ON api.issues using hash\n (md5(body));\n```\n", "msg_date": "Fri, 23 Mar 2018 21:28:22 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Slow planning time for custom function" }, { "msg_contents": "Hi,\n\nOn 2018-03-23 21:28:22 +0100, [email protected] wrote:\n> I have a table api.issues that has a text column \"body\" with long texts (1000+ chars). I also wrote a custom function \"normalizeBody\" with plv8 that is a simple Text -> Text conversion. Now I created an index applying the function to the body column, so I can quickly run\n> \n> SELECT * FROM api.issues WHERE normalizeBody(body) = normalizeBody($1)\n> \n> The issue is, that the planning time is very slow (1.8 seconds). When I replace \"normalizeBody\" with \"md5\", however, I get a planning time of 0.5ms.\n\nHow long does planning take if you repeat this? 
I wonder if a good chunk\nof those 1.8s is initial loading of plv8.\n\nGreetings,\n\nAndres Freund\n\n", "msg_date": "Fri, 23 Mar 2018 18:35:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time for custom function" }, { "msg_contents": "On 24 March 2018 at 14:35, Andres Freund <[email protected]> wrote:\n> How long does planning take if you repeat this? I wonder if a good chunk\n> of those 1.8s is initial loading of plv8.\n\nMaybe, but it also could be the execution of the function, after all,\nthe planner does invoke immutable functions:\n\n# explain verbose select lower('TEST');\n QUERY PLAN\n-------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=32)\n Output: 'test'::text\n(2 rows)\n\nWould be interesting to see what changes without the IMMUTABLE flag.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Sat, 24 Mar 2018 14:52:29 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow planning time for custom function" }, { "msg_contents": "Hi,\n\nthanks for your help which already resolved the issue for me. I worked through your replies and it is indeed a startup delay for the first call to a plv8 function in a session. I pasted the query plans below for comparison.\n\n```\nexplain analyze select normalizeunidiff('')\n\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 1863.782 ms\n Execution time: 0.022 ms\n```\n\nThen I ran again multiple times, to make sure that there was not some kind of startup delay:\n```\nselect normalizeunidiff('');\nexplain analyze select normalizeunidiff('');\n\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.000..0.001 rows=1 loops=1)\n Planning time: 0.190 ms\n Execution time: 0.008 ms\n```\n\nThanks again\n-Ben\n\n\n> On 24. Mar 2018, at 02:52, David Rowley <[email protected]> wrote:\n> \n> On 24 March 2018 at 14:35, Andres Freund <[email protected]> wrote:\n>> How long does planning take if you repeat this? I wonder if a good chunk\n>> of those 1.8s is initial loading of plv8.\n> \n> Maybe, but it also could be the execution of the function, after all,\n> the planner does invoke immutable functions:\n> \n> # explain verbose select lower('TEST');\n> QUERY PLAN\n> -------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=32)\n> Output: 'test'::text\n> (2 rows)\n> \n> Would be interesting to see what changes without the IMMUTABLE flag.\n> \n> -- \n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Sun, 25 Mar 2018 20:18:44 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Slow planning time for custom function" } ]
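Since the delay turned out to be one-time plv8 initialisation per backend rather than planning proper, a practical mitigation is to warm each new connection before it serves real traffic, for example from a connection pool's on-connect hook (a sketch; the function name comes from the thread, the hook mechanism depends on the pooler in use):

-- run once when a pooled connection is established
SELECT normalizeunidiff('');

After that first call the planner can pre-evaluate the IMMUTABLE function cheaply, and planning times fall back to the sub-millisecond range shown in the follow-up measurements.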
[ { "msg_contents": "\nGiven an arbitrary function fn(x) returning numeric.\n\nQuestion: how often is the function executed?\n\n\nA. \nselect fn('const'), fn('const');\n\nAnswer:\nTwice. \n\nThis is not a surprize.\n\n\nB.\nselect v,v from fn('const') as v; [1]\n\nAnswer:\nOnce.\n\n\nC.\nselect v.v,v.v from (select fn('const') as v) as v;\n\nAnswer:\nOnce if declared VOLATILE.\nTwice if declared STABLE.\n\nNow this IS a surprize. It is clear that the system is not allowed to\nexecute the function twice when declared VOLATILE. It IS ALLOWED to\nexecute it twice when STABLE - but to what point, except prolonging\nexecution time?\n\nOver all, VOLATILE performs better than STABLE.\n\n\n[1] I seem to remember that I was not allowed to do this when I coded\nmy SQL, because expressions in the from clause must return SETOF, not\na single value. Now it seems to work.\n\n", "msg_date": "Sat, 24 Mar 2018 02:27:47 +0100", "msg_from": "Peter <[email protected]>", "msg_from_op": true, "msg_subject": "functions: VOLATILE performs better than STABLE" }, { "msg_contents": "Peter wrote:\n> Given an arbitrary function fn(x) returning numeric.\n> \n> Question: how often is the function executed?\n> [...]\n> C.\n> select v.v,v.v from (select fn('const') as v) as v;\n> \n> Answer:\n> Once if declared VOLATILE.\n> Twice if declared STABLE.\n> \n> Now this IS a surprize. It is clear that the system is not allowed to\n> execute the function twice when declared VOLATILE. It IS ALLOWED to\n> execute it twice when STABLE - but to what point, except prolonging\n> execution time?\n> \n> Over all, VOLATILE performs better than STABLE.\n\nThe reason is that the subquery with the VOLATILE function can be\nflattened; see the EXPLAIN (VERBOSE) output.\n\nThere is not guarantee that less volatility means better performance.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n", "msg_date": "Sun, 25 Mar 2018 07:00:43 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: functions: VOLATILE performs better than STABLE" }, { "msg_contents": "On 25 March 2018 at 18:00, Laurenz Albe <[email protected]> wrote:\n> Peter wrote:\n>> Over all, VOLATILE performs better than STABLE.\n>\n> The reason is that the subquery with the VOLATILE function can be\n> flattened; see the EXPLAIN (VERBOSE) output.\n>\n> There is not guarantee that less volatility means better performance.\n\nAlthough, it would be nice.\n\nTPC-H Q1 does appear to be crafted to allow database with smarter\nexpression evaluation to get a better score.\n\nIt would probably require some sort of recursive expression evaluation\nwhere at each level we check if that expression has already been seen,\nif it has, then replace it with some sort of placeholder, then\nevaluate each placeholder in the required order.\n\nProbably the first part could be done during planning. 
It would mean\ntargetlists would need to carry a bit more weight.\n\nIt would be an interesting project to work on, but not planning to personally.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Mon, 26 Mar 2018 01:24:00 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: functions: VOLATILE performs better than STABLE" }, { "msg_contents": "On Sun, Mar 25, 2018 at 12:00 AM, Laurenz Albe <[email protected]> wrote:\n> Peter wrote:\n>> Given an arbitrary function fn(x) returning numeric.\n>>\n>> Question: how often is the function executed?\n>> [...]\n>> C.\n>> select v.v,v.v from (select fn('const') as v) as v;\n>>\n>> Answer:\n>> Once if declared VOLATILE.\n>> Twice if declared STABLE.\n>>\n>> Now this IS a surprize. It is clear that the system is not allowed to\n>> execute the function twice when declared VOLATILE. It IS ALLOWED to\n>> execute it twice when STABLE - but to what point, except prolonging\n>> execution time?\n>>\n>> Over all, VOLATILE performs better than STABLE.\n>\n> The reason is that the subquery with the VOLATILE function can be\n> flattened; see the EXPLAIN (VERBOSE) output.\n>\n> There is not guarantee that less volatility means better performance.\n\nI think you have it backwards. The STABLE query is flattened into\nsomething like:\n\nselect fn('const'), v fn('const') v;\n\nThe VOLATILE version can't be flattened that way since it's forced to\nexecute as the user sees it (one for the inner query).\n\nYou can probably get the fast plan via:\n\nselect v.v,v.v from (select fn('const') as v offset 0) as v;\n\nThe contents of the function fn() also matter very much here as we\nwould want to know if the function is a candidate for inlining.\n\nmerlin\n\n", "msg_date": "Thu, 5 Apr 2018 09:18:42 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: functions: VOLATILE performs better than STABLE" } ]
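A simple way to make the number of evaluations visible, instead of inferring it from timings, is a throwaway counting function used only while testing (a sketch; fn mirrors the generic name used above):

CREATE SEQUENCE fn_calls;
CREATE FUNCTION fn(x text) RETURNS numeric
  LANGUAGE sql AS $$ SELECT nextval('fn_calls')::numeric $$;

SELECT v.v, v.v FROM (SELECT fn('const') AS v OFFSET 0) AS v;
SELECT last_value FROM fn_calls;   -- how many times fn() actually ran

Re-creating the test function as STABLE or VOLATILE and re-running the three query shapes from the first message shows directly when the subquery is flattened and the expression duplicated.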
[ { "msg_contents": "Hi I am having terrible trouble with a simple partitioned table.\nSelect queries are very slow.\n\nIe\n\nSELECT ts::timestamptz, s1.sensor_id, s1.value\n FROM sensor_values s1\n WHERE s1.sensor_id =\nANY(ARRAY[596304,597992,610978,597998])\n AND s1.ts >= '2000-01-01\n00:01:01'::timestamptz AND\n s1.ts < '2018-03-20\n00:01:01'::timestamptz\n\nTakes over five minutes.\n\n\nPostgres version is PostgreSQL 9.5.9 on x86_64-pc-linux-gnu, compiled\nby gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit\nshared_buffers = 3000MB\nwork_mem = 50MB\nmaintenance_work_mem = 64MB\nwal_writer_delay = 10000ms\n#effective_cache_size = 4GB I guess this is the default\n\nMy amount of memory is 15G\n\n\nThe table gets constant inserts (thousands a minute)\nThe table has something like 700000000 rows.\n\nSo the table is defined as\n\n\n\\d+ sensor_values;\n Table \"public.sensor_values\"\n Column | Type | Modifiers\n | Storage | Stats target | Description\n-----------+--------------------------+--------------------------------------------+---------+--------------+-------------\n ts | timestamp with time zone | not null\n | plain | |\n value | double precision | not null default 'NaN'::real\n | plain | |\n sensor_id | integer | not null\n | plain | |\n status | tridium_status | not null default\n'unknown'::tridium_status | plain | |\nIndexes:\n \"sensor_values_sensor_id_timestamp_index\" UNIQUE, btree (sensor_id, ts)\nForeign-key constraints:\n \"sensor_values_sensor_id_fkey\" FOREIGN KEY (sensor_id) REFERENCES\nsensors(id)\nTriggers:\n a_statistics_trigger BEFORE INSERT OR DELETE ON sensor_values FOR\nEACH ROW EXECUTE PROCEDURE stat_info()\n sensor_values_trigger_timestamp_sensor_insert_sensor_values BEFORE\nINSERT ON sensor_values FOR EACH ROW EXECUTE PROCEDURE\nsensor_values_timestamp_sensor_func_insert_trigger()\nChild tables: sensor_values_2007q1,\n sensor_values_2007q2,\n sensor_values_2007q3,\n sensor_values_2007q4,\n sensor_values_2008q1,\n sensor_values_2008q2,\n sensor_values_2008q3,\n sensor_values_2008q4,\n sensor_values_2009q1,\n sensor_values_2009q2,\n sensor_values_2009q3,\n sensor_values_2009q4,\n sensor_values_2010q1,\n sensor_values_2010q2,\n sensor_values_2010q3,\n sensor_values_2010q4,\n sensor_values_2011q1,\n sensor_values_2011q2,\n sensor_values_2011q3,\n sensor_values_2011q4,\n sensor_values_2012q1,\n sensor_values_2012q2,\n sensor_values_2012q3,\n sensor_values_2012q4,\n sensor_values_2013q1,\n sensor_values_2013q2,\n sensor_values_2013q3,\n sensor_values_2013q4,\n sensor_values_2014q1,\n sensor_values_2014q2,\n sensor_values_2014q3,\n sensor_values_2014q4,\n sensor_values_2015q1,\n sensor_values_2015q2,\n sensor_values_2015q3,\n sensor_values_2015q4,\n sensor_values_2016q1,\n sensor_values_2016q2,\n sensor_values_2016q3,\n sensor_values_2016q4,\n sensor_values_2017q1,\n sensor_values_2017q2,\n sensor_values_2017q3,\n sensor_values_2017q4,\n sensor_values_2018q1,\n sensor_values_2018q2,\n sensor_values_2018q3,\n sensor_values_2018q4,\n sensor_values_2019q1,\n sensor_values_2019q2,\n sensor_values_2019q3,\n sensor_values_2019q4,\n sensor_values_2020q1,\n sensor_values_2020q2,\n sensor_values_2020q3,\n sensor_values_2020q4\n\n\n\nThe child tables are all like\n\n Column | Type | Modifiers\n | Storage | Stats target | Description\n-----------+--------------------------+--------------------------------------------+---------+--------------+-------------\n ts | timestamp with time zone | not null\n | plain | |\n value | double precision | not null default 'NaN'::real\n | 
plain | |\n sensor_id | integer | not null\n | plain | |\n status | tridium_status | not null default\n'unknown'::tridium_status | plain | |\nIndexes:\n \"sensor_values_2018q1_sensor_id_timestamp_index\" UNIQUE, btree\n(sensor_id, ts)\nCheck constraints:\n \"sensor_values_2018q1_timestamp_check\" CHECK (ts >= '2018-01-01\n00:00:00+00'::timestamp with time zone AND ts < '2018-04-01\n01:00:00+01'::timestamp with time zone)\nInherits: sensor_values\n\n\n\n\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT ts::timestamptz, s1.sensor_id, s1.value\n FROM sensor_values s1\n WHERE s1.sensor_id =\nANY(ARRAY[596304,597992,610978,597998])\n AND s1.ts >= '2000-01-01\n00:01:01'::timestamptz AND\n s1.ts < '2018-03-20\n00:01:01'::timestamptz\n[2018-03-27 14:45:39] 260 rows retrieved starting from 1 in 13m 13s\n221ms (execution: 13m 13s 141ms, fetching: 80ms)\n\n\nShows the following output\n\n\nhttps://explain.depesz.com/s/c8HU\n\n\n\nAny idea why this query takes so long ?\n\nThanks\n\n", "msg_date": "Tue, 27 Mar 2018 15:14:30 +0100", "msg_from": "Glenn Pierce <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query on partitioned table." }, { "msg_contents": "On Tue, Mar 27, 2018 at 03:14:30PM +0100, Glenn Pierce wrote:\n> Hi I am having terrible trouble with a simple partitioned table.\n> Select queries are very slow.\n....\n> The child tables are all like\n> Check constraints:\n> \"sensor_values_2018q1_timestamp_check\" CHECK (ts >= '2018-01-01\n> 00:00:00+00'::timestamp with time zone AND ts < '2018-04-01\n> 01:00:00+01'::timestamp with time zone)\n\n> EXPLAIN (ANALYZE, BUFFERS) SELECT ts::timestamptz, s1.sensor_id, s1.value\n> FROM sensor_values s1\n> WHERE s1.sensor_id =\n> ANY(ARRAY[596304,597992,610978,597998])\n> AND s1.ts >= '2000-01-01\n> 00:01:01'::timestamptz AND\n> s1.ts < '2018-03-20\n> 00:01:01'::timestamptz\n\n> Shows the following output\n> https://explain.depesz.com/s/c8HU\n\nIt's scanning all partitions, so apparently constraint_exclusion isn't working.\n\nIs it because the CHECK has ts with different timezones +00 and +01 ??\n\nAlso, it looks funny to use 00:01:01 as the beginning of the day (although I\nthink it's true that an HR department would understand that better..).\n\nJustin\n\n", "msg_date": "Tue, 27 Mar 2018 09:43:15 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query on partitioned table." }, { "msg_contents": "Re-added -performance.\n\nOn Tue, Mar 27, 2018 at 05:13:25PM +0100, Glenn Pierce wrote:\n> Damn as I was playing with the indexes I must have deleted the constraints :(\n> Question if I have a constraint like\n> \n> ALTER TABLE sensor_values_2007q1\n> ADD CONSTRAINT sensor_values_2007q1_sensor_id_timestamp_constraint\n> UNIQUE (sensor_id, ts);\n> \n> will that be used like an index or do I need to add a separate index ?\n\nYes:\n\nhttps://www.postgresql.org/docs/current/static/ddl-constraints.html\n|Adding a unique constraint will automatically create a unique B-tree index on\nthe column or group of columns listed in the constraint\n\nhttps://www.postgresql.org/docs/current/static/indexes-unique.html\n|PostgreSQL automatically creates a unique index when a unique constraint or\n|primary key is defined for a table. The index ... is the mechanism that\n|enforces the constraint.\n\nJustin\n\n", "msg_date": "Tue, 27 Mar 2018 16:00:32 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query on partitioned table." } ]
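Since the root cause was that the partitions' CHECK constraints had been dropped, the fix is to put them back and let constraint exclusion prune again; for one partition that might look like the following (boundaries equivalent to the definition shown earlier, written in a single time zone; NOT VALID skips the initial scan under a heavy lock, but the constraint only participates in planning once validated):

ALTER TABLE sensor_values_2018q1
  ADD CONSTRAINT sensor_values_2018q1_timestamp_check
  CHECK (ts >= '2018-01-01 00:00:00+00' AND ts < '2018-04-01 00:00:00+00') NOT VALID;
ALTER TABLE sensor_values_2018q1
  VALIDATE CONSTRAINT sensor_values_2018q1_timestamp_check;

SHOW constraint_exclusion;   -- should be 'partition' (the default) or 'on'

EXPLAIN SELECT * FROM sensor_values
WHERE sensor_id = 596304
  AND ts >= '2018-02-01' AND ts < '2018-03-01';

With the constraints restored, the plan should append scans over only the matching quarter rather than every child table, as in the slow plan linked above.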
[ { "msg_contents": "Hey all,\nI'm using Postgres 10.3\n6 core VM with 16gb of ram\n\nMy database schema requires a good bit of temporal data stored in a\nfew my tables, and I make use of ranges and exclusion constraints to\nkeep my data consistent.\n\nI have quite a few queries in my DB which are using a very sub-optimal\nindex choice compared to others available. I am just looking for ways\nto tune things to make it less likely to use the backing index for an\nexclusion constraint for queries where better indexes are available.\n\nHere is an example of a query which exhibits this behavior:\nSELECT *\nFROM claim\nINNER JOIN claim_amounts\nON claim.claim_id = claim_amounts.claim_id\nLEFT JOIN deduction_claim\nON deduction_claim.claim_id = claim.claim_id\nAND upper_inf(deduction_claim.active_range)\nWHERE claim.claim_id = ANY ('{uuids_go_here}'::uuid[]);\n\nHere is the plan which is always chosen: https://explain.depesz.com/s/rCjO\n\nI then dropped the exclusion constraint temporarily to test, and this\nwas the plan chosen after: https://explain.depesz.com/s/xSm0\n\nThe table definition is:\nCREATE TABLE deduction_claim\n(\n deduction_id uuid NOT NULL,\n claim_id uuid NOT NULL,\n deduction_amount_allotted numeric NOT NULL,\n active_range tstzrange NOT NULL DEFAULT tstzrange(now(),\nNULL::timestamp with time zone),\n inoperative boolean DEFAULT false,\n deduction_claim_id uuid NOT NULL DEFAULT gen_random_uuid(),\n CONSTRAINT deduction_claim_pkey PRIMARY KEY (deduction_claim_id),\n CONSTRAINT deduction_claim_claim_id_fkey FOREIGN KEY (claim_id)\n REFERENCES claim (claim_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT deduction_claim_deduction_id_fkey FOREIGN KEY (deduction_id)\n REFERENCES deduction (deduction_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT deduction_claim_active_range_excl EXCLUDE\n USING gist (deduction_id WITH =, claim_id WITH =, active_range WITH &&),\n CONSTRAINT deduction_claim_ar_empty_check CHECK (active_range <>\n'empty'::tstzrange)\n);\n\n-- Index: idx_deduction_claim_claim_id\n\n-- DROP INDEX idx_deduction_claim_claim_id;\n\nCREATE INDEX idx_deduction_claim_claim_id\n ON deduction_claim\n USING btree\n (claim_id)\n WHERE upper_inf(active_range);\n\n-- Index: idx_deduction_claim_deduction_id\n\n-- DROP INDEX idx_deduction_claim_deduction_id;\n\nCREATE INDEX idx_deduction_claim_deduction_id\n ON deduction_claim\n USING btree\n (deduction_id)\n WHERE upper_inf(active_range);\n\nIf there is any more info I can provide, please let me know.\n\nThanks in advance for any advice you can give.\n\n", "msg_date": "Fri, 6 Apr 2018 12:30:52 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORM] Dissuade the use of exclusion constraint index" }, { "msg_contents": "Just wondering if anyone has any thoughts on what I can do to alleviate\nthis issue?\n\nI'll kinda at a loss as to what to try to tweak for this.\n\nJust wondering if anyone has any thoughts on what I can do to alleviate this issue?I'll kinda at a loss as to what to try to tweak for this.", "msg_date": "Wed, 11 Apr 2018 05:59:07 +0000", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Dissuade the use of exclusion constraint index" }, { "msg_contents": "Adam,\n\nI think the first thing to do is to make hackers aware of the specifics of\nwhich indexes are being used etc so that the planner could be taught how to\nuse better ones.\n\nSelf contained examples do 
wonders\n\nDave Cramer\n\[email protected]\nwww.postgresintl.com\n\nOn 11 April 2018 at 01:59, Adam Brusselback <[email protected]>\nwrote:\n\n> Just wondering if anyone has any thoughts on what I can do to alleviate\n> this issue?\n>\n> I'll kinda at a loss as to what to try to tweak for this.\n>\n\nAdam,I think the first thing to do is to make hackers aware of the specifics of which indexes are being used etc so that the planner could be taught how to use better ones.Self contained examples do wondersDave [email protected]\nOn 11 April 2018 at 01:59, Adam Brusselback <[email protected]> wrote:Just wondering if anyone has any thoughts on what I can do to alleviate this issue?I'll kinda at a loss as to what to try to tweak for this.", "msg_date": "Wed, 11 Apr 2018 05:41:45 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Dissuade the use of exclusion constraint index" }, { "msg_contents": "> Self contained examples do wonders\nGood point, will work on that and post once I have something usable.\n\n", "msg_date": "Wed, 11 Apr 2018 10:47:02 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Dissuade the use of exclusion constraint index" }, { "msg_contents": "Alright, the first two attempts to reply to this thread I don't believe\nworked, likely due to the attachment size. Hoping this time it does...\n\n> > Self contained examples do wonders\n> Good point, will work on that and post once I have something usable.\n\nFinally got around to making a self contained example... busy few months.\n\nAttached is a pg_dump file which will create a schema called test, and\nload up some real-world data for the specified tables. Extract it,\nthen load.\n> psql -f excl_test.sql\n\nThen you can run the following which should hit the condition outlined\nabove:\n\nANALYZE test.claim;\nANALYZE test.claim_amounts;\nANALYZE test.deduction;\nANALYZE test.deduction_claim;\n\nSELECT *\nFROM test.claim\nINNER JOIN test.claim_amounts\nON claim.claim_id = claim_amounts.claim_id\nLEFT JOIN test.deduction_claim\nON deduction_claim.claim_id = claim.claim_id\nAND upper_inf(deduction_claim.active_range)\nWHERE claim.claim_id = ANY (\n'{79d037ea-4c56-419b-92c4-c2fd6dab9a28\n,d3d5d2ef-fb23-451a-bd06-9a976600492e\n,dff9bbf9-0816-46b0-baac-f3875ddf6624\n,1ac5dc75-3cce-448a-8e37-ba1f5c2f271a\n,b7b6af7e-22d2-412c-b56e-f2a589da63de\n,fa29d4c9-d820-4852-a39b-5e5a822d6fe5\n,9d8ae491-c4a2-44ce-bf1e-0edad8456b5a\n,1796635d-1b87-4315-b6eb-d45eec7dfa98\n,d7e8a26a-a00a-4216-ae53-15fba2045adb\n,391f0bb7-853a-47b4-b4aa-bc9094a2a0b9}'::uuid[]\n);\n\n\nHere is the schema / data for the test case:\nhttps://drive.google.com/open?id=1LcEv56GkH19AgEfhRB85SCPnou43jYur\n\nAlright, the first two attempts to reply to this thread I don't believe worked, likely due to the attachment size. Hoping this time it does...\n> > Self contained examples do wonders> Good point, will work on that and post once I have something usable.Finally got around to making a self contained example... busy few months.Attached is a pg_dump file which will create a schema called test, andload up some real-world data for the specified tables. 
Extract it,then load.> psql -f excl_test.sqlThen you can run the following which should hit the condition outlined above:ANALYZE test.claim;ANALYZE test.claim_amounts;ANALYZE test.deduction;ANALYZE test.deduction_claim;SELECT *FROM test.claimINNER JOIN test.claim_amountsON claim.claim_id = claim_amounts.claim_idLEFT JOIN test.deduction_claim\nON deduction_claim.claim_id = claim.claim_idAND upper_inf(deduction_claim.active_range)WHERE claim.claim_id = ANY ('{79d037ea-4c56-419b-92c4-c2fd6dab9a28,d3d5d2ef-fb23-451a-bd06-9a976600492e,dff9bbf9-0816-46b0-baac-f3875ddf6624,1ac5dc75-3cce-448a-8e37-ba1f5c2f271a,b7b6af7e-22d2-412c-b56e-f2a589da63de,fa29d4c9-d820-4852-a39b-5e5a822d6fe5,9d8ae491-c4a2-44ce-bf1e-0edad8456b5a,1796635d-1b87-4315-b6eb-d45eec7dfa98,d7e8a26a-a00a-4216-ae53-15fba2045adb,391f0bb7-853a-47b4-b4aa-bc9094a2a0b9}'::uuid[]);\nHere is the schema / data for the test case: https://drive.google.com/open?id=1LcEv56GkH19AgEfhRB85SCPnou43jYur", "msg_date": "Wed, 13 Jun 2018 11:59:48 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Dissuade the use of exclusion constraint index" }, { "msg_contents": "Hello all,\nJust wondering if there is anything else I can provide to help figure this\nout.\n\nOne thing I did notice, is there is a discussion about \"invisible indexes\"\ngoing on, which seems that if it was implemented, would be one way to \"fix\"\nmy problem:\nhttps://www.postgresql.org/message-id/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com#[email protected]\n\nThe down side to that approach, is even when that index really is the best\noption for a query, it cannot utilize it.\n\nLet me know if I can provide anything else.\n-Adam\n\nHello all,Just wondering if there is anything else I can provide to help figure this out.One thing I did notice, is there is a discussion about \"invisible indexes\" going on, which seems that if it was implemented, would be one way to \"fix\" my problem: https://www.postgresql.org/message-id/flat/ed8c9ed7-bb5d-aaec-065b-ad4893645deb%402ndQuadrant.com#[email protected] down side to that approach, is even when that index really is the best option for a query, it cannot utilize it.Let me know if I can provide anything else.-Adam", "msg_date": "Tue, 19 Jun 2018 15:50:15 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Dissuade the use of exclusion constraint index" } ]
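Until the planner's handling of this case improves, one low-risk way to quantify how the two indexes are actually used under representative traffic is the standard statistics view (nothing here is schema-specific beyond the names already shown above):

SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'deduction_claim'
ORDER BY indexrelname;

If idx_scan keeps climbing on deduction_claim_active_range_excl while idx_deduction_claim_claim_id stays flat for these lookups, that confirms the plan choice seen in the EXPLAIN output and gives a baseline for judging any workaround.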
[ { "msg_contents": "Folks, I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.\n\n| \n| \n| \n| | |\n\n |\n\n |\n| \n| | \nPostgreSQL: Documentation: 9.6: citext\n\n\n |\n\n |\n\n |\n\n\n\n\n\"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, however, slightly more efficient than using lower to get case-insensitive matching.\"\n\n\nHere is what I have done \ndrop table test;drop table testci;\nCREATE TABLE test (id INTEGER PRIMARY KEY,name character varying(254));CREATE TABLE testci (id INTEGER PRIMARY KEY,name citext\n);\nINSERT INTO test(id, name)SELECT generate_series(1000001,2000000), (md5(random()::text));\nINSERT INTO testci(id, name)SELECT generate_series(1,1000000), (md5(random()::text));\n\nNow, I have done sequential search\nexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 0.00    Total Cost: 23334.00    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.016    Actual Total Time: 680.199    Actual Rows: 1    Actual Loops: 1    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Filter: 999999  Planning Time: 0.045  Triggers:   Execution Time: 680.213\n\nexplain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';\n- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.00    Total Cost: 20834.00    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.017    Actual Total Time: 1184.485    Actual Rows: 1    Actual Loops: 1    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Filter: 999999  Planning Time: 0.029  Triggers:   Execution Time: 1184.496\n\n\nYou can see sequential searches with lower working twice as fast as citext.\nNow I added index on citext and equivalent functional index (lower) on text.\n\nCREATE INDEX textlowerindex ON test (lower(name));\ncreate index textindex on test(name);\n\n\nIndex creation took longer with citext v/s creating lower functional index.\n\nNow here comes execution with indexes\nexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n\n - Plan:     Node Type: \"Bitmap Heap Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 187.18    Total Cost: 7809.06    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.020    Actual Total Time: 0.020    Actual Rows: 1    Actual Loops: 1    Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Index Recheck: 0    Exact Heap Blocks: 1    Lossy Heap Blocks: 0    Plans:       - Node Type: \"Bitmap Index Scan\"        Parent Relationship: \"Outer\"        Parallel Aware: false        Index Name: \"textlowerindex\"        Startup Cost: 0.00        Total Cost: 185.93        Plan Rows: 5000        Plan Width: 0        Actual Startup Time: 0.016        Actual Total Time: 0.016        Actual Rows: 1        Actual Loops: 1        Index Cond: \"(lower((name)::text) = 
'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"  Planning Time: 0.051  Triggers:   Execution Time: 0.035\n\n\n\n explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n\n - Plan:     Node Type: \"Index Scan\"    Parallel Aware: false    Scan Direction: \"Forward\"    Index Name: \"citextindex\"    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.42    Total Cost: 8.44    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.049    Actual Total Time: 0.050    Actual Rows: 1    Actual Loops: 1    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Index Recheck: 0  Planning Time: 0.051  Triggers:   Execution Time: 0.064\n\nDeepak\nFolks, I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.PostgreSQL: Documentation: 9.6: citext\"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, however, slightly more efficient than using lower to get case-insensitive matching.\"Here is what I have done drop table test;drop table testci;CREATE TABLE test (id INTEGER PRIMARY KEY,name character varying(254));CREATE TABLE testci (id INTEGER PRIMARY KEY,name citext);INSERT INTO test(id, name)SELECT generate_series(1000001,2000000), (md5(random()::text));INSERT INTO testci(id, name)SELECT generate_series(1,1000000), (md5(random()::text));Now, I have done sequential searchexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 0.00    Total Cost: 23334.00    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.016    Actual Total Time: 680.199    Actual Rows: 1    Actual Loops: 1    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Filter: 999999  Planning Time: 0.045  Triggers:   Execution Time: 680.213explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.00    Total Cost: 20834.00    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.017    Actual Total Time: 1184.485    Actual Rows: 1    Actual Loops: 1    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Filter: 999999  Planning Time: 0.029  Triggers:   Execution Time: 1184.496You can see sequential searches with lower working twice as fast as citext.Now I added index on citext and equivalent functional index (lower) on text.CREATE INDEX textlowerindex ON test (lower(name));create index textindex on test(name);Index creation took longer with citext v/s creating lower functional index.Now here comes execution with indexesexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de'); - Plan:     Node Type: \"Bitmap Heap Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 187.18    Total Cost: 7809.06    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.020    Actual Total Time: 0.020    Actual Rows: 1    Actual Loops: 1    Recheck Cond: 
\"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Index Recheck: 0    Exact Heap Blocks: 1    Lossy Heap Blocks: 0    Plans:       - Node Type: \"Bitmap Index Scan\"        Parent Relationship: \"Outer\"        Parallel Aware: false        Index Name: \"textlowerindex\"        Startup Cost: 0.00        Total Cost: 185.93        Plan Rows: 5000        Plan Width: 0        Actual Startup Time: 0.016        Actual Total Time: 0.016        Actual Rows: 1        Actual Loops: 1        Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"  Planning Time: 0.051  Triggers:   Execution Time: 0.035 explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de'); - Plan:     Node Type: \"Index Scan\"    Parallel Aware: false    Scan Direction: \"Forward\"    Index Name: \"citextindex\"    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.42    Total Cost: 8.44    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.049    Actual Total Time: 0.050    Actual Rows: 1    Actual Loops: 1    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Index Recheck: 0  Planning Time: 0.051  Triggers:   Execution Time: 0.064Deepak", "msg_date": "Fri, 6 Apr 2018 16:51:14 +0000 (UTC)", "msg_from": "Deepak Somaiya <[email protected]>", "msg_from_op": true, "msg_subject": "citext performance" }, { "msg_contents": "Hi,\n\nI have also faced the same problem with citext extension. It does not\nuse index when thereby making it almost unusable. The problem has to\ndo with how collation is handled from what I have read in old threads\nin postgres mailing list (please refer\nhttps://dba.stackexchange.com/questions/105244/index-on-column-with-data-type-citext-not-used/105250#105250\n).\n\nRegards,\nNanda\n\nOn Fri, Apr 6, 2018 at 10:21 PM, Deepak Somaiya <[email protected]> wrote:\n>\n> Folks,\n> I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.\n>\n> PostgreSQL: Documentation: 9.6: citext\n>\n>\n>\n>\n> \"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. 
It is, however, slightly more efficient than using lower to get case-insensitive matching.\"\n>\n>\n> Here is what I have done\n>\n> drop table test;\n> drop table testci;\n>\n> CREATE TABLE test (\n> id INTEGER PRIMARY KEY,\n> name character varying(254)\n> );\n> CREATE TABLE testci (\n> id INTEGER PRIMARY KEY,\n> name citext\n>\n> );\n>\n> INSERT INTO test(id, name)\n> SELECT generate_series(1000001,2000000), (md5(random()::text));\n>\n> INSERT INTO testci(id, name)\n> SELECT generate_series(1,1000000), (md5(random()::text));\n>\n>\n> Now, I have done sequential search\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n> - Plan:\n> Node Type: \"Seq Scan\"\n> Parallel Aware: false\n> Relation Name: \"test\"\n> Alias: \"test\"\n> Startup Cost: 0.00\n> Total Cost: 23334.00\n> Plan Rows: 5000\n> Plan Width: 37\n> Actual Startup Time: 0.016\n> Actual Total Time: 680.199\n> Actual Rows: 1\n> Actual Loops: 1\n> Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n> Rows Removed by Filter: 999999\n> Planning Time: 0.045\n> Triggers:\n> Execution Time: 680.213\n>\n>\n> explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';\n> - Plan:\n> Node Type: \"Seq Scan\"\n> Parallel Aware: false\n> Relation Name: \"testci\"\n> Alias: \"testci\"\n> Startup Cost: 0.00\n> Total Cost: 20834.00\n> Plan Rows: 1\n> Plan Width: 37\n> Actual Startup Time: 0.017\n> Actual Total Time: 1184.485\n> Actual Rows: 1\n> Actual Loops: 1\n> Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"\n> Rows Removed by Filter: 999999\n> Planning Time: 0.029\n> Triggers:\n> Execution Time: 1184.496\n>\n>\n>\n> You can see sequential searches with lower working twice as fast as citext.\n>\n> Now I added index on citext and equivalent functional index (lower) on text.\n>\n>\n> CREATE INDEX textlowerindex ON test (lower(name));\n> create index textindex on test(name);\n>\n>\n> Index creation took longer with citext v/s creating lower functional index.\n>\n>\n> Now here comes execution with indexes\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n>\n> - Plan:\n> Node Type: \"Bitmap Heap Scan\"\n> Parallel Aware: false\n> Relation Name: \"test\"\n> Alias: \"test\"\n> Startup Cost: 187.18\n> Total Cost: 7809.06\n> Plan Rows: 5000\n> Plan Width: 37\n> Actual Startup Time: 0.020\n> Actual Total Time: 0.020\n> Actual Rows: 1\n> Actual Loops: 1\n> Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n> Rows Removed by Index Recheck: 0\n> Exact Heap Blocks: 1\n> Lossy Heap Blocks: 0\n> Plans:\n> - Node Type: \"Bitmap Index Scan\"\n> Parent Relationship: \"Outer\"\n> Parallel Aware: false\n> Index Name: \"textlowerindex\"\n> Startup Cost: 0.00\n> Total Cost: 185.93\n> Plan Rows: 5000\n> Plan Width: 0\n> Actual Startup Time: 0.016\n> Actual Total Time: 0.016\n> Actual Rows: 1\n> Actual Loops: 1\n> Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n> Planning Time: 0.051\n> Triggers:\n> Execution Time: 0.035\n>\n>\n>\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n>\n> - Plan:\n> Node Type: \"Index Scan\"\n> Parallel Aware: false\n> Scan Direction: \"Forward\"\n> Index Name: \"citextindex\"\n> Relation Name: \"testci\"\n> Alias: \"testci\"\n> Startup Cost: 0.42\n> Total Cost: 8.44\n> Plan Rows: 
1\n> Plan Width: 37\n> Actual Startup Time: 0.049\n> Actual Total Time: 0.050\n> Actual Rows: 1\n> Actual Loops: 1\n> Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"\n> Rows Removed by Index Recheck: 0\n> Planning Time: 0.051\n> Triggers:\n> Execution Time: 0.064\n>\n>\n> Deepak\n\n", "msg_date": "Sun, 8 Apr 2018 15:42:46 +0530", "msg_from": "Nandakumar M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: citext performance" }, { "msg_contents": "It is using index here , it is just that performance i.e query that use functional index (one with lower) is performing better then index created on citext column.\nDeepak\n On Sunday, April 8, 2018, 3:13:26 AM PDT, Nandakumar M <[email protected]> wrote: \n \n Hi,\n\nI have also faced the same problem with citext extension. It does not\nuse index when thereby making it almost unusable. The problem has to\ndo with how collation is handled from what I have read in old threads\nin postgres mailing list (please refer\nhttps://dba.stackexchange.com/questions/105244/index-on-column-with-data-type-citext-not-used/105250#105250\n).\n\nRegards,\nNanda\n\nOn Fri, Apr 6, 2018 at 10:21 PM, Deepak Somaiya <[email protected]> wrote:\n>\n> Folks,\n>  I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.\n>\n> PostgreSQL: Documentation: 9.6: citext\n>\n>\n>\n>\n> \"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, however, slightly more efficient than using lower to get case-insensitive matching.\"\n>\n>\n> Here is what I have done\n>\n> drop table test;\n> drop table testci;\n>\n> CREATE TABLE test (\n> id INTEGER PRIMARY KEY,\n> name character varying(254)\n> );\n> CREATE TABLE testci (\n> id INTEGER PRIMARY KEY,\n> name citext\n>\n> );\n>\n> INSERT INTO test(id, name)\n> SELECT generate_series(1000001,2000000), (md5(random()::text));\n>\n> INSERT INTO testci(id, name)\n> SELECT generate_series(1,1000000), (md5(random()::text));\n>\n>\n> Now, I have done sequential search\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n> - Plan:\n>    Node Type: \"Seq Scan\"\n>    Parallel Aware: false\n>    Relation Name: \"test\"\n>    Alias: \"test\"\n>    Startup Cost: 0.00\n>    Total Cost: 23334.00\n>    Plan Rows: 5000\n>    Plan Width: 37\n>    Actual Startup Time: 0.016\n>    Actual Total Time: 680.199\n>    Actual Rows: 1\n>    Actual Loops: 1\n>    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n>    Rows Removed by Filter: 999999\n>  Planning Time: 0.045\n>  Triggers:\n>  Execution Time: 680.213\n>\n>\n> explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';\n> - Plan:\n>    Node Type: \"Seq Scan\"\n>    Parallel Aware: false\n>    Relation Name: \"testci\"\n>    Alias: \"testci\"\n>    Startup Cost: 0.00\n>    Total Cost: 20834.00\n>    Plan Rows: 1\n>    Plan Width: 37\n>    Actual Startup Time: 0.017\n>    Actual Total Time: 1184.485\n>    Actual Rows: 1\n>    Actual Loops: 1\n>    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"\n>    Rows Removed by Filter: 999999\n>  Planning Time: 0.029\n>  Triggers:\n>  Execution Time: 1184.496\n>\n>\n>\n> You can see sequential searches with lower working twice 
as fast as citext.\n>\n> Now I added index on citext and equivalent functional index (lower) on text.\n>\n>\n> CREATE INDEX textlowerindex ON test (lower(name));\n> create index textindex on test(name);\n>\n>\n> Index creation took longer with citext v/s creating lower functional index.\n>\n>\n> Now here comes execution with indexes\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n>\n> - Plan:\n>    Node Type: \"Bitmap Heap Scan\"\n>    Parallel Aware: false\n>    Relation Name: \"test\"\n>    Alias: \"test\"\n>    Startup Cost: 187.18\n>    Total Cost: 7809.06\n>    Plan Rows: 5000\n>    Plan Width: 37\n>    Actual Startup Time: 0.020\n>    Actual Total Time: 0.020\n>    Actual Rows: 1\n>    Actual Loops: 1\n>    Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n>    Rows Removed by Index Recheck: 0\n>    Exact Heap Blocks: 1\n>    Lossy Heap Blocks: 0\n>    Plans:\n>      - Node Type: \"Bitmap Index Scan\"\n>        Parent Relationship: \"Outer\"\n>        Parallel Aware: false\n>        Index Name: \"textlowerindex\"\n>        Startup Cost: 0.00\n>        Total Cost: 185.93\n>        Plan Rows: 5000\n>        Plan Width: 0\n>        Actual Startup Time: 0.016\n>        Actual Total Time: 0.016\n>        Actual Rows: 1\n>        Actual Loops: 1\n>        Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"\n>  Planning Time: 0.051\n>  Triggers:\n>  Execution Time: 0.035\n>\n>\n>\n>\n> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n>\n> - Plan:\n>    Node Type: \"Index Scan\"\n>    Parallel Aware: false\n>    Scan Direction: \"Forward\"\n>    Index Name: \"citextindex\"\n>    Relation Name: \"testci\"\n>    Alias: \"testci\"\n>    Startup Cost: 0.42\n>    Total Cost: 8.44\n>    Plan Rows: 1\n>    Plan Width: 37\n>    Actual Startup Time: 0.049\n>    Actual Total Time: 0.050\n>    Actual Rows: 1\n>    Actual Loops: 1\n>    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"\n>    Rows Removed by Index Recheck: 0\n>  Planning Time: 0.051\n>  Triggers:\n>  Execution Time: 0.064\n>\n>\n> Deepak\n \n\nIt is using index here , it is just that performance i.e query that use functional index (one with lower) is performing better then index created on citext column.Deepak\n\n\n\n On Sunday, April 8, 2018, 3:13:26 AM PDT, Nandakumar M <[email protected]> wrote:\n \n\n\nHi,I have also faced the same problem with citext extension. It does notuse index when thereby making it almost unusable. The problem has todo with how collation is handled from what I have read in old threadsin postgres mailing list (please referhttps://dba.stackexchange.com/questions/105244/index-on-column-with-data-type-citext-not-used/105250#105250).Regards,NandaOn Fri, Apr 6, 2018 at 10:21 PM, Deepak Somaiya <[email protected]> wrote:>> Folks,>  I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.>> PostgreSQL: Documentation: 9.6: citext>>>>> \"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. 
It is, however, slightly more efficient than using lower to get case-insensitive matching.\">>> Here is what I have done>> drop table test;> drop table testci;>> CREATE TABLE test (> id INTEGER PRIMARY KEY,> name character varying(254)> );> CREATE TABLE testci (> id INTEGER PRIMARY KEY,> name citext>> );>> INSERT INTO test(id, name)> SELECT generate_series(1000001,2000000), (md5(random()::text));>> INSERT INTO testci(id, name)> SELECT generate_series(1,1000000), (md5(random()::text));>>> Now, I have done sequential search>> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');> - Plan:>    Node Type: \"Seq Scan\">    Parallel Aware: false>    Relation Name: \"test\">    Alias: \"test\">    Startup Cost: 0.00>    Total Cost: 23334.00>    Plan Rows: 5000>    Plan Width: 37>    Actual Startup Time: 0.016>    Actual Total Time: 680.199>    Actual Rows: 1>    Actual Loops: 1>    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\">    Rows Removed by Filter: 999999>  Planning Time: 0.045>  Triggers:>  Execution Time: 680.213>>> explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';> - Plan:>    Node Type: \"Seq Scan\">    Parallel Aware: false>    Relation Name: \"testci\">    Alias: \"testci\">    Startup Cost: 0.00>    Total Cost: 20834.00>    Plan Rows: 1>    Plan Width: 37>    Actual Startup Time: 0.017>    Actual Total Time: 1184.485>    Actual Rows: 1>    Actual Loops: 1>    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\">    Rows Removed by Filter: 999999>  Planning Time: 0.029>  Triggers:>  Execution Time: 1184.496>>>> You can see sequential searches with lower working twice as fast as citext.>> Now I added index on citext and equivalent functional index (lower) on text.>>> CREATE INDEX textlowerindex ON test (lower(name));> create index textindex on test(name);>>> Index creation took longer with citext v/s creating lower functional index.>>> Now here comes execution with indexes>> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');>> - Plan:>    Node Type: \"Bitmap Heap Scan\">    Parallel Aware: false>    Relation Name: \"test\">    Alias: \"test\">    Startup Cost: 187.18>    Total Cost: 7809.06>    Plan Rows: 5000>    Plan Width: 37>    Actual Startup Time: 0.020>    Actual Total Time: 0.020>    Actual Rows: 1>    Actual Loops: 1>    Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\">    Rows Removed by Index Recheck: 0>    Exact Heap Blocks: 1>    Lossy Heap Blocks: 0>    Plans:>      - Node Type: \"Bitmap Index Scan\">        Parent Relationship: \"Outer\">        Parallel Aware: false>        Index Name: \"textlowerindex\">        Startup Cost: 0.00>        Total Cost: 185.93>        Plan Rows: 5000>        Plan Width: 0>        Actual Startup Time: 0.016>        Actual Total Time: 0.016>        Actual Rows: 1>        Actual Loops: 1>        Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\">  Planning Time: 0.051>  Triggers:>  Execution Time: 0.035>>>>> explain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');>> - Plan:>    Node Type: \"Index Scan\">    Parallel Aware: false>    Scan Direction: \"Forward\">    Index Name: \"citextindex\">    Relation Name: \"testci\">    Alias: \"testci\">    Startup Cost: 0.42>    Total Cost: 8.44>    Plan Rows: 1>    Plan 
Width: 37>    Actual Startup Time: 0.049>    Actual Total Time: 0.050>    Actual Rows: 1>    Actual Loops: 1>    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\">    Rows Removed by Index Recheck: 0>  Planning Time: 0.051>  Triggers:>  Execution Time: 0.064>>> Deepak", "msg_date": "Sun, 8 Apr 2018 20:14:29 +0000 (UTC)", "msg_from": "Deepak Somaiya <[email protected]>", "msg_from_op": true, "msg_subject": "Re: citext performance" } ]
[ { "msg_contents": "One of our four \"big iron\" (spinning disks) servers went belly up today.\n(Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to\na cloud service at the end of the year, so bad timing on this. We didn't\nwant to buy any more hardware, but now it looks like we have to.\n\nI followed the discussions about SSD drives when they were first becoming\nmainstream; at that time, the Intel devices were king. Can anyone recommend\nwhat's a good SSD configuration these days? I don't think we want to buy a\nnew server with spinning disks.\n\nWe're replacing:\n 8 core (Intel)\n 48GB memory\n 12-drive 7200 RPM 500GB\n RAID1 (2 disks, OS and WAL log)\n RAID10 (8 disks, postgres data dir)\n 2 spares\n Ubuntu 16.04\n Postgres 9.6\n\nThe current system peaks at about 7000 TPS from pgbench.\n\nOur system is a mix of non-transactional searching (customers) and\ntransactional data loading (us).\n\nThanks!\nCraig\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOne of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Mon, 9 Apr 2018 19:36:27 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Latest advice on SSD?" }, { "msg_contents": "På tirsdag 10. april 2018 kl. 04:36:27, skrev Craig James <[email protected]\n <mailto:[email protected]>>:\nOne of our four \"big iron\" (spinning disks) servers went belly up today. \n(Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a \ncloud service at the end of the year, so bad timing on this. We didn't want to \nbuy any more hardware, but now it looks like we have to. \nI followed the discussions about SSD drives when they were first becoming \nmainstream; at that time, the Intel devices were king. Can anyone recommend \nwhat's a good SSD configuration these days? I don't think we want to buy a new \nserver with spinning disks.\n \nWe're replacing:\n  8 core (Intel)\n  48GB memory\n   12-drive 7200 RPM 500GB\n     RAID1 (2 disks, OS and WAL log)\n     RAID10 (8 disks, postgres data dir)\n     2 spares\n  Ubuntu 16.04\n  Postgres 9.6\n \nThe current system peaks at about 7000 TPS from pgbench.\n\n \nWith what arguments (also initialization)?\n \n--\nAndreas Joseph Krogh", "msg_date": "Tue, 10 Apr 2018 09:21:53 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Latest advice on SSD?" 
}, { "msg_contents": "On Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> På tirsdag 10. april 2018 kl. 04:36:27, skrev Craig James <\n> [email protected]>:\n>\n> One of our four \"big iron\" (spinning disks) servers went belly up today.\n> (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to\n> a cloud service at the end of the year, so bad timing on this. We didn't\n> want to buy any more hardware, but now it looks like we have to.\n>\n> I followed the discussions about SSD drives when they were first becoming\n> mainstream; at that time, the Intel devices were king. Can anyone recommend\n> what's a good SSD configuration these days? I don't think we want to buy a\n> new server with spinning disks.\n>\n> We're replacing:\n> 8 core (Intel)\n> 48GB memory\n> 12-drive 7200 RPM 500GB\n> RAID1 (2 disks, OS and WAL log)\n> RAID10 (8 disks, postgres data dir)\n> 2 spares\n> Ubuntu 16.04\n> Postgres 9.6\n>\n> The current system peaks at about 7000 TPS from pgbench.\n>\n>\n> With what arguments (also initialization)?\n>\n\n\npgbench -i -s 100 -U test\npgbench -U test -c ... -t ...\n\n-c -t TPS\n5 20000 5202\n10 10000 7916\n20 5000 7924\n30 3333 7270\n40 2500 5020\n50 2000 6417\n\n\n>\n> --\n> Andreas Joseph Krogh\n>\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh <[email protected]> wrote:På tirsdag 10. april 2018 kl. 04:36:27, skrev Craig James <[email protected]>:\n\nOne of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.\n \nI followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.\n \nWe're replacing:\n  8 core (Intel)\n\n  48GB memory\n  12-drive 7200 RPM 500GB\n     RAID1 (2 disks, OS and WAL log)\n     RAID10 (8 disks, postgres data dir)\n     2 spares\n  Ubuntu 16.04\n  Postgres 9.6\n \nThe current system peaks at about 7000 TPS from pgbench.\n\n\n \nWith what arguments (also initialization)?pgbench -i -s 100 -U testpgbench -U test -c ... -t ...-c  -t     TPS5   20000  520210  10000  791620  5000   792430  3333   727040  2500   502050  2000   6417 \n \n\n--\nAndreas Joseph Krogh\n\n -- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 10:41:59 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "You don't mention the size of your database. Does it fit in memory? If so\nyour disks aren't going to matter a whole lot outside of potentially being\ni/o bound on the writes. Otherwise getting your data into SSDs absolutely\ncan have a few multiples of performance impact. The NVME M.2 drives can\nreally pump out the data. 
Maybe push your WAL onto those (as few\nmotherboards have more than two connectors) and use regular SSDs for your\ndata if you have high write rates.\n\nMeanwhile, if you're looking for strong cloud hosting for Postgres but the\nspeed of physical hardware, feel free to contact me as my company does this\nfor some companies who found i/o limits on regular cloud providers to be\nway too slow for their needs.\n\ngood luck (and pardon the crass commercial comments!),\n\n -- Ben Scherrey\n\nOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:\n\n> One of our four \"big iron\" (spinning disks) servers went belly up today.\n> (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to\n> a cloud service at the end of the year, so bad timing on this. We didn't\n> want to buy any more hardware, but now it looks like we have to.\n>\n> I followed the discussions about SSD drives when they were first becoming\n> mainstream; at that time, the Intel devices were king. Can anyone recommend\n> what's a good SSD configuration these days? I don't think we want to buy a\n> new server with spinning disks.\n>\n> We're replacing:\n> 8 core (Intel)\n> 48GB memory\n> 12-drive 7200 RPM 500GB\n> RAID1 (2 disks, OS and WAL log)\n> RAID10 (8 disks, postgres data dir)\n> 2 spares\n> Ubuntu 16.04\n> Postgres 9.6\n>\n> The current system peaks at about 7000 TPS from pgbench.\n>\n> Our system is a mix of non-transactional searching (customers) and\n> transactional data loading (us).\n>\n> Thanks!\n> Craig\n>\n> --\n> ---------------------------------\n> Craig A. James\n> Chief Technology Officer\n> eMolecules, Inc.\n> ---------------------------------\n>\n\nYou don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. good luck (and pardon the crass commercial comments!),  -- Ben ScherreyOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. 
JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Wed, 11 Apr 2018 00:54:11 +0700", "msg_from": "Benjamin Scherrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "RDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. \n\nSSDs are generally slower than spinning at sequential IO and way faster at random.\n\nExpect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. virtualized shared io.)\n\nA REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN\n\n/Aaron \n\n\n> On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]> wrote:\n> \n> You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.\n> \n> Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. \n> \n> good luck (and pardon the crass commercial comments!),\n> \n> -- Ben Scherrey\n> \n>> On Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:\n>> One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.\n>> \n>> I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.\n>> \n>> We're replacing:\n>> 8 core (Intel)\n>> 48GB memory\n>> 12-drive 7200 RPM 500GB\n>> RAID1 (2 disks, OS and WAL log)\n>> RAID10 (8 disks, postgres data dir)\n>> 2 spares\n>> Ubuntu 16.04\n>> Postgres 9.6\n>> \n>> The current system peaks at about 7000 TPS from pgbench.\n>> \n>> Our system is a mix of non-transactional searching (customers) and transactional data loading (us).\n>> \n>> Thanks!\n>> Craig\n>> \n>> -- \n>> ---------------------------------\n>> Craig A. James\n>> Chief Technology Officer\n>> eMolecules, Inc.\n>> ---------------------------------\n> \n\nRDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. SSDs are generally slower than spinning at sequential IO and way faster at random.Expect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. 
virtualized shared io.)A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN/Aaron On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]> wrote:You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. good luck (and pardon the crass commercial comments!),  -- Ben ScherreyOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 14:00:14 -0500", "msg_from": "Aaron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "> SSDs are generally slower than spinning at sequential IO and way faster\nat random.\n\nUnreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and no\nHDD now can achieve that.\nEven SATA-3 SSD's could be faster than that for years now (550MB/s are\nquite typical), and NVME ones could be easily faster than 1GB/s and up to\n3GB/s+.\n\nI'm curious to know where are you drawing these conclusions from?\n\n1. https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/\n\n\nDmitry Shalashov, relap.io & surfingbird.ru\n\n2018-04-10 22:00 GMT+03:00 Aaron <[email protected]>:\n\n> RDBMS such as pg are beasts that turn random IO requests, traditionally\n> slow in spinning drives, into sequential. WAL is a good example of this.\n>\n> SSDs are generally slower than spinning at sequential IO and way faster at\n> random.\n>\n> Expect therefore for SSD to help if you are random IO bound. (Some cloud\n> vendors offer SSD as a way to get dedicated local io and bandwidth - so\n> sometimes it helps stablize performance vs. 
virtualized shared io.)\n>\n> A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED\n> MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN\n>\n> /Aaron\n>\n>\n> On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]>\n> wrote:\n>\n> You don't mention the size of your database. Does it fit in memory? If so\n> your disks aren't going to matter a whole lot outside of potentially being\n> i/o bound on the writes. Otherwise getting your data into SSDs absolutely\n> can have a few multiples of performance impact. The NVME M.2 drives can\n> really pump out the data. Maybe push your WAL onto those (as few\n> motherboards have more than two connectors) and use regular SSDs for your\n> data if you have high write rates.\n>\n> Meanwhile, if you're looking for strong cloud hosting for Postgres but the\n> speed of physical hardware, feel free to contact me as my company does this\n> for some companies who found i/o limits on regular cloud providers to be\n> way too slow for their needs.\n>\n> good luck (and pardon the crass commercial comments!),\n>\n> -- Ben Scherrey\n>\n> On Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]>\n> wrote:\n>\n>> One of our four \"big iron\" (spinning disks) servers went belly up today.\n>> (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to\n>> a cloud service at the end of the year, so bad timing on this. We didn't\n>> want to buy any more hardware, but now it looks like we have to.\n>>\n>> I followed the discussions about SSD drives when they were first becoming\n>> mainstream; at that time, the Intel devices were king. Can anyone recommend\n>> what's a good SSD configuration these days? I don't think we want to buy a\n>> new server with spinning disks.\n>>\n>> We're replacing:\n>> 8 core (Intel)\n>> 48GB memory\n>> 12-drive 7200 RPM 500GB\n>> RAID1 (2 disks, OS and WAL log)\n>> RAID10 (8 disks, postgres data dir)\n>> 2 spares\n>> Ubuntu 16.04\n>> Postgres 9.6\n>>\n>> The current system peaks at about 7000 TPS from pgbench.\n>>\n>> Our system is a mix of non-transactional searching (customers) and\n>> transactional data loading (us).\n>>\n>> Thanks!\n>> Craig\n>>\n>> --\n>> ---------------------------------\n>> Craig A. James\n>> Chief Technology Officer\n>> eMolecules, Inc.\n>> ---------------------------------\n>>\n>\n>\n\n> SSDs are generally slower than spinning at sequential IO and way faster at random. Unreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and no HDD now can achieve that.Even SATA-3 SSD's could be faster than that for years now (550MB/s are quite typical), and NVME ones could be easily faster than 1GB/s and up to 3GB/s+.I'm curious to know where are you drawing these conclusions from?1. https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/Dmitry Shalashov, relap.io & surfingbird.ru\n2018-04-10 22:00 GMT+03:00 Aaron <[email protected]>:RDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. SSDs are generally slower than spinning at sequential IO and way faster at random.Expect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. 
virtualized shared io.)A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN/Aaron On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]> wrote:You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. good luck (and pardon the crass commercial comments!),  -- Ben ScherreyOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 22:11:54 +0300", "msg_from": "Dmitry Shalashov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "> On Apr 10, 2018, at 3:11 PM, Dmitry Shalashov <[email protected]> wrote:\n> \n> > SSDs are generally slower than spinning at sequential IO and way faster at random. \n> \n> Unreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and no HDD now can achieve that.\n> Even SATA-3 SSD's could be faster than that for years now (550MB/s are quite typical), and NVME ones could be easily faster than 1GB/s and up to 3GB/s+.\n> \n> I'm curious to know where are you drawing these conclusions from?\n\nYeah, that sequential info sounds weird.\n\nI’m only chiming in because I just setup one of those SoHo NAS boxes (Qnap) and it had both SSDs and HDDs installed. This was to be used for video editing, so it’s almost all sequential reads/writes. On 10Gb/s ethernet sequential reads off the cached content on the SSDs was somewhere around 800MB/s. These were non-enterprise SSDs.\n\nCharles\n\n> \n> 1. 
https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/ <https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/>\n> \n> \n> Dmitry Shalashov, relap.io <http://relap.io/> & surfingbird.ru <http://surfingbird.ru/>\n> 2018-04-10 22:00 GMT+03:00 Aaron <[email protected] <mailto:[email protected]>>:\n> RDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. \n> \n> SSDs are generally slower than spinning at sequential IO and way faster at random.\n> \n> Expect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. virtualized shared io.)\n> \n> A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN\n> \n> /Aaron \n> \n> \n> On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected] <mailto:[email protected]>> wrote:\n> \n>> You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.\n>> \n>> Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. \n>> \n>> good luck (and pardon the crass commercial comments!),\n>> \n>> -- Ben Scherrey\n>> \n>> On Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected] <mailto:[email protected]>> wrote:\n>> One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.\n>> \n>> I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.\n>> \n>> We're replacing:\n>> 8 core (Intel)\n>> 48GB memory\n>> 12-drive 7200 RPM 500GB\n>> RAID1 (2 disks, OS and WAL log)\n>> RAID10 (8 disks, postgres data dir)\n>> 2 spares\n>> Ubuntu 16.04\n>> Postgres 9.6\n>> \n>> The current system peaks at about 7000 TPS from pgbench.\n>> \n>> Our system is a mix of non-transactional searching (customers) and transactional data loading (us).\n>> \n>> Thanks!\n>> Craig\n>> \n>> -- \n>> ---------------------------------\n>> Craig A. James\n>> Chief Technology Officer\n>> eMolecules, Inc.\n>> ---------------------------------\n>> \n> \n\n\nOn Apr 10, 2018, at 3:11 PM, Dmitry Shalashov <[email protected]> wrote:> SSDs are generally slower than spinning at sequential IO and way faster at random. 
Unreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and no HDD now can achieve that.Even SATA-3 SSD's could be faster than that for years now (550MB/s are quite typical), and NVME ones could be easily faster than 1GB/s and up to 3GB/s+.I'm curious to know where are you drawing these conclusions from?Yeah, that sequential info sounds weird.I’m only chiming in because I just setup one of those SoHo NAS boxes (Qnap) and it had both SSDs and HDDs installed.  This was to be used for video editing, so it’s almost all sequential reads/writes.  On 10Gb/s ethernet sequential reads off the cached content on the SSDs was somewhere around 800MB/s.  These were non-enterprise SSDs.Charles1. https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/Dmitry Shalashov, relap.io & surfingbird.ru\n2018-04-10 22:00 GMT+03:00 Aaron <[email protected]>:RDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. SSDs are generally slower than spinning at sequential IO and way faster at random.Expect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. virtualized shared io.)A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN/Aaron On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]> wrote:You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. good luck (and pardon the crass commercial comments!),  -- Ben ScherreyOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. 
JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 15:58:06 -0400", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "Well, I can give a measurement on my home PC, a Linux box running Ubuntu\n17.10 with a Samsung 960 EVO 512GB NVME disk containing Postgres 10. Using\nyour pgbench init I got for example:\n\npgbench -c 10 -t 10000 test\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\nlatency average = 0.679 ms\ntps = 14730.402329 (including connections establishing)\ntps = 14733.000950 (excluding connections establishing)\n\nI will try to run a test on our production system which has a pair of Intel\nDC P4600 2TB in RAID0 tomorrow.\n\nOn Tue, Apr 10, 2018 at 9:58 PM Charles Sprickman <[email protected]> wrote:\n\n> On Apr 10, 2018, at 3:11 PM, Dmitry Shalashov <[email protected]> wrote:\n>\n> > SSDs are generally slower than spinning at sequential IO and way faster\n> at random.\n>\n> Unreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and\n> no HDD now can achieve that.\n> Even SATA-3 SSD's could be faster than that for years now (550MB/s are\n> quite typical), and NVME ones could be easily faster than 1GB/s and up to\n> 3GB/s+.\n>\n> I'm curious to know where are you drawing these conclusions from?\n>\n>\n> Yeah, that sequential info sounds weird.\n>\n> I’m only chiming in because I just setup one of those SoHo NAS boxes\n> (Qnap) and it had both SSDs and HDDs installed. This was to be used for\n> video editing, so it’s almost all sequential reads/writes. On 10Gb/s\n> ethernet sequential reads off the cached content on the SSDs was somewhere\n> around 800MB/s. These were non-enterprise SSDs.\n>\n> Charles\n>\n>\n> 1. https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/\n>\n>\n> Dmitry Shalashov, relap.io & surfingbird.ru\n>\n> 2018-04-10 22:00 GMT+03:00 Aaron <[email protected]>:\n>\n>> RDBMS such as pg are beasts that turn random IO requests, traditionally\n>> slow in spinning drives, into sequential. WAL is a good example of this.\n>>\n>> SSDs are generally slower than spinning at sequential IO and way faster\n>> at random.\n>>\n>> Expect therefore for SSD to help if you are random IO bound. (Some cloud\n>> vendors offer SSD as a way to get dedicated local io and bandwidth - so\n>> sometimes it helps stablize performance vs. virtualized shared io.)\n>>\n>> A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED\n>> MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN\n>>\n>> /Aaron\n>>\n>>\n>> On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <\n>> [email protected]> wrote:\n>>\n>> You don't mention the size of your database. Does it fit in memory? If so\n>> your disks aren't going to matter a whole lot outside of potentially being\n>> i/o bound on the writes. Otherwise getting your data into SSDs absolutely\n>> can have a few multiples of performance impact. The NVME M.2 drives can\n>> really pump out the data. 
Maybe push your WAL onto those (as few\n>> motherboards have more than two connectors) and use regular SSDs for your\n>> data if you have high write rates.\n>>\n>> Meanwhile, if you're looking for strong cloud hosting for Postgres but\n>> the speed of physical hardware, feel free to contact me as my company does\n>> this for some companies who found i/o limits on regular cloud providers to\n>> be way too slow for their needs.\n>>\n>> good luck (and pardon the crass commercial comments!),\n>>\n>> -- Ben Scherrey\n>>\n>> On Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]>\n>> wrote:\n>>\n>>> One of our four \"big iron\" (spinning disks) servers went belly up today.\n>>> (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to\n>>> a cloud service at the end of the year, so bad timing on this. We didn't\n>>> want to buy any more hardware, but now it looks like we have to.\n>>>\n>>> I followed the discussions about SSD drives when they were first\n>>> becoming mainstream; at that time, the Intel devices were king. Can anyone\n>>> recommend what's a good SSD configuration these days? I don't think we want\n>>> to buy a new server with spinning disks.\n>>>\n>>> We're replacing:\n>>> 8 core (Intel)\n>>> 48GB memory\n>>> 12-drive 7200 RPM 500GB\n>>> RAID1 (2 disks, OS and WAL log)\n>>> RAID10 (8 disks, postgres data dir)\n>>> 2 spares\n>>> Ubuntu 16.04\n>>> Postgres 9.6\n>>>\n>>> The current system peaks at about 7000 TPS from pgbench.\n>>>\n>>> Our system is a mix of non-transactional searching (customers) and\n>>> transactional data loading (us).\n>>>\n>>> Thanks!\n>>> Craig\n>>>\n>>> --\n>>> ---------------------------------\n>>> Craig A. James\n>>> Chief Technology Officer\n>>> eMolecules, Inc.\n>>> ---------------------------------\n>>>\n>>\n>>\n>\n>\n\nWell, I can give a measurement on my home PC, a Linux box running Ubuntu 17.10 with a Samsung 960 EVO 512GB NVME disk containing Postgres 10. Using your pgbench init I got for example:pgbench -c 10 -t 10000 teststarting vacuum...end.transaction type: <builtin: TPC-B (sort of)>scaling factor: 100query mode: simplenumber of clients: 10number of threads: 1number of transactions per client: 10000number of transactions actually processed: 100000/100000latency average = 0.679 mstps = 14730.402329 (including connections establishing)tps = 14733.000950 (excluding connections establishing)I will try to run a test on our production system which has a pair of Intel DC P4600 2TB in RAID0 tomorrow.On Tue, Apr 10, 2018 at 9:58 PM Charles Sprickman <[email protected]> wrote:On Apr 10, 2018, at 3:11 PM, Dmitry Shalashov <[email protected]> wrote:> SSDs are generally slower than spinning at sequential IO and way faster at random. Unreleased yet Seagate HDD boasts 480MB/s sequential read speed [1], and no HDD now can achieve that.Even SATA-3 SSD's could be faster than that for years now (550MB/s are quite typical), and NVME ones could be easily faster than 1GB/s and up to 3GB/s+.I'm curious to know where are you drawing these conclusions from?Yeah, that sequential info sounds weird.I’m only chiming in because I just setup one of those SoHo NAS boxes (Qnap) and it had both SSDs and HDDs installed.  This was to be used for video editing, so it’s almost all sequential reads/writes.  On 10Gb/s ethernet sequential reads off the cached content on the SSDs was somewhere around 800MB/s.  These were non-enterprise SSDs.Charles1. 
https://blog.seagate.com/enterprises/mach2-and-hamr-breakthrough-ocp/Dmitry Shalashov, relap.io & surfingbird.ru\n2018-04-10 22:00 GMT+03:00 Aaron <[email protected]>:RDBMS such as pg are beasts that turn random IO requests, traditionally slow in spinning drives, into sequential. WAL is a good example of this. SSDs are generally slower than spinning at sequential IO and way faster at random.Expect therefore for SSD to help if you are random IO bound. (Some cloud vendors offer SSD as a way to get dedicated local io and bandwidth - so sometimes it helps stablize performance vs. virtualized shared io.)A REASONABLE PERSON SHOULD ASSUME THAT UNBENCHMARKED AND UNRESEARCHED MIGRATION FROM TUNED SPINNING TO SSD WILL SLOW YOU DOWN/Aaron On Apr 10, 2018, at 12:54 PM, Benjamin Scherrey <[email protected]> wrote:You don't mention the size of your database. Does it fit in memory? If so your disks aren't going to matter a whole lot outside of potentially being i/o bound on the writes. Otherwise getting your data into SSDs absolutely can have a few multiples of performance impact. The NVME M.2 drives can really pump out the data. Maybe push your WAL onto those (as few motherboards have more than two connectors) and use regular SSDs for your data if you have high write rates.Meanwhile, if you're looking for strong cloud hosting for Postgres but the speed of physical hardware, feel free to contact me as my company does this for some companies who found i/o limits on regular cloud providers to be way too slow for their needs. good luck (and pardon the crass commercial comments!),  -- Ben ScherreyOn Tue, Apr 10, 2018 at 9:36 AM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 20:15:33 +0000", "msg_from": "Frits Jalvingh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "We have been using the Intel S3710 (or minor model variations thereof).\nThey have been great (consistent performance, power off safe and good\nexpected lifetime). Also 2 of them in RAID1 easily outperform a\nreasonably large number of 10K spinners in RAID10.\n\nNow you *can* still buy the S37xx series, but eventually I guess we'll\nhave to look at something more modern like the S45xx series. 
But I'm not\nso keen on them (they use TLC NAND which may give less consistent\nperformance, plus they appear to have slightly lower expected lifetime).\nI think there was a thread a year or more ago on this list specifically\nabout this very issue that might be worth searching for.\n\nThe TLC NAND seems like a big deal - most modern SSD are built using\nit...they solve the high latency problem with SLC caches. So you get\nbrilliant performance until the cache is full, then it drops off a\ncliff. Bigger/more expensive drives have bigger caches, so it is well\nworth finding in depth reviews of the exact models you might wish to\nevaluate!\n\nregards\nMark\n\nOn 10/04/18 14:36, Craig James wrote:\n> One of our four \"big iron\" (spinning disks) servers went belly up\n> today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're\n> planning to move to a cloud service at the end of the year, so bad\n> timing on this. We didn't want to buy any more hardware, but now it\n> looks like we have to.\n>\n> I followed the discussions about SSD drives when they were first\n> becoming mainstream; at that time, the Intel devices were king. Can\n> anyone recommend what's a good SSD configuration these days? I don't\n> think we want to buy a new server with spinning disks.\n>\n> We're replacing:\n>   8 core (Intel)\n>   48GB memory\n>   12-drive 7200 RPM 500GB\n>      RAID1 (2 disks, OS and WAL log)\n>      RAID10 (8 disks, postgres data dir)\n>      2 spares\n>   Ubuntu 16.04\n>   Postgres 9.6\n>\n> The current system peaks at about 7000 TPS from pgbench.\n>\n> Our system is a mix of non-transactional searching (customers) and\n> transactional data loading (us).\n>\n> Thanks!\n> Craig\n>\n\n\n\n\n\n\n\nWe have been using the Intel S3710 (or\n minor model variations thereof). They have been great (consistent\n performance, power off safe and good expected lifetime). Also 2 of\n them in RAID1 easily outperform a reasonably large number of 10K\n spinners in RAID10.\n\n Now you *can* still buy the S37xx series, but eventually I guess\n we'll have to look at something more modern like the S45xx series.\n But I'm not so keen on them (they use TLC NAND which may give less\n consistent performance, plus they appear to have slightly lower\n expected lifetime). I think there was a thread a year or more ago\n on this list specifically about this very issue that might be\n worth searching for.\n\n The TLC NAND seems like a big deal - most modern SSD are built\n using it...they solve the high latency problem with SLC caches. So\n you get brilliant performance until the cache is full, then it\n drops off a cliff. Bigger/more expensive drives have bigger\n caches, so it is well worth finding in depth reviews of the exact\n models you might wish to evaluate!\n\n regards\n Mark \n\n On 10/04/18 14:36, Craig James wrote:\n\n\nOne of our four \"big iron\" (spinning disks) servers\n went belly up today. (Thanks, Postgres and pgbackrest! Easy\n recovery.) We're planning to move to a cloud service at the end\n of the year, so bad timing on this. We didn't want to buy any\n more hardware, but now it looks like we have to.\n \n\nI followed the discussions about SSD drives when they were\n first becoming mainstream; at that time, the Intel devices\n were king. Can anyone recommend what's a good SSD\n configuration these days? 
I don't think we want to buy a new\n server with spinning disks.\n\n\nWe're replacing:\n  8 core (Intel)\n\n \n 48GB memory\n   12-drive 7200 RPM 500GB\n     RAID1 (2 disks, OS and WAL log)\n     RAID10 (8 disks, postgres data dir)\n     2 spares\n  Ubuntu 16.04\n\n  Postgres 9.6\n\n\nThe current system peaks at about 7000 TPS from pgbench.\n\n\nOur system is a mix of non-transactional searching\n (customers) and transactional data loading (us).\n\n\nThanks!\nCraig", "msg_date": "Wed, 11 Apr 2018 10:56:44 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "The most critical bit of advice I've found is setting this preference:\n\nhttps://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0\n\nI'm using 4 512GB Samsung 850 EVOs in a hardware RAID 10 on a 1U server with about 144 GB RAM and 8 Xeon cores. I usually burn up CPU more than I burn up disks or RAM as compared to using magnetic where I had horrible IO wait percentages, so it seems to be performing quite well so far. \n\nMatthew Hall\n\n> On Apr 9, 2018, at 7:36 PM, Craig James <[email protected]> wrote:\n> \n> One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.\n> \n> I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? I don't think we want to buy a new server with spinning disks.\n> \n> We're replacing:\n> 8 core (Intel)\n> 48GB memory\n> 12-drive 7200 RPM 500GB\n> RAID1 (2 disks, OS and WAL log)\n> RAID10 (8 disks, postgres data dir)\n> 2 spares\n> Ubuntu 16.04\n> Postgres 9.6\n> \n> The current system peaks at about 7000 TPS from pgbench.\n> \n> Our system is a mix of non-transactional searching (customers) and transactional data loading (us).\n> \n> Thanks!\n> Craig\n> \n> -- \n> ---------------------------------\n> Craig A. James\n> Chief Technology Officer\n> eMolecules, Inc.\n> ---------------------------------\n\nThe most critical bit of advice I've found is setting this preference:https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0I'm using 4 512GB Samsung 850 EVOs in a hardware RAID 10 on a 1U server with about 144 GB RAM and 8 Xeon cores. I usually burn up CPU more than I burn up disks or RAM as compared to using magnetic where I had horrible IO wait percentages, so it seems to be performing quite well so far. Matthew HallOn Apr 9, 2018, at 7:36 PM, Craig James <[email protected]> wrote:One of our four \"big iron\" (spinning disks) servers went belly up today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a cloud service at the end of the year, so bad timing on this. We didn't want to buy any more hardware, but now it looks like we have to.I followed the discussions about SSD drives when they were first becoming mainstream; at that time, the Intel devices were king. Can anyone recommend what's a good SSD configuration these days? 
I don't think we want to buy a new server with spinning disks.We're replacing:  8 core (Intel)  48GB memory  12-drive 7200 RPM 500GB     RAID1 (2 disks, OS and WAL log)     RAID10 (8 disks, postgres data dir)     2 spares  Ubuntu 16.04  Postgres 9.6The current system peaks at about 7000 TPS from pgbench.Our system is a mix of non-transactional searching (customers) and transactional data loading (us).Thanks!Craig-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Tue, 10 Apr 2018 18:39:21 -0700", "msg_from": "Matthew Hall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "The 512 Gb model is big enough that the SLC cache and performance is \ngonna be ok. What would worry me is the lifetime: individual 512 Gb 850 \nEVOs are rated at 150 Tb over 5 years. Compare that to the Intel S3710 - \n400 Gb is rated at 8 Pb over 5 years. These drives are fast enough so \nthat you *might* write more than 4x 150 = 600 Tb over 5 years...\n\n\nIn addition - Samsung are real cagey about the power loss reliability of \nthese drives - I suspect that if you do lose power unexpectedly then \ndata corruption will result (no capacitors to keep RAM cache in sync).\n\n\nregards\n\nMark\n\n\nOn 11/04/18 13:39, Matthew Hall wrote:\n> The most critical bit of advice I've found is setting this preference:\n>\n> https://amplitude.engineering/how-a-single-postgresql-config-change-improved-slow-query-performance-by-50x-85593b8991b0\n>\n> I'm using 4 512GB Samsung 850 EVOs in a hardware RAID 10 on a 1U \n> server with about 144 GB RAM and 8 Xeon cores. I usually burn up CPU \n> more than I burn up disks or RAM as compared to using magnetic where I \n> had horrible IO wait percentages, so it seems to be performing quite \n> well so far.\n>\n> Matthew Hall\n>\n> On Apr 9, 2018, at 7:36 PM, Craig James <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n>> One of our four \"big iron\" (spinning disks) servers went belly up \n>> today. (Thanks, Postgres and pgbackrest! Easy recovery.) We're \n>> planning to move to a cloud service at the end of the year, so bad \n>> timing on this. We didn't want to buy any more hardware, but now it \n>> looks like we have to.\n>>\n>> I followed the discussions about SSD drives when they were first \n>> becoming mainstream; at that time, the Intel devices were king. Can \n>> anyone recommend what's a good SSD configuration these days? I don't \n>> think we want to buy a new server with spinning disks.\n>>\n>> We're replacing:\n>>   8 core (Intel)\n>> 48GB memory\n>>   12-drive 7200 RPM 500GB\n>>      RAID1 (2 disks, OS and WAL log)\n>>      RAID10 (8 disks, postgres data dir)\n>>      2 spares\n>>   Ubuntu 16.04\n>>   Postgres 9.6\n>>\n>> The current system peaks at about 7000 TPS from pgbench.\n>>\n>> Our system is a mix of non-transactional searching (customers) and \n>> transactional data loading (us).\n>>\n>> Thanks!\n>> Craig\n>>\n>> -- \n>> ---------------------------------\n>> Craig A. James\n>> Chief Technology Officer\n>> eMolecules, Inc.\n>> ---------------------------------\n\n\n", "msg_date": "Thu, 12 Apr 2018 17:11:08 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "On Thu, Apr 12, 2018 at 8:11 AM, Mark Kirkwood\n<[email protected]> wrote:\n> The 512 Gb model is big enough that the SLC cache and performance is gonna\n> be ok. 
What would worry me is the lifetime: individual 512 Gb 850 EVOs are\n> rated at 150 Tb over 5 years. Compare that to the Intel S3710 - 400 Gb is\n> rated at 8 Pb over 5 years. These drives are fast enough so that you *might*\n> write more than 4x 150 = 600 Tb over 5 years...\n>\n>\n> In addition - Samsung are real cagey about the power loss reliability of\n> these drives - I suspect that if you do lose power unexpectedly then data\n> corruption will result (no capacitors to keep RAM cache in sync).\n\nI have done a lot of pull-the-plug testing on Samsung 850 M2 drives as\na side effect of a HA demo setup. I haven't kept any numbers, but on a\ntiny database with a smallish 100tps workload I am seeing data\ncorruption in about 1% of cases. Things like empty pg_control files,\nsections of WAL replaced with zeroes and/or old data. OS level write\ncache tuning is not enough to get rid of it.\n\nBased on that and the fact that interrupting SSD garbage collection\nmight also cause data loss, my recommendation is to either avoid\nconsumer drives for important databases. Or if you are adventurous\nhave multiple replicas in different power domains and have operational\nprocedures in place to reimage hosts on power loss.\n\n--\nAnts Aasma\nCybertec Schönig & Schönig GmbH\nGröhrmühlgasse 26, A-2700 Wiener Neustadt\nWeb: https://www.cybertec-postgresql.com\n\n", "msg_date": "Fri, 13 Apr 2018 12:55:03 +0300", "msg_from": "Ants Aasma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "På tirsdag 10. april 2018 kl. 19:41:59, skrev Craig James <[email protected]\n <mailto:[email protected]>>:\n    On Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh <[email protected] \n<mailto:[email protected]>> wrote: På tirsdag 10. april 2018 kl. 04:36:27, \nskrev Craig James <[email protected] <mailto:[email protected]>>:\nOne of our four \"big iron\" (spinning disks) servers went belly up today. \n(Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a \ncloud service at the end of the year, so bad timing on this. We didn't want to \nbuy any more hardware, but now it looks like we have to. \nI followed the discussions about SSD drives when they were first becoming \nmainstream; at that time, the Intel devices were king. Can anyone recommend \nwhat's a good SSD configuration these days? I don't think we want to buy a new \nserver with spinning disks.\n \nWe're replacing:\n  8 core (Intel)\n  48GB memory\n   12-drive 7200 RPM 500GB\n     RAID1 (2 disks, OS and WAL log)\n     RAID10 (8 disks, postgres data dir)\n     2 spares\n  Ubuntu 16.04\n  Postgres 9.6\n \nThe current system peaks at about 7000 TPS from pgbench.\n\n \nWith what arguments (also initialization)?\n \n \npgbench -i -s 100 -U test\npgbench -U test -c ... 
-t ...\n\n \n-c  -t     TPS\n5   20000  5202\n10  10000  7916\n20  5000   7924\n30  3333   7270\n40  2500   5020\n50  2000   6417\n\n\n\n \nFWIW; We're testing \nthis: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\nwith 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n \n$ pgbench -s 100 -c 64 -t 10000 pgbench\n scale option ignored, using count from pgbench_branches table (100)\n starting vacuum...end.\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 100\n query mode: simple\n number of clients: 64\n number of threads: 1\n number of transactions per client: 10000\n number of transactions actually processed: 640000/640000\n latency average = 2.867 ms\n tps = 22320.942063 (including connections establishing)\n tps = 22326.370955 (excluding connections establishing)\n \n \n--\nAndreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 9 May 2018 22:00:16 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Re: Latest advice on SSD?" }, { "msg_contents": "På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh <\[email protected] <mailto:[email protected]>>:\nPå tirsdag 10. april 2018 kl. 19:41:59, skrev Craig James <\[email protected] <mailto:[email protected]>>:\n    On Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh <[email protected] \n<mailto:[email protected]>> wrote: På tirsdag 10. april 2018 kl. 04:36:27, \nskrev Craig James <[email protected] <mailto:[email protected]>>:\nOne of our four \"big iron\" (spinning disks) servers went belly up today. \n(Thanks, Postgres and pgbackrest! Easy recovery.) We're planning to move to a \ncloud service at the end of the year, so bad timing on this. We didn't want to \nbuy any more hardware, but now it looks like we have to. \nI followed the discussions about SSD drives when they were first becoming \nmainstream; at that time, the Intel devices were king. Can anyone recommend \nwhat's a good SSD configuration these days? I don't think we want to buy a new \nserver with spinning disks.\n \nWe're replacing:\n  8 core (Intel)\n  48GB memory\n   12-drive 7200 RPM 500GB\n     RAID1 (2 disks, OS and WAL log)\n     RAID10 (8 disks, postgres data dir)\n     2 spares\n  Ubuntu 16.04\n  Postgres 9.6\n \nThe current system peaks at about 7000 TPS from pgbench.\n\n \nWith what arguments (also initialization)?\n \n \npgbench -i -s 100 -U test\npgbench -U test -c ... 
-t ...\n\n \n-c  -t     TPS\n5   20000  5202\n10  10000  7916\n20  5000   7924\n30  3333   7270\n40  2500   5020\n50  2000   6417\n\n\n\n \nFWIW; We're testing \nthis: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\nwith 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n \n$ pgbench -s 100 -c 64 -t 10000 pgbench\n scale option ignored, using count from pgbench_branches table (100)\n starting vacuum...end.\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 100\n query mode: simple\n number of clients: 64\n number of threads: 1\n number of transactions per client: 10000\n number of transactions actually processed: 640000/640000\n latency average = 2.867 ms\n tps = 22320.942063 (including connections establishing)\n tps = 22326.370955 (excluding connections establishing)\n \nSorry, wrong disks; this is correct:\n \n48 clients:\npgbench -s 100 -c 48 -t 10000 pgbench \n scale option ignored, using count from pgbench_branches table (100)\n starting vacuum...end.\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 100\n query mode: simple\n number of clients: 48\n number of threads: 1\n number of transactions per client: 10000\n number of transactions actually processed: 480000/480000\n latency average = 1.608 ms\n tps = 29846.511054 (including connections establishing)\n tps = 29859.483666 (excluding connections establishing)\n  \n \n64 clients:\npgbench -s 100 -c 64 -t 10000 pgbench \n scale option ignored, using count from pgbench_branches table (100)\n starting vacuum...end.\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 100\n query mode: simple\n number of clients: 64\n number of threads: 1\n number of transactions per client: 10000\n number of transactions actually processed: 640000/640000\n latency average = 2.279 ms\n tps = 28077.261708 (including connections establishing)\n tps = 28085.730160 (excluding connections establishing)\n\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Fri, 11 May 2018 13:23:54 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Sv: Re: Latest advice on SSD?" }, { "msg_contents": "On 11/05/18 23:23, Andreas Joseph Krogh wrote:\n\n> På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh \n> <[email protected] <mailto:[email protected]>>:\n>\n> På tirsdag 10. april 2018 kl. 19:41:59, skrev Craig James\n> <[email protected] <mailto:[email protected]>>:\n>\n> On Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh\n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> På tirsdag 10. april 2018 kl. 04:36:27, skrev Craig James\n> <[email protected] <mailto:[email protected]>>:\n>\n> One of our four \"big iron\" (spinning disks) servers\n> went belly up today. (Thanks, Postgres and pgbackrest!\n> Easy recovery.) We're planning to move to a cloud\n> service at the end of the year, so bad timing on this.\n> We didn't want to buy any more hardware, but now it\n> looks like we have to.\n> I followed the discussions about SSD drives when they\n> were first becoming mainstream; at that time, the\n> Intel devices were king. Can anyone recommend what's a\n> good SSD configuration these days? 
I don't think we\n> want to buy a new server with spinning disks.\n> We're replacing:\n>   8 core (Intel)\n> 48GB memory\n>   12-drive 7200 RPM 500GB\n>      RAID1 (2 disks, OS and WAL log)\n>      RAID10 (8 disks, postgres data dir)\n>      2 spares\n>   Ubuntu 16.04\n>   Postgres 9.6\n> The current system peaks at about 7000 TPS from pgbench.\n>\n> With what arguments (also initialization)?\n>\n> pgbench -i -s 100 -U test\n> pgbench -U test -c ... -t ...\n> -c  -t     TPS\n> 5   20000  5202\n> 10  10000  7916\n> 20  5000   7924\n> 30  3333   7270\n> 40  2500   5020\n> 50  2000   6417\n>\n> FWIW; We're testing\n> this: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\n> with 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n> $ pgbench -s 100 -c 64 -t 10000 pgbench\n> scale option ignored, using count from pgbench_branches table (100)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 640000/640000\n> latency average = 2.867 ms\n> tps = 22320.942063 (including connections establishing)\n> tps = 22326.370955 (excluding connections establishing)\n>\n> Sorry, wrong disks; this is correct:\n> 48 clients:\n> pgbench -s 100 -c 48 -t 10000 pgbench\n> scale option ignored, using count from pgbench_branches table (100)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 100\n> query mode: simple\n> number of clients: 48\n> number of threads: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 480000/480000\n> latency average = 1.608 ms\n> tps = 29846.511054 (including connections establishing)\n> tps = 29859.483666 (excluding connections establishing)\n> 64 clients:\n> pgbench -s 100 -c 64 -t 10000 pgbench\n> scale option ignored, using count from pgbench_branches table (100)\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 1\n> number of transactions per client: 10000\n> number of transactions actually processed: 640000/640000\n> latency average = 2.279 ms\n> tps = 28077.261708 (including connections establishing)\n> tps = 28085.730160 (excluding connections establishing)\n>\nIf I'm doing the math properly, then these runs are very short (i.e \nabout 20s). It would be interesting to specify a time limit (e.g -T600 \nor similar) so we see the effect of at least one checkpoint - i.e the \ndisks are actually forced to write and sync the transaction data.\n\nThese Micron disks look interesting (pretty good IOPS and lifetime \nnumbers). However (as usual with Micron, sadly) no data about power off \nsafety. Do you know if the the circuit board has capacitors?\n\nregards\nMark\n\n", "msg_date": "Sat, 12 May 2018 00:11:39 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sv: Sv: Re: Latest advice on SSD?" }, { "msg_contents": "På fredag 11. mai 2018 kl. 14:11:39, skrev Mark Kirkwood <\[email protected] <mailto:[email protected]>>:\nOn 11/05/18 23:23, Andreas Joseph Krogh wrote:\n\n > På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh\n > <[email protected] <mailto:[email protected]>>:\n >\n >     På tirsdag 10. april 2018 kl. 
19:41:59, skrev Craig James\n >     <[email protected] <mailto:[email protected]>>:\n >\n >         On Tue, Apr 10, 2018 at 12:21 AM, Andreas Joseph Krogh\n >         <[email protected] <mailto:[email protected]>> wrote:\n >\n >             På tirsdag 10. april 2018 kl. 04:36:27, skrev Craig James\n >             <[email protected] <mailto:[email protected]>>:\n >\n >                 One of our four \"big iron\" (spinning disks) servers\n >                 went belly up today. (Thanks, Postgres and pgbackrest!\n >                 Easy recovery.) We're planning to move to a cloud\n >                 service at the end of the year, so bad timing on this.\n >                 We didn't want to buy any more hardware, but now it\n >                 looks like we have to.\n >                 I followed the discussions about SSD drives when they\n >                 were first becoming mainstream; at that time, the\n >                 Intel devices were king. Can anyone recommend what's a\n >                 good SSD configuration these days? I don't think we\n >                 want to buy a new server with spinning disks.\n >                 We're replacing:\n >                   8 core (Intel)\n >                 48GB memory\n >                   12-drive 7200 RPM 500GB\n >                      RAID1 (2 disks, OS and WAL log)\n >                      RAID10 (8 disks, postgres data dir)\n >                      2 spares\n >                   Ubuntu 16.04\n >                   Postgres 9.6\n >                 The current system peaks at about 7000 TPS from pgbench.\n >\n >             With what arguments (also initialization)?\n >\n >         pgbench -i -s 100 -U test\n >         pgbench -U test -c ... -t ...\n >         -c  -t     TPS\n >         5   20000  5202\n >         10  10000  7916\n >         20  5000   7924\n >         30  3333   7270\n >         40  2500   5020\n >         50  2000   6417\n >\n >     FWIW; We're testing\n >    \n this: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\n >     with 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n >     $ pgbench -s 100 -c 64 -t 10000 pgbench\n >     scale option ignored, using count from pgbench_branches table (100)\n >     starting vacuum...end.\n >     transaction type: <builtin: TPC-B (sort of)>\n >     scaling factor: 100\n >     query mode: simple\n >     number of clients: 64\n >     number of threads: 1\n >     number of transactions per client: 10000\n >     number of transactions actually processed: 640000/640000\n >     latency average = 2.867 ms\n >     tps = 22320.942063 (including connections establishing)\n >     tps = 22326.370955 (excluding connections establishing)\n >\n > Sorry, wrong disks; this is correct:\n > 48 clients:\n > pgbench -s 100 -c 48 -t 10000 pgbench\n > scale option ignored, using count from pgbench_branches table (100)\n > starting vacuum...end.\n > transaction type: <builtin: TPC-B (sort of)>\n > scaling factor: 100\n > query mode: simple\n > number of clients: 48\n > number of threads: 1\n > number of transactions per client: 10000\n > number of transactions actually processed: 480000/480000\n > latency average = 1.608 ms\n > tps = 29846.511054 (including connections establishing)\n > tps = 29859.483666 (excluding connections establishing)\n > 64 clients:\n > pgbench -s 100 -c 64 -t 10000 pgbench\n > scale option ignored, using count from pgbench_branches table (100)\n > starting vacuum...end.\n > transaction type: <builtin: TPC-B (sort of)>\n > scaling factor: 100\n > 
query mode: simple\n > number of clients: 64\n > number of threads: 1\n > number of transactions per client: 10000\n > number of transactions actually processed: 640000/640000\n > latency average = 2.279 ms\n > tps = 28077.261708 (including connections establishing)\n > tps = 28085.730160 (excluding connections establishing)\n >\n If I'm doing the math properly, then these runs are very short (i.e\n about 20s). It would be interesting to specify a time limit (e.g -T600\n or similar) so we see the effect of at least one checkpoint - i.e the\n disks are actually forced to write and sync the transaction data.\n\n These Micron disks look interesting (pretty good IOPS and lifetime\n numbers). However (as usual with Micron, sadly) no data about power off\n safety. Do you know if the the circuit board has capacitors?\n\n regards\n Mark\n \n$ pgbench -s 100 -c 64 -T600 pgbench\n scale option ignored, using count from pgbench_branches table (100)\n starting vacuum...end.\n transaction type: <builtin: TPC-B (sort of)>\n scaling factor: 100\n query mode: simple\n number of clients: 64\n number of threads: 1\n duration: 600 s\n number of transactions actually processed: 16979208\n latency average = 2.262 ms\n tps = 28298.582988 (including connections establishing)\n tps = 28298.926331 (excluding connections establishing)\n  \n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Fri, 11 May 2018 16:00:49 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Re: Sv: Sv: Re: Latest advice on SSD?" }, { "msg_contents": "På fredag 11. mai 2018 kl. 14:11:39, skrev Mark Kirkwood <\[email protected] <mailto:[email protected]>>:\n[snip]\n These Micron disks look interesting (pretty good IOPS and lifetime\n numbers). However (as usual with Micron, sadly) no data about power off\n safety. Do you know if the the circuit board has capacitors?\n \nDon't know, sorry...\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Fri, 11 May 2018 16:17:59 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Sv: Re: Sv: Sv: Re: Latest advice on SSD?" }, { "msg_contents": "> On May 11, 2018, at 15:11, Mark Kirkwood <[email protected]> wrote:\n> \n> On 11/05/18 23:23, Andreas Joseph Krogh wrote:\n> \n>> På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh <[email protected] <mailto:[email protected]>>:\n>> \n>> FWIW; We're testing\n>> this: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\n>> with 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n>> \n> These Micron disks look interesting (pretty good IOPS and lifetime numbers). However (as usual with Micron, sadly) no data about power off safety. Do you know if the the circuit board has capacitors?\n\nAccording to https://www.micron.com/~/media/documents/products/data-sheet/ssd/9200_u_2_pcie_ssd.pdf <https://www.micron.com/~/media/documents/products/data-sheet/ssd/9200_u_2_pcie_ssd.pdf>\n\nThe SSD supports an unexpected power loss with a power-backed write cache. No userdata is lost during an unexpected power loss. 
When power is subsequently restored, theSSD returns to a ready state within a maximum of 60 seconds.\nOn May 11, 2018, at 15:11, Mark Kirkwood <[email protected]> wrote:On 11/05/18 23:23, Andreas Joseph Krogh wrote:På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh <[email protected] <mailto:[email protected]>>:    FWIW; We're testing    this: https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm    with 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:   These Micron disks look interesting (pretty good IOPS and lifetime numbers). However (as usual with Micron, sadly) no data about power off safety. Do you know if the the circuit board has capacitors?According to https://www.micron.com/~/media/documents/products/data-sheet/ssd/9200_u_2_pcie_ssd.pdfThe SSD supports an unexpected power loss with a power-backed write cache. No userdata is lost during an unexpected power loss. When power is subsequently restored, theSSD returns to a ready state within a maximum of 60 seconds.", "msg_date": "Fri, 11 May 2018 17:48:25 +0300", "msg_from": "Evgeniy Shishkin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" }, { "msg_contents": "On 12/05/18 02:48, Evgeniy Shishkin wrote:\n\n>\n>\n>> On May 11, 2018, at 15:11, Mark Kirkwood \n>> <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> On 11/05/18 23:23, Andreas Joseph Krogh wrote:\n>>\n>>> På onsdag 09. mai 2018 kl. 22:00:16, skrev Andreas Joseph Krogh \n>>> <[email protected] <mailto:[email protected]> \n>>> <mailto:[email protected]>>:\n>>>\n>>>    FWIW; We're testing\n>>>    this: \n>>> https://www.supermicro.nl/products/system/1U/1029/SYS-1029U-TN10RT.cfm\n>>>    with 4 x Micron NVMe 9200 PRO NVMe 3.84TB U.2 in RAID-10:\n>> These Micron disks look interesting (pretty good IOPS and lifetime \n>> numbers). However (as usual with Micron, sadly) no data about power \n>> off safety. Do you know if the the circuit board has capacitors?\n>\n> According to \n> https://www.micron.com/~/media/documents/products/data-sheet/ssd/9200_u_2_pcie_ssd.pdf \n> <https://www.micron.com/%7E/media/documents/products/data-sheet/ssd/9200_u_2_pcie_ssd.pdf>\n>\n> The SSD supports an unexpected power loss with a power-backed write \n> cache. No userdata is lost during an unexpected power loss. When power \n> is subsequently restored, theSSD returns to a ready state within a \n> maximum of 60 seconds.\n\nExcellent, and thanks for finding the details - note the document \nexplicitly states that they have capacitor backed power loss protection. \nSo looking good as a viable alternative to Intel's S4500, S4600, P4500, \nP4600 range. One point to note - we've been here before with Micron \nclaiming power loss protection and having to retract it later (Crucial \nM550 range...I have 2 of these BTW) - but to be fair to Micron the \nCrucial range is purely consumer and this Micron 9200 is obviously an \nenterprise targeted product. But some power loss testing might be advised!\n\nCheers\nMark\n\n\n", "msg_date": "Sat, 12 May 2018 17:19:13 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Latest advice on SSD?" } ]
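Two practical take-aways from the benchmarking sub-thread above are worth spelling out: short -t runs can finish before a single checkpoint fires, so a time-limited run (-T) gives a more honest TPS figure, and once the data sits on SSD the planner's I/O cost settings usually want revisiting. The sketch below is a minimal illustration only; the scale factor, duration, user/database names and the two setting values are assumptions, not figures endorsed by the posters (and ALTER SYSTEM needs PostgreSQL 9.4 or newer).

# Hedged sketch -- names and values are illustrative placeholders.
createdb -U test bench                    # assumes a "test" role with createdb rights
pgbench -i -s 100 -U test bench           # initialize a scale-100 dataset
pgbench -U test -c 64 -j 8 -T 600 bench   # 10-minute run, long enough to cross checkpoints

psql -U postgres -d bench <<'SQL'
ALTER SYSTEM SET random_page_cost = 1.1;          -- random reads are cheap on SSD
ALTER SYSTEM SET effective_io_concurrency = 200;  -- flash handles many queued requests
SELECT pg_reload_conf();
SQL

Re-running the time-limited benchmark after changing the settings is the only way to confirm they actually help on a given box.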
[ { "msg_contents": "I have a strange performance issue, i think that is not possible:\n\nGiven this statement:\n\nSELECT *several_fields* FROM A, B, C WHERE *conditions*\nA, B are tables with several LEFT JOINS but they act as one subquery.\n\nIf I execute the select above:\n\nSELECT *several_fields* FROM A, B, C WHERE *conditions*\n*Time: 30 secs*\n*Cost: 1M*\n\n\nIf I execute the same select (same parameters) but swapping A and B in the\nfrom clause:\n\nSELECT *several_fields* FROM B, A, C WHERE *conditions*\n*Time: 19ms*\n*Cost: 10k*\n\nThe plan changes dramatically: I can't see why the order of FROM clause\nimpacts directly on the query cost and plan. If this is possible, where i\ncan read about it? I need to know how the order of FROM clause modifies the\nquery plan.\n\nThanks in advance. This is my first post.\n\nEduard Català\n\nI have a strange performance issue, i think that is not possible:Given this statement:SELECT several_fields  FROM A, B, C WHERE  conditionsA, B are tables with several LEFT JOINS but they act as one subquery.If I execute the select above:SELECT several_fields  FROM A, B, C WHERE  conditionsTime: 30 secsCost: 1MIf I execute the same select (same parameters) but swapping A and B in the from clause:SELECT several_fields  FROM B, A, C WHERE  conditionsTime: 19msCost: 10kThe plan changes dramatically: I can't see why the order of FROM clause impacts directly on the query cost and plan. If this is possible, where i can read about it? I need to know how the order of FROM clause modifies the query plan.Thanks in advance. This is my first post.Eduard Català", "msg_date": "Thu, 12 Apr 2018 16:35:09 +0200", "msg_from": "=?UTF-8?Q?Eduard_Catal=C3=A0?= <[email protected]>", "msg_from_op": true, "msg_subject": "Table order at FROM clause affects performance?" }, { "msg_contents": "=?UTF-8?Q?Eduard_Catal=C3=A0?= <[email protected]> writes:\n> Given this statement:\n\n> SELECT *several_fields* FROM A, B, C WHERE *conditions*\n> A, B are tables with several LEFT JOINS but they act as one subquery.\n\nYou really can't expect useful help if you are going to pose questions\nthat abstract. You have removed details that count, and made assumptions\nthat don't necessarily hold (e.g., what does \"act as one subquery\" mean?)\n\nProbably the most likely bet, on this limited information, is that there\nare enough base tables hidden inside your query that you're running into\njoin_collapse_limit and/or from_collapse_limit, resulting in the planner\nfailing to investigate the best available plan in one case. Raising those\nlimits would help, if so. But it could easily be something else.\n\nThere's some suggestions here about how to ask useful questions:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 12 Apr 2018 11:30:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table order at FROM clause affects performance?" }, { "msg_contents": "Yes... 
i can't exepct useful help with my poor explanation\nbut your aproach is the right answer!\n\nWe were limited with from_collapse_limit.\n\nCost now is: 112\n\nMany many thanks.\n\n\n\n\n\nOn Thu, Apr 12, 2018 at 5:30 PM, Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?Q?Eduard_Catal=C3=A0?= <[email protected]> writes:\n> > Given this statement:\n>\n> > SELECT *several_fields* FROM A, B, C WHERE *conditions*\n> > A, B are tables with several LEFT JOINS but they act as one subquery.\n>\n> You really can't expect useful help if you are going to pose questions\n> that abstract. You have removed details that count, and made assumptions\n> that don't necessarily hold (e.g., what does \"act as one subquery\" mean?)\n>\n> Probably the most likely bet, on this limited information, is that there\n> are enough base tables hidden inside your query that you're running into\n> join_collapse_limit and/or from_collapse_limit, resulting in the planner\n> failing to investigate the best available plan in one case. Raising those\n> limits would help, if so. But it could easily be something else.\n>\n> There's some suggestions here about how to ask useful questions:\n>\n> https://wiki.postgresql.org/wiki/Slow_Query_Questions\n>\n> regards, tom lane\n>\n\nYes... i can't exepct useful help with my poor explanation but your aproach is the right answer! We were limited with from_collapse_limit. Cost now is: 112Many many thanks.On Thu, Apr 12, 2018 at 5:30 PM, Tom Lane <[email protected]> wrote:=?UTF-8?Q?Eduard_Catal=C3=A0?= <[email protected]> writes:\n> Given this statement:\n\n> SELECT *several_fields*  FROM A, B, C WHERE  *conditions*\n> A, B are tables with several LEFT JOINS but they act as one subquery.\n\nYou really can't expect useful help if you are going to pose questions\nthat abstract.  You have removed details that count, and made assumptions\nthat don't necessarily hold (e.g., what does \"act as one subquery\" mean?)\n\nProbably the most likely bet, on this limited information, is that there\nare enough base tables hidden inside your query that you're running into\njoin_collapse_limit and/or from_collapse_limit, resulting in the planner\nfailing to investigate the best available plan in one case.  Raising those\nlimits would help, if so.  But it could easily be something else.\n\nThere's some suggestions here about how to ask useful questions:\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\n                        regards, tom lane", "msg_date": "Thu, 12 Apr 2018 18:25:07 +0200", "msg_from": "=?UTF-8?Q?Eduard_Catal=C3=A0?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table order at FROM clause affects performance?" } ]
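For anyone who lands on this thread with the same symptom: the knobs Tom Lane names can be raised per session before re-planning the query. The snippet below is schematic -- the value 16 is an arbitrary assumption (the default for both settings is 8), the connection details are placeholders, and the query is the abstract one from the thread, so substitute the real statement.

# Hedged sketch -- connection details and the limit value are assumptions.
psql -U app -d mydb <<'SQL'
SET from_collapse_limit = 16;   -- allow larger FROM lists to be flattened
SET join_collapse_limit = 16;   -- allow explicit JOIN lists to be reordered too
EXPLAIN ANALYZE
SELECT several_fields FROM a, b, c WHERE conditions;  -- schematic; use the real query here
SQL

If the higher limits fix the plan, the same values can be made persistent in postgresql.conf or scoped with ALTER ROLE ... SET / ALTER DATABASE ... SET, at the cost of somewhat longer planning time for queries with many relations.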
[ { "msg_contents": "Hello, \nI need help in using postgresql 8.4 data in postgres 9.4 version. Do I \nneed to run any tool to achieve the same?\n\nSteps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4 \ninstance to 9.4 and try to start postgresql 9.4 but no luck, getting below \nerror.\n\n[root@ms-esmon esm-data]# su - postgres -c \n\"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n[root@ms-esmon esm-data]# LOG: skipping missing configuration file \n\"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n2018-04-16 06:52:01.546 GMT FATAL: database files are incompatible with \nserver\n2018-04-16 06:52:01.546 GMT DETAIL: The data directory was initialized \nby PostgreSQL version 8.4, which is not compatible with this version \n9.4.9.\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. IT Services\n Business Solutions\n Consulting\n____________________________________________\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHello, \nI need help in using postgresql 8.4\ndata in postgres 9.4 version. Do I need to run any tool to achieve the\nsame?\n\nSteps i followed is ran postgresql 8.4\nand 9.4, copied data from 8.4 instance to 9.4 and try to start postgresql\n9.4 but no luck, getting below error.\n\n[root@ms-esmon esm-data]# su - postgres\n-c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data/\n2>&1 &\"\n[root@ms-esmon esm-data]# LOG:  skipping\nmissing configuration file \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n2018-04-16 06:52:01.546 GMT  FATAL:\n database files are incompatible with server\n2018-04-16 06:52:01.546 GMT  DETAIL:\n The data directory was initialized by PostgreSQL version 8.4, which\nis not compatible with this version 9.4.9.\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty.        IT Services\n                \n       Business Solutions\n                \n       Consulting\n____________________________________________\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. 
Thank you", "msg_date": "Mon, 16 Apr 2018 12:33:12 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Data migration from postgres 8.4 to 9.4" }, { "msg_contents": "On Mon, Apr 16, 2018 at 12:33 PM, Akshay Ballarpure <\[email protected]> wrote:\n\n> Hello,\n> I need help in using postgresql 8.4 data in postgres 9.4 version. Do I\n> need to run any tool to achieve the same?\n>\n> Steps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4\n> instance to 9.4 and try to start postgresql 9.4 but no luck, getting below\n> error.\n>\n> [root@ms-esmon esm-data]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres\n> -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n> [root@ms-esmon esm-data]# LOG: skipping missing configuration file\n> \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n> 2018-04-16 06:52:01.546 GMT *FATAL*: database files are incompatible\n> with server\n> 2018-04-16 06:52:01.546 GMT *DETAIL*: The data directory was\n> initialized by PostgreSQL version 8.4, which is not compatible with this\n> version 9.4.9.\n>\n>\n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com\n\n\nYou cannot simply copy data between major versions. Look into pg_upgrade\nutility to upgrade your database, or you could use pg_dump/pg_restore to\nmigrate between major versions.\n\nAmitabh\n\nOn Mon, Apr 16, 2018 at 12:33 PM, Akshay Ballarpure <[email protected]> wrote:Hello, \nI need help in using postgresql 8.4\ndata in postgres 9.4 version. Do I need to run any tool to achieve the\nsame?\n\nSteps i followed is ran postgresql 8.4\nand 9.4, copied data from 8.4 instance to 9.4 and try to start postgresql\n9.4 but no luck, getting below error.\n\n[root@ms-esmon esm-data]# su - postgres\n-c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data/\n2>&1 &\"\n[root@ms-esmon esm-data]# LOG:  skipping\nmissing configuration file \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n2018-04-16 06:52:01.546 GMT  FATAL:\n database files are incompatible with server\n2018-04-16 06:52:01.546 GMT  DETAIL:\n The data directory was initialized by PostgreSQL version 8.4, which\nis not compatible with this version 9.4.9.\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.comYou cannot simply copy data between major versions. Look into pg_upgrade utility to upgrade your database, or you could use pg_dump/pg_restore to migrate between major versions.Amitabh", "msg_date": "Mon, 16 Apr 2018 12:41:12 +0530", "msg_from": "Amitabh Kant <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data migration from postgres 8.4 to 9.4" }, { "msg_contents": "Am 16.04.2018 um 09:03 schrieb Akshay Ballarpure:\n> Hello,\n\nHi Akshay,\n\n> I need help in using postgresql 8.4 data in postgres 9.4 version. Do I\n> need to run any tool to achieve the same?\n\nYes. (-performance is probably the wrong place to ask though, please try\n-general or -admin next time)\n\nPlease check the release notes before doing *any* upgrade, esp. when\nskipping 4 major releases. 
They (among other very important information)\ncontain instructions how to upgrade:\nhttps://www.postgresql.org/docs/current/static/release-9-4.html#id-1.11.6.48.4\nYou'll probably end up doing a pg_upgrade run (which is linked from the\nabove).\n\n> \n> Steps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4\n> instance to 9.4 and try to start postgresql 9.4 but no luck, getting\n> below error.\n> \n> [root@ms-esmon esm-data]# su - postgres -c\n> \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n> /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n> [root@ms-esmon esm-data]# LOG:  skipping missing configuration file\n> \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n> 2018-04-16 06:52:01.546 GMT  *FATAL*:  database files are incompatible\n> with server\n> 2018-04-16 06:52:01.546 GMT  *DETAIL*:  The data directory was\n> initialized by PostgreSQL version 8.4, which is not compatible with this\n> version 9.4.9.\n\nThat's exactly what's supposed to happen. The reasons are explained in\nthe pg_upgrade documentation.\n\nBTW: Are you sure you want to go to 9.4? It is already rather outdated\nand will go out of support \"soon\" (given that you're running 8.4, I have\nto assume that your organisation requires quite some time to get an\nupgrade cycle through the red band jungle). Unless you have very good\nreasons not to, please consider going straight to 10, which will get you\nalmost 5 years of community support.\n\nBest regards,\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\nExtern im Auftrag der Hays AG\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339\n\n", "msg_date": "Mon, 16 Apr 2018 10:09:08 +0200", "msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data migration from postgres 8.4 to 9.4" }, { "msg_contents": "Thank you for detailed info. much appreciated. May i know how to install \npg_upgrade ?\n\n\nWith Best Regards\nAkshay\n\n\n\n\nFrom: \"Albin, Lloyd P\" <[email protected]>\nTo: Akshay Ballarpure <[email protected]>\nDate: 04/16/2018 08:38 PM\nSubject: RE: Data migration from postgres 8.4 to 9.4\n\n\n\nAkshay ,\n\nThere are several Official ways to upgrade PostgreSQL.\n\n1) Use pg_upgrade (Faster) Postgres 8.4 to Postgres 9.4. Use the Postgres \n9.4 version of pg_upgrade.\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\n\n2) Dump and Restore your database into a new server (Slower) Postgres 8.4 \nto Postgres 9.4 Use pg_dump with pg_restore or pg_dumpall with psql from \nPostgres 9.4 against your Postgres 8.4 Server. You need to use this method \nif you wish to change your initdb settings, such as the default encoding, \nturn on checksums, etc.\nhttps://www.postgresql.org/docs/9.4/static/app-pgdump.html\nhttps://www.postgresql.org/docs/9.4/static/app-pgrestore.html\nhttps://www.postgresql.org/docs/9.4/static/app-pg-dumpall.html\n\n3) Swap out the binaries. This can only be done using the same Postgres \nversion (8.4.x or 9.4.x or 10.x) This means that you can upgrade from \n9.4.9 to 9.4.12 by just swapping out the binaries.\n\n4) Unofficially you can use things like slony, etc to do a live migration \nwithout downtime.\n\nLloyd\n\n\n\nFrom: Akshay Ballarpure [[email protected]]\nSent: Monday, April 16, 2018 12:03 AM\nSubject: Data migration from postgres 8.4 to 9.4\n\nHello, \nI need help in using postgresql 8.4 data in postgres 9.4 version. Do I \nneed to run any tool to achieve the same? 
\n\nSteps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4 \ninstance to 9.4 and try to start postgresql 9.4 but no luck, getting below \nerror. \n\n[root@ms-esmon esm-data]# su - postgres -c \n\"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data/ 2>&1 &\" \n[root@ms-esmon esm-data]# LOG: skipping missing configuration file \n\"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\" \n2018-04-16 06:52:01.546 GMT FATAL: database files are incompatible with \nserver \n2018-04-16 06:52:01.546 GMT DETAIL: The data directory was initialized \nby PostgreSQL version 8.4, which is not compatible with this version \n9.4.9. \n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. IT Services\n Business Solutions\n Consulting\n____________________________________________\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\nThank you for detailed info. much appreciated.\nMay i know how to install pg_upgrade ?\n\n\nWith Best Regards\nAkshay\n\n\n\n\nFrom:      \n \"Albin, Lloyd\nP\" <[email protected]>\nTo:      \n Akshay Ballarpure <[email protected]>\nDate:      \n 04/16/2018 08:38 PM\nSubject:    \n   RE: Data migration\nfrom postgres 8.4 to 9.4\n\n\n\n\nAkshay ,\n\nThere are several Official ways to upgrade PostgreSQL.\n\n1) Use pg_upgrade (Faster) Postgres 8.4 to Postgres 9.4. Use the Postgres\n9.4 version of pg_upgrade.\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\n\n2) Dump and Restore your database into a new server (Slower) Postgres 8.4\nto Postgres 9.4 Use pg_dump with pg_restore or pg_dumpall with psql from\nPostgres 9.4 against your Postgres 8.4 Server. You need to use this method\nif you wish to change your initdb settings, such as the default encoding,\nturn on checksums, etc.\nhttps://www.postgresql.org/docs/9.4/static/app-pgdump.html\nhttps://www.postgresql.org/docs/9.4/static/app-pgrestore.html\nhttps://www.postgresql.org/docs/9.4/static/app-pg-dumpall.html\n\n3) Swap out the binaries. This can only be done using the same Postgres\nversion (8.4.x or 9.4.x or 10.x) This means that you can upgrade from 9.4.9\nto 9.4.12 by just swapping out the binaries.\n\n4) Unofficially you can use things like slony, etc to do a live migration\nwithout downtime.\n\nLloyd\n\n\n\n\nFrom: Akshay Ballarpure [[email protected]]\nSent: Monday, April 16, 2018 12:03 AM\nSubject: Data migration from postgres 8.4 to 9.4\n\nHello, \nI need help in using postgresql 8.4 data in postgres 9.4 version. 
Do I\nneed to run any tool to achieve the same?\n\n\nSteps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4 instance\nto 9.4 and try to start postgresql 9.4 but no luck, getting below error.\n\n\n[root@ms-esmon esm-data]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n\n[root@ms-esmon esm-data]# LOG:  skipping missing configuration file\n\"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n\n2018-04-16 06:52:01.546 GMT  FATAL:\n database files are incompatible with server\n\n2018-04-16 06:52:01.546 GMT  DETAIL:\n The data directory was initialized by PostgreSQL version 8.4, which\nis not compatible with this version 9.4.9.\n\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty.        IT Services\n                    \n  Business Solutions\n                    \n  Consulting\n____________________________________________\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Tue, 17 Apr 2018 17:54:12 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Data migration from postgres 8.4 to 9.4" }, { "msg_contents": "Hi,\r\n\r\npg_upgrade does not need installation, it comes with a PostgreSQL installation.\r\n\r\nYou will find it in the bin directory of your 9.4 PostgreSQL installation.\r\n\r\n\r\nBest Regards,\r\n\r\nNawaz Ahmed\r\nSoftware Development Engineer\r\n\r\nFujitsu Australia Software Technology Pty Ltd\r\n14 Rodborough Road, Frenchs Forest NSW 2086, Australia\r\nT +61 2 9452 9027\r\[email protected]<mailto:[email protected]>\r\nfastware.com.au<http://fastware.com.au/>\r\n\r\n\r\n\r\nFrom: Akshay Ballarpure [mailto:[email protected]]\r\nSent: Tuesday, 17 April 2018 10:24 PM\r\nTo: Albin, Lloyd P <[email protected]>; [email protected]; [email protected]\r\nSubject: RE: Data migration from postgres 8.4 to 9.4\r\n\r\nThank you for detailed info. much appreciated. May i know how to install pg_upgrade ?\r\n\r\n\r\nWith Best Regards\r\nAkshay\r\n\r\n\r\n\r\n\r\nFrom: \"Albin, Lloyd P\" <[email protected]<mailto:[email protected]>>\r\nTo: Akshay Ballarpure <[email protected]<mailto:[email protected]>>\r\nDate: 04/16/2018 08:38 PM\r\nSubject: RE: Data migration from postgres 8.4 to 9.4\r\n________________________________\r\n\r\n\r\n\r\nAkshay ,\r\n\r\nThere are several Official ways to upgrade PostgreSQL.\r\n\r\n1) Use pg_upgrade (Faster) Postgres 8.4 to Postgres 9.4. Use the Postgres 9.4 version of pg_upgrade.\r\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\r\n\r\n2) Dump and Restore your database into a new server (Slower) Postgres 8.4 to Postgres 9.4 Use pg_dump with pg_restore or pg_dumpall with psql from Postgres 9.4 against your Postgres 8.4 Server. 
You need to use this method if you wish to change your initdb settings, such as the default encoding, turn on checksums, etc.\r\nhttps://www.postgresql.org/docs/9.4/static/app-pgdump.html\r\nhttps://www.postgresql.org/docs/9.4/static/app-pgrestore.html\r\nhttps://www.postgresql.org/docs/9.4/static/app-pg-dumpall.html\r\n\r\n3) Swap out the binaries. This can only be done using the same Postgres version (8.4.x or 9.4.x or 10.x) This means that you can upgrade from 9.4.9 to 9.4.12 by just swapping out the binaries.\r\n\r\n4) Unofficially you can use things like slony, etc to do a live migration without downtime.\r\n\r\nLloyd\r\n\r\n________________________________\r\n\r\nFrom: Akshay Ballarpure [[email protected]]\r\nSent: Monday, April 16, 2018 12:03 AM\r\nSubject: Data migration from postgres 8.4 to 9.4\r\n\r\nHello,\r\nI need help in using postgresql 8.4 data in postgres 9.4 version. Do I need to run any tool to achieve the same?\r\n\r\nSteps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4 instance to 9.4 and try to start postgresql 9.4 but no luck, getting below error.\r\n\r\n[root@ms-esmon esm-data]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\r\n[root@ms-esmon esm-data]# LOG: skipping missing configuration file \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\r\n2018-04-16 06:52:01.546 GMT FATAL: database files are incompatible with server\r\n2018-04-16 06:52:01.546 GMT DETAIL: The data directory was initialized by PostgreSQL version 8.4, which is not compatible with this version 9.4.9.\r\n\r\n\r\nWith Best Regards\r\nAkshay\r\nEricsson OSS MON\r\nTata Consultancy Services\r\nMailto: [email protected]<mailto:[email protected]>\r\nWebsite: http://www.tcs.com<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.tcs.com_&d=DwMBAg&c=eRAMFD45gAfqt84VtBcfhQ&r=_Ld6CwmrKpJ5kYWOAdC16g&m=cDYWxCJTTi_EPg44JRzRSlNIN2t8gWyoEj7pBPTEd1g&s=DwZUm6m0lu4rIIDh08es1EVB9SR0Bq52G1F9-qPmf5k&e=>\r\n____________________________________________\r\nExperience certainty. IT Services\r\n Business Solutions\r\n Consulting\r\n____________________________________________\r\n=====-----=====-----=====\r\nNotice: The information contained in this e-mail\r\nmessage and/or attachments to it may contain\r\nconfidential or privileged information. If you are\r\nnot the intended recipient, any dissemination, use,\r\nreview, distribution, printing or copying of the\r\ninformation contained in this e-mail message\r\nand/or attachments to it are strictly prohibited. If\r\nyou have received this communication in error,\r\nplease notify us by reply e-mail or telephone and\r\nimmediately and permanently delete the message\r\nand any attachments. Thank you\r\nDisclaimer\r\n\r\nThe information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. 
If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.\r\n\r\n\r\nWhereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.\r\n\r\n\r\nIf you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email [email protected]\r\n\n\n\n\n\n\n\n\n\nHi,\n \npg_upgrade does not need installation, it comes with a PostgreSQL installation.\r\n\n \nYou will find it in  the bin directory of your 9.4 PostgreSQL installation.\n \n \nBest Regards,\n \nNawaz Ahmed\r\nSoftware Development Engineer\n\r\nFujitsu Australia Software Technology Pty Ltd\r\n14 Rodborough Road, Frenchs Forest NSW 2086, Australia\nT +61 2 9452 9027 \[email protected]\nfastware.com.au\n\n\n\n \nFrom: Akshay Ballarpure [mailto:[email protected]]\r\n\nSent: Tuesday, 17 April 2018 10:24 PM\nTo: Albin, Lloyd P <[email protected]>; [email protected]; [email protected]\nSubject: RE: Data migration from postgres 8.4 to 9.4\n \nThank you for detailed info. much appreciated. May i know how to install pg_upgrade ?\n\n\n\nWith Best Regards\r\nAkshay\n\n\n\n\nFrom:        \"Albin, Lloyd P\" <[email protected]>\n\nTo:        Akshay Ballarpure <[email protected]>\n\nDate:        04/16/2018 08:38 PM\n\nSubject:        RE: Data migration from postgres 8.4 to 9.4\n\n\n\n\n\n\n\nAkshay ,\n\r\nThere are several Official ways to upgrade PostgreSQL.\n\r\n1) Use pg_upgrade (Faster) Postgres 8.4 to Postgres 9.4. Use the Postgres 9.4 version of pg_upgrade.\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\n\r\n2) Dump and Restore your database into a new server (Slower) Postgres 8.4 to Postgres 9.4 Use pg_dump with pg_restore or pg_dumpall with psql from Postgres 9.4 against your Postgres 8.4 Server. You need to use this method if you wish to change your initdb settings,\r\n such as the default encoding, turn on checksums, etc.\nhttps://www.postgresql.org/docs/9.4/static/app-pgdump.html\nhttps://www.postgresql.org/docs/9.4/static/app-pgrestore.html\nhttps://www.postgresql.org/docs/9.4/static/app-pg-dumpall.html\n\r\n3) Swap out the binaries. This can only be done using the same Postgres version (8.4.x or 9.4.x or 10.x) This means that you can upgrade from 9.4.9 to 9.4.12 by just swapping out the binaries.\n\r\n4) Unofficially you can use things like slony, etc to do a live migration without downtime.\n\r\nLloyd\n\n\n\n\n\n\nFrom: Akshay Ballarpure [[email protected]]\r\nSent: Monday, April 16, 2018 12:03 AM\r\nSubject: Data migration from postgres 8.4 to 9.4\n\nHello, \r\nI need help in using postgresql 8.4 data in postgres 9.4 version. 
Do I need to run any tool to achieve the same?\n\n\r\nSteps i followed is ran postgresql 8.4 and 9.4, copied data from 8.4 instance to 9.4 and try to start postgresql 9.4 but no luck, getting below error.\n\n\r\n[root@ms-esmon esm-data]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n\r\n[root@ms-esmon esm-data]# LOG:  skipping missing configuration file \"/var/ericsson/esm-data/postgresql-data/postgresql.auto.conf\"\n\r\n2018-04-16 06:52:01.546 GMT  FATAL:  database files are incompatible with server\n\r\n2018-04-16 06:52:01.546 GMT  DETAIL:  The data directory was initialized by PostgreSQL version 8.4, which is not compatible with this version 9.4.9.\n\n\n\r\nWith Best Regards\r\nAkshay\r\nEricsson OSS MON\r\nTata Consultancy Services\r\nMailto: [email protected]\r\nWebsite: http://www.tcs.com\r\n____________________________________________\r\nExperience certainty.        IT Services\r\n                      Business Solutions\r\n                      Consulting\r\n____________________________________________ \r\n=====-----=====-----=====\r\nNotice: The information contained in this e-mail\r\nmessage and/or attachments to it may contain \r\nconfidential or privileged information. If you are \r\nnot the intended recipient, any dissemination, use, \r\nreview, distribution, printing or copying of the \r\ninformation contained in this e-mail message \r\nand/or attachments to it are strictly prohibited. If \r\nyou have received this communication in error, \r\nplease notify us by reply e-mail or telephone and \r\nimmediately and permanently delete the message \r\nand any attachments. Thank you \n\nDisclaimer\nThe information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified\r\n that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document\r\n and all copies thereof.\n\nWhereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu\r\n Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication\r\n or any files attached.\n\nIf you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email [email protected]", "msg_date": "Wed, 18 Apr 2018 08:06:15 +0000", "msg_from": "\"Ahmed, Nawaz\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Data migration from postgres 8.4 to 9.4" } ]
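For reference, a minimal sketch of the two paths described in the thread above. The 9.4 bin directory is the one shown in the error transcript; the 8.4 bin directory, both data directory paths and the ports are placeholders (assumptions, not taken from the thread), and the commands are meant to be run as the postgres OS user.

# Check which major version initialized a given data directory
cat /var/ericsson/esm-data/postgresql-data/PG_VERSION

# Path 1: pg_upgrade. It is not installed separately; it ships in the 9.4 bin
# directory. -b/-d point at the old 8.4 binaries and data, -B/-D at the new
# 9.4 binaries and a data directory created beforehand with the 9.4 initdb.
# The old bindir and both data directories below are assumed paths.
/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \
  -b /usr/bin \
  -B /opt/rh/rh-postgresql94/root/usr/bin \
  -d /var/lib/pgsql/data-8.4 \
  -D /var/ericsson/esm-data/postgresql-data-9.4 \
  --check
# Re-run without --check once the check pass comes back clean.

# Path 2: dump the running 8.4 cluster with the 9.4 client tools and restore
# into a freshly initdb'd 9.4 cluster (the ports are assumptions).
/opt/rh/rh-postgresql94/root/usr/bin/pg_dumpall -p 5432 > all.sql
/opt/rh/rh-postgresql94/root/usr/bin/psql -p 5433 -d postgres -f all.sql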
[ { "msg_contents": "*A description of what you are trying to achieve and what results you\nexpect.:*\n\nMy end goal was to test the execution time difference between using an\nIF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and\nwhen a string match was not found. My expectation was that my 2 functions\nwould behave fairly similarly, but they most certainly did not. Here are\nthe table, functions, test queries, and test query results I received, as\nwell as comments as I present the pieces and talk about the results from my\nperspective.\n\nThis is the table and data that I used for my tests. A table with 1\nmillion sequenced records. No indexing on any columns. I ran ANALYZE on\nthis table and a VACUUM on the entire database, just to be sure.\n\nCREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\nint_distinct, 'Test'::text || generate_series(0, 999999)::text AS\ntext_distinct;\n\nThese are the 2 functions that I ran my final tests with. My goal was to\ndetermine which function would perform the fastest and my expectation was\nthat they would still be somewhat close in execution time comparison.\n\n--Test Function #1\nCREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text)\n RETURNS text\n LANGUAGE 'plpgsql'\n STABLE\nAS $$\n\nBEGIN\n IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\nLOWER(p_findme)) > 0 THEN\n RETURN 'Found';\n ELSE\n RETURN 'Not Found';\n END IF;\nEND;\n$$;\n\n--Test Function #2\nCREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text)\n RETURNS text\n LANGUAGE 'plpgsql'\n STABLE\nAS $$\n\nBEGIN\n IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\nLOWER(p_findme)) THEN\n RETURN 'Found';\n ELSE\n RETURN 'Not Found';\n END IF;\nEND;\n$$;\n\nThe first thing I did was to run some baseline tests using the basic\nqueries inside of the IF() checks found in each of the functions to see how\nthe query planner handled them. I ran the following two queries.\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\nLOWER(text_distinct) = LOWER('Test5000001');\nEXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\nLOWER(text_distinct) = LOWER('Test5000001');\n\nThe execution time results and query plans for these two were very similar,\nas expected. 
In the results I can see that 2 workers were employed for\neach query plan.\n\n--Results for the SELECT COUNT(*) query.\nQUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------\nFinalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\ntime=172.105..172.105 rows=1 loops=1)\n Buffers: shared\nread=1912\n\n -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\ntime=172.020..172.099 rows=3 loops=1)\n Workers Planned:\n2\n\n Workers Launched:\n2\n\n Buffers: shared\nread=1912\n\n -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\ntime=155.123..155.123 rows=1 loops=3)\n Buffers: shared\nread=5406\n\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\nwidth=0) (actual time=155.103..155.103 rows=0 loops=3)\n Filter: (lower(text_distinct) =\n'test5000001'::text)\n\n Rows Removed by Filter:\n333333\n\n Buffers: shared\nread=5406\n\nPlanning time: 0.718\nms\n\nExecution time: 187.601 ms\n\n--Results for the SELECT 1 query.\nQUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------\nGather (cost=1000.00..13156.00 rows=5000 width=4) (actual\ntime=175.682..175.682 rows=0 loops=1)\n Workers Planned:\n2\n\n Workers Launched:\n2\n\n Buffers: shared\nread=2021\n\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\nwidth=4) (actual time=159.769..159.769 rows=0 loops=3)\n Filter: (lower(text_distinct) =\n'test5000001'::text)\n\n Rows Removed by Filter:\n333333\n\n Buffers: shared\nread=5406\n\nPlanning time: 0.874\nms\n\nExecution time: 192.045 ms\n\nAfter running these baseline tests and viewing the fairly similar results,\nright or wrong, I expected my queries that tested the functions to behave\nsimilarly. 
I started with the following query...\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\nzz_spx_ifcount_noidx('Test5000001');\n\nand I got the following \"auto_explain\" results...\n\n2018-04-16 14:57:22.624 EDT [17812] LOG: duration: 155.239 ms plan:\n Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\nLOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\ntime=155.230..155.230 rows=1 loops=1)\n Buffers: shared read=1682\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\nwidth=0) (actual time=155.222..155.222 rows=0 loops=1)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 311170\n Buffers: shared read=1682\n2018-04-16 14:57:22.624 EDT [9096] LOG: duration: 154.603 ms plan:\n Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\nLOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\ntime=154.576..154.576 rows=1 loops=1)\n Buffers: shared read=1682\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\nwidth=0) (actual time=154.570..154.570 rows=0 loops=1)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 311061\n Buffers: shared read=1682\n2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 197.260 ms plan:\n Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\nLOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n Result (cost=12661.43..12661.45 rows=1 width=1) (actual\ntime=179.561..179.561 rows=1 loops=1)\n Buffers: shared read=2042\n InitPlan 1 (returns $1)\n -> Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\ntime=179.559..179.559 rows=1 loops=1)\n Buffers: shared read=2042\n -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\ntime=179.529..179.556 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared read=2042\n -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8)\n(actual time=162.831..162.831 rows=1 loops=3)\n Buffers: shared read=5406\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\nwidth=0) (actual time=162.824..162.824 rows=0 loops=3)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 333333\n Buffers: shared read=5406\n2018-04-16 14:57:22.642 EDT [15132] CONTEXT: SQL statement \"SELECT (SELECT\nCOUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\nLOWER(p_findme)) > 0\"\n PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF\n2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 199.371 ms plan:\n Query Text: explain (analyze, buffers) select * from\nzz_spx_ifcount_noidx('Test5000001')\n Function Scan on zz_spx_ifcount_noidx (cost=0.25..0.26 rows=1 width=32)\n(actual time=199.370..199.370 rows=1 loops=1)\n Buffers: shared hit=218 read=5446\n\nHere I could see that the 2 workers were getting employed again, which is\ngreat. Just what I expected. And the execution time was in the same\nballpark as my first baseline test using just the query found inside of the\nIF() check. 199 milliseonds. 
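(As an aside, the two workers in those nested plans come from the default parallel settings; assuming nothing beyond the postgresql.conf changes listed further down was altered, the knobs that gate them can be checked with:

SHOW max_parallel_workers_per_gather;
SHOW min_parallel_table_scan_size;
SHOW parallel_setup_cost;

which should all still be at their defaults on this install.)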
Okay.\n\nI moved on to test the other function with the following query...\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\nzz_spx_ifcount_noidx('Test5000001');\n\nand I got the following \"auto_explain\" results...\n\n2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 426.279 ms plan:\n Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\nLOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n Result (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274\nrows=1 loops=1)\n Buffers: shared read=5406\n InitPlan 1 (returns $0)\n -> Seq Scan on zz_noidx1 (cost=0.00..20406.00 rows=5000 width=0)\n(actual time=426.273..426.273 rows=0 loops=1)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 1000000\n Buffers: shared read=5406\n2018-04-16 14:58:34.134 EDT [12616] CONTEXT: SQL statement \"SELECT EXISTS\n(SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\nLOWER(p_findme))\"\n PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 428.077 ms plan:\n Query Text: explain (analyze, buffers) select * from\nzz_spx_ifexists_noidx('Test5000001')\n Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1 width=32)\n(actual time=428.076..428.076 rows=1 loops=1)\n Buffers: shared hit=30 read=5438\n\nDefinitely not the execution time, or query plan, results I was expecting.\nAs we can see, no workers were employed here and my guess was that this was\nthe reason or the large execution time difference between these 2 tests?\n199 milliseconds versus 428 milliseconds, which is a big difference. Why\nare workers not being employed here like they were when I tested the query\nfound inside of the IF() check in a standalone manner? But then I ran\nanother test and the results made even less sense to me.\n\nWhen I ran the above query the first 5 times after starting my Postgres\nservice, I got the same results each time (around 428 milliseconds), but\nwhen running the query 6 or more times, the execution time jumps up to\nalmost double that. Here are the \"auto_explain\" results running this query\na 6th time...\n\n--\"auto_explain\" results after running the same query 6 or more times.\n2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 761.847 ms plan:\n Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\nLOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n Result (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843\nrows=1 loops=1)\n Buffers: shared hit=160 read=5246\n InitPlan 1 (returns $0)\n -> Seq Scan on zz_noidx1 (cost=0.00..22906.00 rows=5000 width=0)\n(actual time=761.841..761.841 rows=0 loops=1)\n Filter: (lower(text_distinct) = lower($1))\n Rows Removed by Filter: 1000000\n Buffers: shared hit=160 read=5246\n2018-04-16 15:01:51.635 EDT [12616] CONTEXT: SQL statement \"SELECT EXISTS\n(SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\nLOWER(p_findme))\"\n PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 762.156 ms plan:\n Query Text: explain (analyze, buffers) select * from\nzz_spx_ifexists_noidx('Test5000001')\n Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1 width=32)\n(actual time=762.154..762.155 rows=1 loops=1)\n Buffers: shared hit=160 read=5246\n\nAs you can see, the execution time jumps up to about 762 milliseonds. 
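The same jump can be reproduced outside of the function, assuming the same zz_noidx1 table, by preparing the equivalent statement by hand (the statement name here is only for illustration) and explaining it repeatedly:

PREPARE findme(text) AS
SELECT EXISTS (SELECT 1 FROM zz_noidx1
               WHERE LOWER(zz_noidx1.text_distinct) = LOWER($1));

-- Run this six or more times. The early executions show the folded constant
-- 'test5000001'::text in the filter; once the plan cache starts using a
-- generic plan (typically from the sixth execution on) the filter shows
-- lower($1) instead.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE findme('Test5000001');

DEALLOCATE findme;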
I\ncan see in the sequence scan node that the LOWER() function shows up on the\nright side of the equal operator, whereas in the first 5 runs of this test\nquery the plan did not show this. Why is this?\n\nI tried increasing the \"work_mem\" setting to 1GB to see if this made any\ndifference, but the results were the same.\n\nSo those were the tests that I performed and the results I received, which\nleft me with many questions. If anyone is able to help me understand this\nbehavior, I'd greatly appreciate it. This is my first post to the email\nlist, so I hope I did a good enough job providing all the information\nneeded.\n\nThanks!\nRyan\n\n*PostgreSQL version number you are running:*\n\nPostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bit\n\n*How you installed PostgreSQL:*\n\nUsing the Enterprise DB installer.\n\nI have also installed Enterprise DB's Postgres Enterprise Manager (PEM)\n7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software. The\nPEM Agent service that gets installed is currently turned off.\n\n*Changes made to the settings in the postgresql.conf file: see Server\nConfiguration for a quick way to list them all.*\n\nname |current_setting\n|source\n-----------------------------------|---------------------------------------|---------------------\napplication_name |DBeaver 5.0.3 - Main\n|session\nauto_explain.log_analyze |on\n|configuration file\nauto_explain.log_buffers |on\n|configuration file\nauto_explain.log_min_duration |0\n|configuration file\nauto_explain.log_nested_statements |on\n|configuration file\nauto_explain.log_triggers |on\n|configuration file\nclient_encoding |UTF8\n|client\nDateStyle |ISO, MDY\n|client\ndefault_text_search_config |pg_catalog.english\n|configuration file\ndynamic_shared_memory_type |windows\n|configuration file\nextra_float_digits |3\n|session\nlc_messages |English_United States.1252\n|configuration file\nlc_monetary |English_United States.1252\n|configuration file\nlc_numeric |English_United States.1252\n|configuration file\nlc_time |English_United States.1252\n|configuration file\nlisten_addresses |*\n|configuration file\nlog_destination |stderr\n|configuration file\nlog_timezone |US/Eastern\n|configuration file\nlogging_collector |on\n|configuration file\nmax_connections |100\n|configuration file\nmax_stack_depth |2MB\n|environment variable\nport |5432\n|configuration file\nshared_buffers |128MB\n|configuration file\nshared_preload_libraries |$libdir/sql-profiler.dll, auto_explain\n|configuration file\nssl |on\n|configuration file\nssl_ca_file |root.crt\n|configuration file\nssl_cert_file |server.crt\n|configuration file\nssl_crl_file |root.crl\n|configuration file\nssl_key_file |server.key\n|configuration file\nTimeZone |America/New_York\n|client\n\n*Operating system and version:*\n\nWindows 10 Pro 64-bit, Version 1709 (Build 16299.309)\n\n*Hardware:*\n\nProcessor - Intel Core i7-7820HQ @ 2.90GHz\nRAM - 16GB\nRAID? 
- No\nHard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2\n\n*What program you're using to connect to PostgreSQL:*\n\nDBeaver Community Edition v5.0.3\n\n*Is there anything relevant or unusual in the PostgreSQL server logs?:*\n\nNot that I noticed.\n\n*For questions about any kind of error:*\n\nN/A\n\n*What you were doing when the error happened / how to cause the error:*\n\nN/A\n\n*The EXACT TEXT of the error message you're getting, if there is one: (Copy\nand paste the message to the email, do not send a screenshot)*\n\nN/A\n\nA description of what you are trying to achieve and what results you expect.:My end goal was to test the execution time difference between using an IF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and when a string match was not found.  My expectation was that my 2 functions would behave fairly similarly, but they most certainly did not.  Here are the table, functions, test queries, and test query results I received, as well as comments as I present the pieces and talk about the results from my perspective.This is the table and data that I used for my tests.  A table with 1 million sequenced records.  No indexing on any columns.  I ran ANALYZE on this table and a VACUUM on the entire database, just to be sure.CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS int_distinct, 'Test'::text || generate_series(0, 999999)::text AS text_distinct;These are the 2 functions that I ran my final tests with.  My goal was to determine which function would perform the fastest and my expectation was that they would still be somewhat close in execution time comparison.--Test Function #1CREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;--Test Function #2CREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;The first thing I did was to run some baseline tests using the basic queries inside of the IF() checks found in each of the functions to see how the query planner handled them.  I ran the following two queries.EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');The execution time results and query plans for these two were very similar, as expected.  
In the results I can see that 2 workers were employed for each query plan.--Results for the SELECT COUNT(*) query.QUERY PLAN                                                                                                                              ----------------------------------------------------------------------------------------------------------------------------------------Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=172.105..172.105 rows=1 loops=1)                                Buffers: shared read=1912                                                                                                               ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=172.020..172.099 rows=3 loops=1)                                      Workers Planned: 2                                                                                                                Workers Launched: 2                                                                                                               Buffers: shared read=1912                                                                                                         ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.123..155.123 rows=1 loops=3)                        Buffers: shared read=5406                                                                                                      ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.103..155.103 rows=0 loops=3)      Filter: (lower(text_distinct) = 'test5000001'::text)                                                                     Rows Removed by Filter: 333333                                                                                           Buffers: shared read=5406                                                                                           Planning time: 0.718 ms                                                                                                                 Execution time: 187.601 ms--Results for the SELECT 1 query.QUERY PLAN                                                                                                                  ----------------------------------------------------------------------------------------------------------------------------Gather  (cost=1000.00..13156.00 rows=5000 width=4) (actual time=175.682..175.682 rows=0 loops=1)                              Workers Planned: 2                                                                                                          Workers Launched: 2                                                                                                         Buffers: shared read=2021                                                                                                   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=4) (actual time=159.769..159.769 rows=0 loops=3)   Filter: (lower(text_distinct) = 'test5000001'::text)                                                                  Rows Removed by Filter: 333333                                                                                        Buffers: shared read=5406                                                                                           Planning time: 0.874 ms                                                                                                     Execution time: 192.045 ms  After running these baseline tests and viewing the 
fairly similar results, right or wrong, I expected my queries that tested the functions to behave similarly.  I started with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:57:22.624 EDT [17812] LOG:  duration: 155.239 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.230..155.230 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.222..155.222 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311170   Buffers: shared read=16822018-04-16 14:57:22.624 EDT [9096] LOG:  duration: 154.603 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=154.576..154.576 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=154.570..154.570 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311061   Buffers: shared read=16822018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 197.260 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Result  (cost=12661.43..12661.45 rows=1 width=1) (actual time=179.561..179.561 rows=1 loops=1)   Buffers: shared read=2042   InitPlan 1 (returns $1)  ->  Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=179.559..179.559 rows=1 loops=1)     Buffers: shared read=2042     ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=179.529..179.556 rows=3 loops=1)     Workers Planned: 2     Workers Launched: 2     Buffers: shared read=2042     ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=162.831..162.831 rows=1 loops=3)        Buffers: shared read=5406        ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=162.824..162.824 rows=0 loops=3)        Filter: (lower(text_distinct) = 'test5000001'::text)        Rows Removed by Filter: 333333        Buffers: shared read=54062018-04-16 14:57:22.642 EDT [15132] CONTEXT:  SQL statement \"SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\" PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF2018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 199.371 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifcount_noidx('Test5000001') Function Scan on zz_spx_ifcount_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=199.370..199.370 rows=1 loops=1)   Buffers: shared hit=218 read=5446Here I could see that the 2 workers were getting employed again, which is great.  Just what I expected.  And the execution time was in the same ballpark as my first baseline test using just the query found inside of the IF() check.  199 milliseonds.  
Okay.I moved on to test the other function with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 426.279 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274 rows=1 loops=1)   Buffers: shared read=5406   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000 width=0) (actual time=426.273..426.273 rows=0 loops=1)     Filter: (lower(text_distinct) = 'test5000001'::text)     Rows Removed by Filter: 1000000     Buffers: shared read=54062018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 428.077 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)   Buffers: shared hit=30 read=5438Definitely not the execution time, or query plan, results I was expecting.  As we can see, no workers were employed here and my guess was that this was the reason or the large execution time difference between these 2 tests?  199 milliseconds versus 428 milliseconds, which is a big difference.  Why are workers not being employed here like they were when I tested the query found inside of the IF() check in a standalone manner?  But then I ran another test and the results made even less sense to me.When I ran the above query the first 5 times after starting my Postgres service, I got the same results each time (around 428 milliseconds), but when running the query 6 or more times, the execution time jumps up to almost double that.  Here are the \"auto_explain\" results running this query a 6th time...--\"auto_explain\" results after running the same query 6 or more times.2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 761.847 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843 rows=1 loops=1)   Buffers: shared hit=160 read=5246   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00 rows=5000 width=0) (actual time=761.841..761.841 rows=0 loops=1)     Filter: (lower(text_distinct) = lower($1))     Rows Removed by Filter: 1000000     Buffers: shared hit=160 read=52462018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 762.156 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)   Buffers: shared hit=160 read=5246As you can see, the execution time jumps up to about 762 milliseonds.  I can see in the sequence scan node that the LOWER() function shows up on the right side of the equal operator, whereas in the first 5 runs of this test query the plan did not show this.  
Why is this?I tried increasing the \"work_mem\" setting to 1GB to see if this made any difference, but the results were the same.So those were the tests that I performed and the results I received, which left me with many questions.  If anyone is able to help me understand this behavior, I'd greatly appreciate it.  This is my first post to the email list, so I hope I did a good enough job providing all the information needed.Thanks!RyanPostgreSQL version number you are running:PostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bitHow you installed PostgreSQL:Using the Enterprise DB installer.I have also installed Enterprise DB's Postgres Enterprise Manager (PEM) 7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software.  The PEM Agent service that gets installed is currently turned off.Changes made to the settings in the postgresql.conf file:  see Server Configuration for a quick way to list them all.name                               |current_setting                        |source               -----------------------------------|---------------------------------------|---------------------application_name                   |DBeaver 5.0.3 - Main                   |session              auto_explain.log_analyze           |on                                     |configuration file   auto_explain.log_buffers           |on                                     |configuration file   auto_explain.log_min_duration      |0                                      |configuration file   auto_explain.log_nested_statements |on                                     |configuration file   auto_explain.log_triggers          |on                                     |configuration file   client_encoding                    |UTF8                                   |client               DateStyle                          |ISO, MDY                               |client               default_text_search_config         |pg_catalog.english                     |configuration file   dynamic_shared_memory_type         |windows                                |configuration file   extra_float_digits                 |3                                      |session              lc_messages                        |English_United States.1252             |configuration file   lc_monetary                        |English_United States.1252             |configuration file   lc_numeric                         |English_United States.1252             |configuration file   lc_time                            |English_United States.1252             |configuration file   listen_addresses                   |*                                      |configuration file   log_destination                    |stderr                                 |configuration file   log_timezone                       |US/Eastern                             |configuration file   logging_collector                  |on                                     |configuration file   max_connections                    |100                                    |configuration file   max_stack_depth                    |2MB                                    |environment variable port                               |5432                                   |configuration file   shared_buffers                     |128MB                                  |configuration file   shared_preload_libraries           |$libdir/sql-profiler.dll, auto_explain |configuration file   ssl                                |on                                     |configuration file   
ssl_ca_file                        |root.crt                               |configuration file   ssl_cert_file                      |server.crt                             |configuration file   ssl_crl_file                       |root.crl                               |configuration file   ssl_key_file                       |server.key                             |configuration file   TimeZone                           |America/New_York                       |client               Operating system and version:Windows 10 Pro 64-bit, Version 1709 (Build 16299.309)Hardware:Processor - Intel Core i7-7820HQ @ 2.90GHzRAM - 16GBRAID? - NoHard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2What program you're using to connect to PostgreSQL:DBeaver Community Edition v5.0.3Is there anything relevant or unusual in the PostgreSQL server logs?:Not that I noticed.For questions about any kind of error:N/AWhat you were doing when the error happened / how to cause the error:N/AThe EXACT TEXT of the error message you're getting, if there is one: (Copy and paste the message to the email, do not send a screenshot)N/A", "msg_date": "Mon, 16 Apr 2018 16:42:15 -0400", "msg_from": "Hackety Man <[email protected]>", "msg_from_op": true, "msg_subject": "Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "Hi\n\n2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected]>:\n\n> *A description of what you are trying to achieve and what results you\n> expect.:*\n>\n> My end goal was to test the execution time difference between using an\n> IF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and\n> when a string match was not found. My expectation was that my 2 functions\n> would behave fairly similarly, but they most certainly did not. Here are\n> the table, functions, test queries, and test query results I received, as\n> well as comments as I present the pieces and talk about the results from my\n> perspective.\n>\n> This is the table and data that I used for my tests. A table with 1\n> million sequenced records. No indexing on any columns. I ran ANALYZE on\n> this table and a VACUUM on the entire database, just to be sure.\n>\n> CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\n> int_distinct, 'Test'::text || generate_series(0, 999999)::text AS\n> text_distinct;\n>\n> These are the 2 functions that I ran my final tests with. My goal was to\n> determine which function would perform the fastest and my expectation was\n> that they would still be somewhat close in execution time comparison.\n>\n> --Test Function #1\n> CREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text)\n> RETURNS text\n> LANGUAGE 'plpgsql'\n> STABLE\n> AS $$\n>\n> BEGIN\n> IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n> LOWER(p_findme)) > 0 THEN\n> RETURN 'Found';\n> ELSE\n> RETURN 'Not Found';\n> END IF;\n> END;\n> $$;\n>\n> --Test Function #2\n> CREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text)\n> RETURNS text\n> LANGUAGE 'plpgsql'\n> STABLE\n> AS $$\n>\n> BEGIN\n> IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n> LOWER(p_findme)) THEN\n> RETURN 'Found';\n> ELSE\n> RETURN 'Not Found';\n> END IF;\n> END;\n> $$;\n>\n> The first thing I did was to run some baseline tests using the basic\n> queries inside of the IF() checks found in each of the functions to see how\n> the query planner handled them. 
I ran the following two queries.\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\n> LOWER(text_distinct) = LOWER('Test5000001');\n> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(text_distinct) = LOWER('Test5000001');\n>\n> The execution time results and query plans for these two were very\n> similar, as expected. In the results I can see that 2 workers were\n> employed for each query plan.\n>\n> --Results for the SELECT COUNT(*) query.\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ----------------\n> Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\n> time=172.105..172.105 rows=1 loops=1)\n> Buffers: shared read=1912\n>\n>\n> -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\n> time=172.020..172.099 rows=3 loops=1)\n> Workers Planned: 2\n>\n>\n> Workers Launched: 2\n>\n>\n> Buffers: shared read=1912\n>\n>\n> -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n> time=155.123..155.123 rows=1 loops=3)\n> Buffers: shared read=5406\n>\n>\n> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n> width=0) (actual time=155.103..155.103 rows=0 loops=3)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n>\n> Rows Removed by Filter: 333333\n>\n> Buffers: shared read=5406\n>\n> Planning time: 0.718 ms\n>\n>\n> Execution time: 187.601 ms\n>\n> --Results for the SELECT 1 query.\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ----------------------------------------------------------------\n> Gather (cost=1000.00..13156.00 rows=5000 width=4) (actual\n> time=175.682..175.682 rows=0 loops=1)\n> Workers Planned: 2\n>\n>\n> Workers Launched: 2\n>\n> Buffers: shared read=2021\n>\n>\n> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n> width=4) (actual time=159.769..159.769 rows=0 loops=3)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n>\n> Rows Removed by Filter: 333333\n>\n> Buffers: shared read=5406\n>\n> Planning time: 0.874 ms\n>\n> Execution time: 192.045 ms\n>\n> After running these baseline tests and viewing the fairly similar results,\n> right or wrong, I expected my queries that tested the functions to behave\n> similarly. 
I started with the following query...\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('\n> Test5000001');\n>\n> and I got the following \"auto_explain\" results...\n>\n> 2018-04-16 14:57:22.624 EDT [17812] LOG: duration: 155.239 ms plan:\n> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n> time=155.230..155.230 rows=1 loops=1)\n> Buffers: shared read=1682\n> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n> width=0) (actual time=155.222..155.222 rows=0 loops=1)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n> Rows Removed by Filter: 311170\n> Buffers: shared read=1682\n> 2018-04-16 14:57:22.624 EDT [9096] LOG: duration: 154.603 ms plan:\n> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n> time=154.576..154.576 rows=1 loops=1)\n> Buffers: shared read=1682\n> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n> width=0) (actual time=154.570..154.570 rows=0 loops=1)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n> Rows Removed by Filter: 311061\n> Buffers: shared read=1682\n> 2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 197.260 ms plan:\n> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n> Result (cost=12661.43..12661.45 rows=1 width=1) (actual\n> time=179.561..179.561 rows=1 loops=1)\n> Buffers: shared read=2042\n> InitPlan 1 (returns $1)\n> -> Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\n> time=179.559..179.559 rows=1 loops=1)\n> Buffers: shared read=2042\n> -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\n> time=179.529..179.556 rows=3 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared read=2042\n> -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8)\n> (actual time=162.831..162.831 rows=1 loops=3)\n> Buffers: shared read=5406\n> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n> width=0) (actual time=162.824..162.824 rows=0 loops=3)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n> Rows Removed by Filter: 333333\n> Buffers: shared read=5406\n> 2018-04-16 14:57:22.642 EDT [15132] CONTEXT: SQL statement \"SELECT\n> (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n> LOWER(p_findme)) > 0\"\n> PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF\n> 2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 199.371 ms plan:\n> Query Text: explain (analyze, buffers) select * from\n> zz_spx_ifcount_noidx('Test5000001')\n> Function Scan on zz_spx_ifcount_noidx (cost=0.25..0.26 rows=1 width=32)\n> (actual time=199.370..199.370 rows=1 loops=1)\n> Buffers: shared hit=218 read=5446\n>\n> Here I could see that the 2 workers were getting employed again, which is\n> great. Just what I expected. And the execution time was in the same\n> ballpark as my first baseline test using just the query found inside of the\n> IF() check. 199 milliseonds. 
Okay.\n>\n> I moved on to test the other function with the following query...\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('\n> Test5000001');\n>\n> and I got the following \"auto_explain\" results...\n>\n> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 426.279 ms plan:\n> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n> Result (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274\n> rows=1 loops=1)\n> Buffers: shared read=5406\n> InitPlan 1 (returns $0)\n> -> Seq Scan on zz_noidx1 (cost=0.00..20406.00 rows=5000 width=0)\n> (actual time=426.273..426.273 rows=0 loops=1)\n> Filter: (lower(text_distinct) = 'test5000001'::text)\n> Rows Removed by Filter: 1000000\n> Buffers: shared read=5406\n> 2018-04-16 14:58:34.134 EDT [12616] CONTEXT: SQL statement \"SELECT EXISTS\n> (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n> LOWER(p_findme))\"\n> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 428.077 ms plan:\n> Query Text: explain (analyze, buffers) select * from\n> zz_spx_ifexists_noidx('Test5000001')\n> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1 width=32)\n> (actual time=428.076..428.076 rows=1 loops=1)\n> Buffers: shared hit=30 read=5438\n>\n> Definitely not the execution time, or query plan, results I was\n> expecting. As we can see, no workers were employed here and my guess was\n> that this was the reason or the large execution time difference between\n> these 2 tests? 199 milliseconds versus 428 milliseconds, which is a big\n> difference. Why are workers not being employed here like they were when I\n> tested the query found inside of the IF() check in a standalone manner?\n> But then I ran another test and the results made even less sense to me.\n>\n> When I ran the above query the first 5 times after starting my Postgres\n> service, I got the same results each time (around 428 milliseconds), but\n> when running the query 6 or more times, the execution time jumps up to\n> almost double that. Here are the \"auto_explain\" results running this query\n> a 6th time...\n>\n> --\"auto_explain\" results after running the same query 6 or more times.\n> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 761.847 ms plan:\n> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n> Result (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843\n> rows=1 loops=1)\n> Buffers: shared hit=160 read=5246\n> InitPlan 1 (returns $0)\n> -> Seq Scan on zz_noidx1 (cost=0.00..22906.00 rows=5000 width=0)\n> (actual time=761.841..761.841 rows=0 loops=1)\n> Filter: (lower(text_distinct) = lower($1))\n> Rows Removed by Filter: 1000000\n> Buffers: shared hit=160 read=5246\n> 2018-04-16 15:01:51.635 EDT [12616] CONTEXT: SQL statement \"SELECT EXISTS\n> (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n> LOWER(p_findme))\"\n> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 762.156 ms plan:\n> Query Text: explain (analyze, buffers) select * from\n> zz_spx_ifexists_noidx('Test5000001')\n> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1 width=32)\n> (actual time=762.154..762.155 rows=1 loops=1)\n> Buffers: shared hit=160 read=5246\n>\n> As you can see, the execution time jumps up to about 762 milliseonds. 
I\n> can see in the sequence scan node that the LOWER() function shows up on the\n> right side of the equal operator, whereas in the first 5 runs of this test\n> query the plan did not show this. Why is this?\n>\n> I tried increasing the \"work_mem\" setting to 1GB to see if this made any\n> difference, but the results were the same.\n>\n> So those were the tests that I performed and the results I received, which\n> left me with many questions. If anyone is able to help me understand this\n> behavior, I'd greatly appreciate it. This is my first post to the email\n> list, so I hope I did a good enough job providing all the information\n> needed.\n>\n> Thanks!\n> Ryan\n>\n> *PostgreSQL version number you are running:*\n>\n> PostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bit\n>\n> *How you installed PostgreSQL:*\n>\n> Using the Enterprise DB installer.\n>\n> I have also installed Enterprise DB's Postgres Enterprise Manager (PEM)\n> 7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software. The\n> PEM Agent service that gets installed is currently turned off.\n>\n> *Changes made to the settings in the postgresql.conf file: see Server\n> Configuration for a quick way to list them all.*\n>\n> name |current_setting\n> |source\n> -----------------------------------|------------------------\n> ---------------|---------------------\n> application_name |DBeaver 5.0.3 -\n> Main |session\n> auto_explain.log_analyze |on\n> |configuration file\n> auto_explain.log_buffers |on\n> |configuration file\n> auto_explain.log_min_duration |0\n> |configuration file\n> auto_explain.log_nested_statements |on\n> |configuration file\n> auto_explain.log_triggers |on\n> |configuration file\n> client_encoding |UTF8\n> |client\n> DateStyle |ISO, MDY\n> |client\n> default_text_search_config |pg_catalog.english\n> |configuration file\n> dynamic_shared_memory_type |windows\n> |configuration file\n> extra_float_digits |3\n> |session\n> lc_messages |English_United\n> States.1252 |configuration file\n> lc_monetary |English_United\n> States.1252 |configuration file\n> lc_numeric |English_United\n> States.1252 |configuration file\n> lc_time |English_United\n> States.1252 |configuration file\n> listen_addresses |*\n> |configuration file\n> log_destination |stderr\n> |configuration file\n> log_timezone |US/Eastern\n> |configuration file\n> logging_collector |on\n> |configuration file\n> max_connections |100\n> |configuration file\n> max_stack_depth |2MB\n> |environment variable\n> port |5432\n> |configuration file\n> shared_buffers |128MB\n> |configuration file\n> shared_preload_libraries |$libdir/sql-profiler.dll,\n> auto_explain |configuration file\n> ssl |on\n> |configuration file\n> ssl_ca_file |root.crt\n> |configuration file\n> ssl_cert_file |server.crt\n> |configuration file\n> ssl_crl_file |root.crl\n> |configuration file\n> ssl_key_file |server.key\n> |configuration file\n> TimeZone |America/New_York\n> |client\n>\n> *Operating system and version:*\n>\n> Windows 10 Pro 64-bit, Version 1709 (Build 16299.309)\n>\n> *Hardware:*\n>\n> Processor - Intel Core i7-7820HQ @ 2.90GHz\n> RAM - 16GB\n> RAID? 
- No\n> Hard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2\n>\n> *What program you're using to connect to PostgreSQL:*\n>\n> DBeaver Community Edition v5.0.3\n>\n> *Is there anything relevant or unusual in the PostgreSQL server logs?:*\n>\n> Not that I noticed.\n>\n> *For questions about any kind of error:*\n>\n> N/A\n>\n> *What you were doing when the error happened / how to cause the error:*\n>\n> N/A\n>\n> *The EXACT TEXT of the error message you're getting, if there is one:\n> (Copy and paste the message to the email, do not send a screenshot)*\n>\n> N/A\n>\n>\nA support of parallel query execution is not complete - it doesn't work in\nPostgreSQL 11 too. So although EXISTS variant can be faster (but can be -\nthe worst case of EXISTS is same like COUNT), then due disabled parallel\nexecution the COUNT(*) is faster now. It is unfortunate, because I believe\nso this issue will be fixed in few years.\n\nRegards\n\nPavel\n\nHi2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected]>:A description of what you are trying to achieve and what results you expect.:My end goal was to test the execution time difference between using an IF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and when a string match was not found.  My expectation was that my 2 functions would behave fairly similarly, but they most certainly did not.  Here are the table, functions, test queries, and test query results I received, as well as comments as I present the pieces and talk about the results from my perspective.This is the table and data that I used for my tests.  A table with 1 million sequenced records.  No indexing on any columns.  I ran ANALYZE on this table and a VACUUM on the entire database, just to be sure.CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS int_distinct, 'Test'::text || generate_series(0, 999999)::text AS text_distinct;These are the 2 functions that I ran my final tests with.  My goal was to determine which function would perform the fastest and my expectation was that they would still be somewhat close in execution time comparison.--Test Function #1CREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;--Test Function #2CREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;The first thing I did was to run some baseline tests using the basic queries inside of the IF() checks found in each of the functions to see how the query planner handled them.  I ran the following two queries.EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');The execution time results and query plans for these two were very similar, as expected.  
In the results I can see that 2 workers were employed for each query plan.--Results for the SELECT COUNT(*) query.QUERY PLAN                                                                                                                              ----------------------------------------------------------------------------------------------------------------------------------------Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=172.105..172.105 rows=1 loops=1)                                Buffers: shared read=1912                                                                                                               ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=172.020..172.099 rows=3 loops=1)                                      Workers Planned: 2                                                                                                                Workers Launched: 2                                                                                                               Buffers: shared read=1912                                                                                                         ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.123..155.123 rows=1 loops=3)                        Buffers: shared read=5406                                                                                                      ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.103..155.103 rows=0 loops=3)      Filter: (lower(text_distinct) = 'test5000001'::text)                                                                     Rows Removed by Filter: 333333                                                                                           Buffers: shared read=5406                                                                                           Planning time: 0.718 ms                                                                                                                 Execution time: 187.601 ms--Results for the SELECT 1 query.QUERY PLAN                                                                                                                  ----------------------------------------------------------------------------------------------------------------------------Gather  (cost=1000.00..13156.00 rows=5000 width=4) (actual time=175.682..175.682 rows=0 loops=1)                              Workers Planned: 2                                                                                                          Workers Launched: 2                                                                                                         Buffers: shared read=2021                                                                                                   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=4) (actual time=159.769..159.769 rows=0 loops=3)   Filter: (lower(text_distinct) = 'test5000001'::text)                                                                  Rows Removed by Filter: 333333                                                                                        Buffers: shared read=5406                                                                                           Planning time: 0.874 ms                                                                                                     Execution time: 192.045 ms  After running these baseline tests and viewing the 
fairly similar results, right or wrong, I expected my queries that tested the functions to behave similarly.  I started with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:57:22.624 EDT [17812] LOG:  duration: 155.239 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.230..155.230 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.222..155.222 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311170   Buffers: shared read=16822018-04-16 14:57:22.624 EDT [9096] LOG:  duration: 154.603 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=154.576..154.576 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=154.570..154.570 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311061   Buffers: shared read=16822018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 197.260 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Result  (cost=12661.43..12661.45 rows=1 width=1) (actual time=179.561..179.561 rows=1 loops=1)   Buffers: shared read=2042   InitPlan 1 (returns $1)  ->  Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=179.559..179.559 rows=1 loops=1)     Buffers: shared read=2042     ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=179.529..179.556 rows=3 loops=1)     Workers Planned: 2     Workers Launched: 2     Buffers: shared read=2042     ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=162.831..162.831 rows=1 loops=3)        Buffers: shared read=5406        ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=162.824..162.824 rows=0 loops=3)        Filter: (lower(text_distinct) = 'test5000001'::text)        Rows Removed by Filter: 333333        Buffers: shared read=54062018-04-16 14:57:22.642 EDT [15132] CONTEXT:  SQL statement \"SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\" PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF2018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 199.371 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifcount_noidx('Test5000001') Function Scan on zz_spx_ifcount_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=199.370..199.370 rows=1 loops=1)   Buffers: shared hit=218 read=5446Here I could see that the 2 workers were getting employed again, which is great.  Just what I expected.  And the execution time was in the same ballpark as my first baseline test using just the query found inside of the IF() check.  199 milliseonds.  
Okay.I moved on to test the other function with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 426.279 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274 rows=1 loops=1)   Buffers: shared read=5406   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000 width=0) (actual time=426.273..426.273 rows=0 loops=1)     Filter: (lower(text_distinct) = 'test5000001'::text)     Rows Removed by Filter: 1000000     Buffers: shared read=54062018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 428.077 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)   Buffers: shared hit=30 read=5438Definitely not the execution time, or query plan, results I was expecting.  As we can see, no workers were employed here and my guess was that this was the reason or the large execution time difference between these 2 tests?  199 milliseconds versus 428 milliseconds, which is a big difference.  Why are workers not being employed here like they were when I tested the query found inside of the IF() check in a standalone manner?  But then I ran another test and the results made even less sense to me.When I ran the above query the first 5 times after starting my Postgres service, I got the same results each time (around 428 milliseconds), but when running the query 6 or more times, the execution time jumps up to almost double that.  Here are the \"auto_explain\" results running this query a 6th time...--\"auto_explain\" results after running the same query 6 or more times.2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 761.847 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843 rows=1 loops=1)   Buffers: shared hit=160 read=5246   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00 rows=5000 width=0) (actual time=761.841..761.841 rows=0 loops=1)     Filter: (lower(text_distinct) = lower($1))     Rows Removed by Filter: 1000000     Buffers: shared hit=160 read=52462018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 762.156 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)   Buffers: shared hit=160 read=5246As you can see, the execution time jumps up to about 762 milliseonds.  I can see in the sequence scan node that the LOWER() function shows up on the right side of the equal operator, whereas in the first 5 runs of this test query the plan did not show this.  
Why is this?I tried increasing the \"work_mem\" setting to 1GB to see if this made any difference, but the results were the same.So those were the tests that I performed and the results I received, which left me with many questions.  If anyone is able to help me understand this behavior, I'd greatly appreciate it.  This is my first post to the email list, so I hope I did a good enough job providing all the information needed.Thanks!RyanPostgreSQL version number you are running:PostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bitHow you installed PostgreSQL:Using the Enterprise DB installer.I have also installed Enterprise DB's Postgres Enterprise Manager (PEM) 7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software.  The PEM Agent service that gets installed is currently turned off.Changes made to the settings in the postgresql.conf file:  see Server Configuration for a quick way to list them all.name                               |current_setting                        |source               -----------------------------------|---------------------------------------|---------------------application_name                   |DBeaver 5.0.3 - Main                   |session              auto_explain.log_analyze           |on                                     |configuration file   auto_explain.log_buffers           |on                                     |configuration file   auto_explain.log_min_duration      |0                                      |configuration file   auto_explain.log_nested_statements |on                                     |configuration file   auto_explain.log_triggers          |on                                     |configuration file   client_encoding                    |UTF8                                   |client               DateStyle                          |ISO, MDY                               |client               default_text_search_config         |pg_catalog.english                     |configuration file   dynamic_shared_memory_type         |windows                                |configuration file   extra_float_digits                 |3                                      |session              lc_messages                        |English_United States.1252             |configuration file   lc_monetary                        |English_United States.1252             |configuration file   lc_numeric                         |English_United States.1252             |configuration file   lc_time                            |English_United States.1252             |configuration file   listen_addresses                   |*                                      |configuration file   log_destination                    |stderr                                 |configuration file   log_timezone                       |US/Eastern                             |configuration file   logging_collector                  |on                                     |configuration file   max_connections                    |100                                    |configuration file   max_stack_depth                    |2MB                                    |environment variable port                               |5432                                   |configuration file   shared_buffers                     |128MB                                  |configuration file   shared_preload_libraries           |$libdir/sql-profiler.dll, auto_explain |configuration file   ssl                                |on                                     |configuration file   
ssl_ca_file                        |root.crt                               |configuration file   ssl_cert_file                      |server.crt                             |configuration file   ssl_crl_file                       |root.crl                               |configuration file   ssl_key_file                       |server.key                             |configuration file   TimeZone                           |America/New_York                       |client               Operating system and version:Windows 10 Pro 64-bit, Version 1709 (Build 16299.309)Hardware:Processor - Intel Core i7-7820HQ @ 2.90GHzRAM - 16GBRAID? - NoHard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2What program you're using to connect to PostgreSQL:DBeaver Community Edition v5.0.3Is there anything relevant or unusual in the PostgreSQL server logs?:Not that I noticed.For questions about any kind of error:N/AWhat you were doing when the error happened / how to cause the error:N/AThe EXACT TEXT of the error message you're getting, if there is one: (Copy and paste the message to the email, do not send a screenshot)N/AA support of parallel query execution is not complete -  it doesn't work in PostgreSQL 11 too. So although EXISTS variant can be faster (but can be - the worst case of EXISTS is same like COUNT), then due disabled parallel execution the COUNT(*) is faster now. It is unfortunate, because I believe so this issue will be fixed in few years. RegardsPavel", "msg_date": "Tue, 17 Apr 2018 07:17:13 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "\n\nOn 04/16/2018 10:42 PM, Hackety Man wrote:\n> ...\n> The first thing I did was to run some baseline tests using the basic\n> queries inside of the IF() checks found in each of the functions to\n> see how the query planner handled them.  I ran the following two\n> queries.\n> \n> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\n> LOWER(text_distinct) = LOWER('Test5000001');\n> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(text_distinct) = LOWER('Test5000001');\n> \n\nThose are not the interesting plans, though. 
The EXISTS only cares about \nthe first row, so you should be looking at\n\n EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;\n\n> I moved on to test the other function with the following query...\n> \n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n> zz_spx_ifcount_noidx('Test5000001');\n> \n> and I got the following \"auto_explain\" results...\n> \n> 2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 426.279 ms \n> plan:\n>  Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>  Result  (cost=4.08..4.09 rows=1 width=1) (actual\n> time=426.274..426.274 rows=1 loops=1)\n>    Buffers: shared read=5406\n>    InitPlan 1 (returns $0)\n>   ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000\n> width=0) (actual time=426.273..426.273 rows=0 loops=1)\n>      Filter: (lower(text_distinct) = 'test5000001'::text)\n>      Rows Removed by Filter: 1000000\n>      Buffers: shared read=5406\n> 2018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement\n> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>  PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n> 2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 428.077 ms \n> plan:\n>  Query Text: explain (analyze, buffers) select * from\n> zz_spx_ifexists_noidx('Test5000001')\n>  Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n> rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)\n>    Buffers: shared hit=30 read=5438\n> \n> Definitely not the execution time, or query plan, results I was\n> expecting.  As we can see, no workers were employed here and my\n> guess was that this was the reason or the large execution time\n> difference between these 2 tests?  199 milliseconds versus 428\n> milliseconds, which is a big difference.  Why are workers not being\n> employed here like they were when I tested the query found inside of\n> the IF() check in a standalone manner?  But then I ran another test\n> and the results made even less sense to me.\n> \n\nThe plan difference is due to not realizing the EXISTS essentially \nimplies LIMIT 1. Secondly, it expects about 5000 rows matching the \ncondition, uniformly spread through the table. But it apparently takes \nmuch longer to find the first one, hence the increased duration.\n\nHow did you generate the data?\n\n> When I ran the above query the first 5 times after starting my\n> Postgres service, I got the same results each time (around 428\n> milliseconds), but when running the query 6 or more times, the\n> execution time jumps up to almost double that.  Here are the\n> \"auto_explain\" results running this query a 6th time...\n> \n\nThis is likely due to generating a generic plan after the fifth \nexecution. 
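If the slowdown at the sixth call really is the switch to a cached generic plan, one possible
workaround is to run the probe through EXECUTE: dynamic SQL in PL/pgSQL is planned with the
concrete parameter value on every call, so it never falls back to a generic plan. A minimal
sketch against the zz_noidx1 table from this thread (the function name zz_spx_ifexists_dyn is
just an illustration, not something I have benchmarked):

CREATE OR REPLACE FUNCTION zz_spx_ifexists_dyn(p_findme text)
 RETURNS text
 LANGUAGE plpgsql
 STABLE
AS $$
DECLARE
 v_found boolean;
BEGIN
 -- EXECUTE ... USING re-plans with the actual value of p_findme each
 -- call, so the LOWER(text_distinct) = LOWER($1) condition is never
 -- planned as a generic parameterized statement
 EXECUTE 'SELECT EXISTS (SELECT 1 FROM zz_noidx1
           WHERE LOWER(text_distinct) = LOWER($1))'
  INTO v_found
  USING p_findme;
 IF v_found THEN
  RETURN 'Found';
 ELSE
  RETURN 'Not Found';
 END IF;
END;
$$;

(On version 12 and later, plan_cache_mode = force_custom_plan would be another way to avoid the
generic plan, but that setting does not exist in 10.2.)
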
There seems to be only small difference in costs, though.\n\n> --\"auto_explain\" results after running the same query 6 or more\n> times.\n> 2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 761.847 ms \n> plan:\n>  Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>  Result  (cost=4.58..4.59 rows=1 width=1) (actual\n> time=761.843..761.843 rows=1 loops=1)\n>    Buffers: shared hit=160 read=5246\n>    InitPlan 1 (returns $0)\n>   ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00 rows=5000\n> width=0) (actual time=761.841..761.841 rows=0 loops=1)\n>      Filter: (lower(text_distinct) = lower($1))\n>      Rows Removed by Filter: 1000000\n>      Buffers: shared hit=160 read=5246\n> 2018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement\n> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>  PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n> 2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 762.156 ms \n> plan:\n>  Query Text: explain (analyze, buffers) select * from\n> zz_spx_ifexists_noidx('Test5000001')\n>  Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n> rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)\n>    Buffers: shared hit=160 read=5246\n> \n> As you can see, the execution time jumps up to about 762\n> milliseonds.  I can see in the sequence scan node that the LOWER()\n> function shows up on the right side of the equal operator, whereas\n> in the first 5 runs of this test query the plan did not show this. \n> Why is this?\n> \n\nIt doesn't really matter on which side it shows, it's more about a \ngeneric plan built without knowledge of the parameter value.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 17 Apr 2018 12:49:29 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "\n\nOn 04/17/2018 07:17 AM, Pavel Stehule wrote:\n> Hi\n> \n> 2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected] \n> <mailto:[email protected]>>:\n> \n> ...\n >\n> A support of parallel query execution is not complete -  it doesn't work \n> in PostgreSQL 11 too. So although EXISTS variant can be faster (but can \n> be - the worst case of EXISTS is same like COUNT), then due disabled \n> parallel execution the COUNT(*) is faster now. It is unfortunate, \n> because I believe so this issue will be fixed in few years.\n> \n\nNone of the issues seems to be particularly related to parallel query. \nIt's much more likely a general issue with planning EXISTS / LIMIT and \nnon-uniform data distribution.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 17 Apr 2018 12:52:45 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) 
and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "On Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 04/16/2018 10:42 PM, Hackety Man wrote:\n>\n>> ...\n>> The first thing I did was to run some baseline tests using the basic\n>> queries inside of the IF() checks found in each of the functions to\n>> see how the query planner handled them. I ran the following two\n>> queries.\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>>\n>>\n> Those are not the interesting plans, though. The EXISTS only cares about\n> the first row, so you should be looking at\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n> LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;\n\n\n\nOkay. I tested this query and it did return an execution time on par with\nmy tests of the \"zz_spx_ifexists_noidx\" function.\n\n\n\n>\n>\n> I moved on to test the other function with the following query...\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n>> zz_spx_ifcount_noidx('Test5000001');\n>>\n>> and I got the following \"auto_explain\" results...\n>>\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 426.279 ms\n>> plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.08..4.09 rows=1 width=1) (actual\n>> time=426.274..426.274 rows=1 loops=1)\n>> Buffers: shared read=5406\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..20406.00 rows=5000\n>> width=0) (actual time=426.273..426.273 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared read=5406\n>> 2018-04-16 14:58:34.134 EDT [12616] CONTEXT: SQL statement\n>> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 428.077 ms\n>> plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26\n>> rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)\n>> Buffers: shared hit=30 read=5438\n>>\n>> Definitely not the execution time, or query plan, results I was\n>> expecting. As we can see, no workers were employed here and my\n>> guess was that this was the reason or the large execution time\n>> difference between these 2 tests? 199 milliseconds versus 428\n>> milliseconds, which is a big difference. Why are workers not being\n>> employed here like they were when I tested the query found inside of\n>> the IF() check in a standalone manner? But then I ran another test\n>> and the results made even less sense to me.\n>>\n>>\n> The plan difference is due to not realizing the EXISTS essentially implies\n> LIMIT 1. Secondly, it expects about 5000 rows matching the condition,\n> uniformly spread through the table. But it apparently takes much longer to\n> find the first one, hence the increased duration.\n>\n\n\nAh. I did not know that. So EXISTS inherently applies a LIMIT 1, even\nthough it doesn't show in the query plan, correct? 
Is it not possible for\nparallel scans to be implemented while applying an implicit, or explicit,\nLIMIT 1?\n\n\n\n>\n> How did you generate the data?\n\n\n\nI used generate_series() to create 1 million records in sequence at the\nsame time that I created the table using the following script...\n\nCREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\nint_distinct, 'Test'::text || generate_series(0, 999999)::text AS\ntext_distinct;\n\n\n>\n> When I ran the above query the first 5 times after starting my\n>> Postgres service, I got the same results each time (around 428\n>> milliseconds), but when running the query 6 or more times, the\n>> execution time jumps up to almost double that. Here are the\n>> \"auto_explain\" results running this query a 6th time...\n>>\n>>\n> This is likely due to generating a generic plan after the fifth execution.\n> There seems to be only small difference in costs, though.\n>\n>\n> --\"auto_explain\" results after running the same query 6 or more\n>> times.\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 761.847 ms\n>> plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.58..4.59 rows=1 width=1) (actual\n>> time=761.843..761.843 rows=1 loops=1)\n>> Buffers: shared hit=160 read=5246\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..22906.00 rows=5000\n>> width=0) (actual time=761.841..761.841 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = lower($1))\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared hit=160 read=5246\n>> 2018-04-16 15:01:51.635 EDT [12616] CONTEXT: SQL statement\n>> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 762.156 ms\n>> plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26\n>> rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)\n>> Buffers: shared hit=160 read=5246\n>>\n>> As you can see, the execution time jumps up to about 762\n>> milliseonds. I can see in the sequence scan node that the LOWER()\n>> function shows up on the right side of the equal operator, whereas\n>> in the first 5 runs of this test query the plan did not show this.\n>> Why is this?\n>>\n>>\n> It doesn't really matter on which side it shows, it's more about a generic\n> plan built without knowledge of the parameter value.\n>\n\n\nRight. I was more wondering why it switched over to a generic plan, as\nyou've stated, like clockwork starting with the 6th execution run.\n\n\n\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nOn Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra <[email protected]> wrote:\n\nOn 04/16/2018 10:42 PM, Hackety Man wrote:\n\n...\n    The first thing I did was to run some baseline tests using the basic\n    queries inside of the IF() checks found in each of the functions to\n    see how the query planner handled them.  
I ran the following two\n    queries.\n\n        EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\n        LOWER(text_distinct) = LOWER('Test5000001');\n        EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n        LOWER(text_distinct) = LOWER('Test5000001');\n\n\n\nThose are not the interesting plans, though. The EXISTS only cares about the first row, so you should be looking at\n\n    EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n    LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;Okay.  I tested this query and it did return an execution time on par with my tests of the \"zz_spx_ifexists_noidx\" function. \n\n\n    I moved on to test the other function with the following query...\n\n        EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n        zz_spx_ifcount_noidx('Test5000001');\n\n    and I got the following \"auto_explain\" results...\n\n        2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 426.279 ms         plan:\n          Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n        LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n          Result  (cost=4.08..4.09 rows=1 width=1) (actual\n        time=426.274..426.274 rows=1 loops=1)\n            Buffers: shared read=5406\n            InitPlan 1 (returns $0)\n           ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000\n        width=0) (actual time=426.273..426.273 rows=0 loops=1)\n              Filter: (lower(text_distinct) = 'test5000001'::text)\n              Rows Removed by Filter: 1000000\n              Buffers: shared read=5406\n        2018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement\n        \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n        LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n          PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n        2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 428.077 ms         plan:\n          Query Text: explain (analyze, buffers) select * from\n        zz_spx_ifexists_noidx('Test5000001')\n          Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n        rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)\n            Buffers: shared hit=30 read=5438\n\n    Definitely not the execution time, or query plan, results I was\n    expecting.  As we can see, no workers were employed here and my\n    guess was that this was the reason or the large execution time\n    difference between these 2 tests?  199 milliseconds versus 428\n    milliseconds, which is a big difference.  Why are workers not being\n    employed here like they were when I tested the query found inside of\n    the IF() check in a standalone manner?  But then I ran another test\n    and the results made even less sense to me.\n\n\n\nThe plan difference is due to not realizing the EXISTS essentially implies LIMIT 1. Secondly, it expects about 5000 rows matching the condition,  uniformly spread through the table. But it apparently takes much longer to find the first one, hence the increased duration.Ah.  I did not know that.  So EXISTS inherently applies a LIMIT 1, even though it doesn't show in the query plan, correct?  Is it not possible for parallel scans to be implemented while applying an implicit, or explicit, LIMIT 1? 
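A simple way to see the early-exit behaviour, and why a miss is the worst case for EXISTS, is to run the LIMIT 1 form suggested above once with a value that is absent from zz_noidx1 and once with a value that is present (a sketch, reusing the test data from this thread):

-- 'Test5000001' is not in the table, so the scan has to visit all 1,000,000 rows
EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1
 WHERE LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;

-- 'Test500000' is in the table, so the scan can stop at the first match
EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1
 WHERE LOWER(text_distinct) = LOWER('Test500000') LIMIT 1;

Outside of the deliberately index-free test case, an expression index would turn either form into an index probe:

CREATE INDEX ON zz_noidx1 (LOWER(text_distinct));
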
\n\nHow did you generate the data?I used generate_series() to create 1 million records in sequence at the same time that I created the table using the following script...CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS int_distinct, 'Test'::text || generate_series(0, 999999)::text AS text_distinct; \n\n\n    When I ran the above query the first 5 times after starting my\n    Postgres service, I got the same results each time (around 428\n    milliseconds), but when running the query 6 or more times, the\n    execution time jumps up to almost double that.  Here are the\n    \"auto_explain\" results running this query a 6th time...\n\n\n\nThis is likely due to generating a generic plan after the fifth execution. There seems to be only small difference in costs, though.\n\n\n        --\"auto_explain\" results after running the same query 6 or more\n        times.\n        2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 761.847 ms         plan:\n          Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n        LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n          Result  (cost=4.58..4.59 rows=1 width=1) (actual\n        time=761.843..761.843 rows=1 loops=1)\n            Buffers: shared hit=160 read=5246\n            InitPlan 1 (returns $0)\n           ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00 rows=5000\n        width=0) (actual time=761.841..761.841 rows=0 loops=1)\n              Filter: (lower(text_distinct) = lower($1))\n              Rows Removed by Filter: 1000000\n              Buffers: shared hit=160 read=5246\n        2018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement\n        \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n        LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n          PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n        2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 762.156 ms         plan:\n          Query Text: explain (analyze, buffers) select * from\n        zz_spx_ifexists_noidx('Test5000001')\n          Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n        rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)\n            Buffers: shared hit=160 read=5246\n\n    As you can see, the execution time jumps up to about 762\n    milliseonds.  I can see in the sequence scan node that the LOWER()\n    function shows up on the right side of the equal operator, whereas\n    in the first 5 runs of this test query the plan did not show this.     Why is this?\n\n\n\nIt doesn't really matter on which side it shows, it's more about a generic plan built without knowledge of the parameter value.Right.  I was more wondering why it switched over to a generic plan, as you've stated, like clockwork starting with the 6th execution run. \n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 17 Apr 2018 10:01:14 -0400", "msg_from": "Hackety Man <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "Hi Pavel,\n\nThanks for sharing that information. 
I was not aware that the parallel\nquery functionality was not yet fully implemented.\n\nThanks,\nRyan\n\nOn Tue, Apr 17, 2018 at 1:17 AM, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> 2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected]>:\n>\n>> *A description of what you are trying to achieve and what results you\n>> expect.:*\n>>\n>> My end goal was to test the execution time difference between using an\n>> IF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and\n>> when a string match was not found. My expectation was that my 2 functions\n>> would behave fairly similarly, but they most certainly did not. Here are\n>> the table, functions, test queries, and test query results I received, as\n>> well as comments as I present the pieces and talk about the results from my\n>> perspective.\n>>\n>> This is the table and data that I used for my tests. A table with 1\n>> million sequenced records. No indexing on any columns. I ran ANALYZE on\n>> this table and a VACUUM on the entire database, just to be sure.\n>>\n>> CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\n>> int_distinct, 'Test'::text || generate_series(0, 999999)::text AS\n>> text_distinct;\n>>\n>> These are the 2 functions that I ran my final tests with. My goal was to\n>> determine which function would perform the fastest and my expectation was\n>> that they would still be somewhat close in execution time comparison.\n>>\n>> --Test Function #1\n>> CREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text)\n>> RETURNS text\n>> LANGUAGE 'plpgsql'\n>> STABLE\n>> AS $$\n>>\n>> BEGIN\n>> IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct)\n>> = LOWER(p_findme)) > 0 THEN\n>> RETURN 'Found';\n>> ELSE\n>> RETURN 'Not Found';\n>> END IF;\n>> END;\n>> $$;\n>>\n>> --Test Function #2\n>> CREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text)\n>> RETURNS text\n>> LANGUAGE 'plpgsql'\n>> STABLE\n>> AS $$\n>>\n>> BEGIN\n>> IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct)\n>> = LOWER(p_findme)) THEN\n>> RETURN 'Found';\n>> ELSE\n>> RETURN 'Not Found';\n>> END IF;\n>> END;\n>> $$;\n>>\n>> The first thing I did was to run some baseline tests using the basic\n>> queries inside of the IF() checks found in each of the functions to see how\n>> the query planner handled them. I ran the following two queries.\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>>\n>> The execution time results and query plans for these two were very\n>> similar, as expected. 
In the results I can see that 2 workers were\n>> employed for each query plan.\n>>\n>> --Results for the SELECT COUNT(*) query.\n>> QUERY PLAN\n>>\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> ----------------\n>> Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\n>> time=172.105..172.105 rows=1 loops=1)\n>> Buffers: shared read=1912\n>>\n>>\n>> -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\n>> time=172.020..172.099 rows=3 loops=1)\n>> Workers Planned: 2\n>>\n>>\n>> Workers Launched: 2\n>>\n>>\n>> Buffers: shared read=1912\n>>\n>>\n>> -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n>> time=155.123..155.123 rows=1 loops=3)\n>> Buffers: shared read=5406\n>>\n>>\n>> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n>> width=0) (actual time=155.103..155.103 rows=0 loops=3)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>>\n>> Rows Removed by Filter: 333333\n>>\n>> Buffers: shared read=5406\n>>\n>> Planning time: 0.718 ms\n>>\n>>\n>> Execution time: 187.601 ms\n>>\n>> --Results for the SELECT 1 query.\n>> QUERY PLAN\n>>\n>> ------------------------------------------------------------\n>> ----------------------------------------------------------------\n>> Gather (cost=1000.00..13156.00 rows=5000 width=4) (actual\n>> time=175.682..175.682 rows=0 loops=1)\n>> Workers Planned: 2\n>>\n>>\n>> Workers Launched: 2\n>>\n>>\n>> Buffers: shared read=2021\n>>\n>>\n>> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n>> width=4) (actual time=159.769..159.769 rows=0 loops=3)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>>\n>> Rows Removed by Filter: 333333\n>>\n>> Buffers: shared read=5406\n>>\n>> Planning time: 0.874 ms\n>>\n>> Execution time: 192.045 ms\n>>\n>> After running these baseline tests and viewing the fairly similar\n>> results, right or wrong, I expected my queries that tested the functions to\n>> behave similarly. 
I started with the following query...\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000\n>> 001');\n>>\n>> and I got the following \"auto_explain\" results...\n>>\n>> 2018-04-16 14:57:22.624 EDT [17812] LOG: duration: 155.239 ms plan:\n>> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n>> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n>> time=155.230..155.230 rows=1 loops=1)\n>> Buffers: shared read=1682\n>> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n>> width=0) (actual time=155.222..155.222 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>> Rows Removed by Filter: 311170\n>> Buffers: shared read=1682\n>> 2018-04-16 14:57:22.624 EDT [9096] LOG: duration: 154.603 ms plan:\n>> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n>> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8) (actual\n>> time=154.576..154.576 rows=1 loops=1)\n>> Buffers: shared read=1682\n>> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00 rows=2083\n>> width=0) (actual time=154.570..154.570 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>> Rows Removed by Filter: 311061\n>> Buffers: shared read=1682\n>> 2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 197.260 ms plan:\n>> Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\n>> Result (cost=12661.43..12661.45 rows=1 width=1) (actual\n>> time=179.561..179.561 rows=1 loops=1)\n>> Buffers: shared read=2042\n>> InitPlan 1 (returns $1)\n>> -> Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8)\n>> (actual time=179.559..179.559 rows=1 loops=1)\n>> Buffers: shared read=2042\n>> -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\n>> time=179.529..179.556 rows=3 loops=1)\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared read=2042\n>> -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8)\n>> (actual time=162.831..162.831 rows=1 loops=3)\n>> Buffers: shared read=5406\n>> -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00\n>> rows=2083 width=0) (actual time=162.824..162.824 rows=0 loops=3)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>> Rows Removed by Filter: 333333\n>> Buffers: shared read=5406\n>> 2018-04-16 14:57:22.642 EDT [15132] CONTEXT: SQL statement \"SELECT\n>> (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n>> LOWER(p_findme)) > 0\"\n>> PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF\n>> 2018-04-16 14:57:22.642 EDT [15132] LOG: duration: 199.371 ms plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifcount_noidx('Test5000001')\n>> Function Scan on zz_spx_ifcount_noidx (cost=0.25..0.26 rows=1 width=32)\n>> (actual time=199.370..199.370 rows=1 loops=1)\n>> Buffers: shared hit=218 read=5446\n>>\n>> Here I could see that the 2 workers were getting employed again, which is\n>> great. Just what I expected. And the execution time was in the same\n>> ballpark as my first baseline test using just the query found inside of the\n>> IF() check. 199 milliseonds. 
Okay.\n>>\n>> I moved on to test the other function with the following query...\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000\n>> 001');\n>>\n>> and I got the following \"auto_explain\" results...\n>>\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 426.279 ms plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274\n>> rows=1 loops=1)\n>> Buffers: shared read=5406\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..20406.00 rows=5000 width=0)\n>> (actual time=426.273..426.273 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = 'test5000001'::text)\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared read=5406\n>> 2018-04-16 14:58:34.134 EDT [12616] CONTEXT: SQL statement \"SELECT\n>> EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n>> LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration: 428.077 ms plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1\n>> width=32) (actual time=428.076..428.076 rows=1 loops=1)\n>> Buffers: shared hit=30 read=5438\n>>\n>> Definitely not the execution time, or query plan, results I was\n>> expecting. As we can see, no workers were employed here and my guess was\n>> that this was the reason or the large execution time difference between\n>> these 2 tests? 199 milliseconds versus 428 milliseconds, which is a big\n>> difference. Why are workers not being employed here like they were when I\n>> tested the query found inside of the IF() check in a standalone manner?\n>> But then I ran another test and the results made even less sense to me.\n>>\n>> When I ran the above query the first 5 times after starting my Postgres\n>> service, I got the same results each time (around 428 milliseconds), but\n>> when running the query 6 or more times, the execution time jumps up to\n>> almost double that. 
Here are the \"auto_explain\" results running this query\n>> a 6th time...\n>>\n>> --\"auto_explain\" results after running the same query 6 or more times.\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 761.847 ms plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843\n>> rows=1 loops=1)\n>> Buffers: shared hit=160 read=5246\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..22906.00 rows=5000 width=0)\n>> (actual time=761.841..761.841 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = lower($1))\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared hit=160 read=5246\n>> 2018-04-16 15:01:51.635 EDT [12616] CONTEXT: SQL statement \"SELECT\n>> EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) =\n>> LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration: 762.156 ms plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx (cost=0.25..0.26 rows=1\n>> width=32) (actual time=762.154..762.155 rows=1 loops=1)\n>> Buffers: shared hit=160 read=5246\n>>\n>> As you can see, the execution time jumps up to about 762 milliseonds. I\n>> can see in the sequence scan node that the LOWER() function shows up on the\n>> right side of the equal operator, whereas in the first 5 runs of this test\n>> query the plan did not show this. Why is this?\n>>\n>> I tried increasing the \"work_mem\" setting to 1GB to see if this made any\n>> difference, but the results were the same.\n>>\n>> So those were the tests that I performed and the results I received,\n>> which left me with many questions. If anyone is able to help me understand\n>> this behavior, I'd greatly appreciate it. This is my first post to the\n>> email list, so I hope I did a good enough job providing all the information\n>> needed.\n>>\n>> Thanks!\n>> Ryan\n>>\n>> *PostgreSQL version number you are running:*\n>>\n>> PostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bit\n>>\n>> *How you installed PostgreSQL:*\n>>\n>> Using the Enterprise DB installer.\n>>\n>> I have also installed Enterprise DB's Postgres Enterprise Manager (PEM)\n>> 7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software. 
The\n>> PEM Agent service that gets installed is currently turned off.\n>>\n>> *Changes made to the settings in the postgresql.conf file: see Server\n>> Configuration for a quick way to list them all.*\n>>\n>> name |current_setting\n>> |source\n>> -----------------------------------|------------------------\n>> ---------------|---------------------\n>> application_name |DBeaver 5.0.3 -\n>> Main |session\n>> auto_explain.log_analyze |on\n>> |configuration file\n>> auto_explain.log_buffers |on\n>> |configuration file\n>> auto_explain.log_min_duration |0\n>> |configuration file\n>> auto_explain.log_nested_statements |on\n>> |configuration file\n>> auto_explain.log_triggers |on\n>> |configuration file\n>> client_encoding |UTF8\n>> |client\n>> DateStyle |ISO, MDY\n>> |client\n>> default_text_search_config |pg_catalog.english\n>> |configuration file\n>> dynamic_shared_memory_type |windows\n>> |configuration file\n>> extra_float_digits |3\n>> |session\n>> lc_messages |English_United\n>> States.1252 |configuration file\n>> lc_monetary |English_United\n>> States.1252 |configuration file\n>> lc_numeric |English_United\n>> States.1252 |configuration file\n>> lc_time |English_United\n>> States.1252 |configuration file\n>> listen_addresses |*\n>> |configuration file\n>> log_destination |stderr\n>> |configuration file\n>> log_timezone |US/Eastern\n>> |configuration file\n>> logging_collector |on\n>> |configuration file\n>> max_connections |100\n>> |configuration file\n>> max_stack_depth |2MB\n>> |environment variable\n>> port |5432\n>> |configuration file\n>> shared_buffers |128MB\n>> |configuration file\n>> shared_preload_libraries |$libdir/sql-profiler.dll,\n>> auto_explain |configuration file\n>> ssl |on\n>> |configuration file\n>> ssl_ca_file |root.crt\n>> |configuration file\n>> ssl_cert_file |server.crt\n>> |configuration file\n>> ssl_crl_file |root.crl\n>> |configuration file\n>> ssl_key_file |server.key\n>> |configuration file\n>> TimeZone |America/New_York\n>> |client\n>>\n>> *Operating system and version:*\n>>\n>> Windows 10 Pro 64-bit, Version 1709 (Build 16299.309)\n>>\n>> *Hardware:*\n>>\n>> Processor - Intel Core i7-7820HQ @ 2.90GHz\n>> RAM - 16GB\n>> RAID? - No\n>> Hard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2\n>>\n>> *What program you're using to connect to PostgreSQL:*\n>>\n>> DBeaver Community Edition v5.0.3\n>>\n>> *Is there anything relevant or unusual in the PostgreSQL server logs?:*\n>>\n>> Not that I noticed.\n>>\n>> *For questions about any kind of error:*\n>>\n>> N/A\n>>\n>> *What you were doing when the error happened / how to cause the error:*\n>>\n>> N/A\n>>\n>> *The EXACT TEXT of the error message you're getting, if there is one:\n>> (Copy and paste the message to the email, do not send a screenshot)*\n>>\n>> N/A\n>>\n>>\n> A support of parallel query execution is not complete - it doesn't work\n> in PostgreSQL 11 too. So although EXISTS variant can be faster (but can be\n> - the worst case of EXISTS is same like COUNT), then due disabled parallel\n> execution the COUNT(*) is faster now. It is unfortunate, because I believe\n> so this issue will be fixed in few years.\n>\n> Regards\n>\n> Pavel\n>\n\nHi Pavel,Thanks for sharing that information.  
I was not aware that the parallel query functionality was not yet fully implemented.Thanks,RyanOn Tue, Apr 17, 2018 at 1:17 AM, Pavel Stehule <[email protected]> wrote:Hi2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected]>:A description of what you are trying to achieve and what results you expect.:My end goal was to test the execution time difference between using an IF(SELECT COUNT(*)...) and an IF EXISTS() when no indexes were used and when a string match was not found.  My expectation was that my 2 functions would behave fairly similarly, but they most certainly did not.  Here are the table, functions, test queries, and test query results I received, as well as comments as I present the pieces and talk about the results from my perspective.This is the table and data that I used for my tests.  A table with 1 million sequenced records.  No indexing on any columns.  I ran ANALYZE on this table and a VACUUM on the entire database, just to be sure.CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS int_distinct, 'Test'::text || generate_series(0, 999999)::text AS text_distinct;These are the 2 functions that I ran my final tests with.  My goal was to determine which function would perform the fastest and my expectation was that they would still be somewhat close in execution time comparison.--Test Function #1CREATE OR REPLACE FUNCTION zz_spx_ifcount_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;--Test Function #2CREATE OR REPLACE FUNCTION zz_spx_ifexists_noidx(p_findme text) RETURNS text LANGUAGE 'plpgsql' STABLEAS $$ BEGIN IF EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) THEN  RETURN 'Found'; ELSE  RETURN 'Not Found'; END IF;END;$$;The first thing I did was to run some baseline tests using the basic queries inside of the IF() checks found in each of the functions to see how the query planner handled them.  I ran the following two queries.EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');The execution time results and query plans for these two were very similar, as expected.  
In the results I can see that 2 workers were employed for each query plan.--Results for the SELECT COUNT(*) query.QUERY PLAN                                                                                                                              ----------------------------------------------------------------------------------------------------------------------------------------Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=172.105..172.105 rows=1 loops=1)                                Buffers: shared read=1912                                                                                                               ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=172.020..172.099 rows=3 loops=1)                                      Workers Planned: 2                                                                                                                Workers Launched: 2                                                                                                               Buffers: shared read=1912                                                                                                         ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.123..155.123 rows=1 loops=3)                        Buffers: shared read=5406                                                                                                      ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.103..155.103 rows=0 loops=3)      Filter: (lower(text_distinct) = 'test5000001'::text)                                                                     Rows Removed by Filter: 333333                                                                                           Buffers: shared read=5406                                                                                           Planning time: 0.718 ms                                                                                                                 Execution time: 187.601 ms--Results for the SELECT 1 query.QUERY PLAN                                                                                                                  ----------------------------------------------------------------------------------------------------------------------------Gather  (cost=1000.00..13156.00 rows=5000 width=4) (actual time=175.682..175.682 rows=0 loops=1)                              Workers Planned: 2                                                                                                          Workers Launched: 2                                                                                                         Buffers: shared read=2021                                                                                                   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=4) (actual time=159.769..159.769 rows=0 loops=3)   Filter: (lower(text_distinct) = 'test5000001'::text)                                                                  Rows Removed by Filter: 333333                                                                                        Buffers: shared read=5406                                                                                           Planning time: 0.874 ms                                                                                                     Execution time: 192.045 ms  After running these baseline tests and viewing the 
fairly similar results, right or wrong, I expected my queries that tested the functions to behave similarly.  I started with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:57:22.624 EDT [17812] LOG:  duration: 155.239 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=155.230..155.230 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=155.222..155.222 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311170   Buffers: shared read=16822018-04-16 14:57:22.624 EDT [9096] LOG:  duration: 154.603 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=154.576..154.576 rows=1 loops=1)   Buffers: shared read=1682   ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=154.570..154.570 rows=0 loops=1)   Filter: (lower(text_distinct) = 'test5000001'::text)   Rows Removed by Filter: 311061   Buffers: shared read=16822018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 197.260 ms  plan: Query Text: SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0 Result  (cost=12661.43..12661.45 rows=1 width=1) (actual time=179.561..179.561 rows=1 loops=1)   Buffers: shared read=2042   InitPlan 1 (returns $1)  ->  Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=179.559..179.559 rows=1 loops=1)     Buffers: shared read=2042     ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=179.529..179.556 rows=3 loops=1)     Workers Planned: 2     Workers Launched: 2     Buffers: shared read=2042     ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=162.831..162.831 rows=1 loops=3)        Buffers: shared read=5406        ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=162.824..162.824 rows=0 loops=3)        Filter: (lower(text_distinct) = 'test5000001'::text)        Rows Removed by Filter: 333333        Buffers: shared read=54062018-04-16 14:57:22.642 EDT [15132] CONTEXT:  SQL statement \"SELECT (SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) > 0\" PL/pgSQL function zz_spx_ifcount_noidx(text) line 4 at IF2018-04-16 14:57:22.642 EDT [15132] LOG:  duration: 199.371 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifcount_noidx('Test5000001') Function Scan on zz_spx_ifcount_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=199.370..199.370 rows=1 loops=1)   Buffers: shared hit=218 read=5446Here I could see that the 2 workers were getting employed again, which is great.  Just what I expected.  And the execution time was in the same ballpark as my first baseline test using just the query found inside of the IF() check.  199 milliseonds.  
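(As a side note, since both functions filter on LOWER(text_distinct), the sequential scans in these plans could be avoided altogether with an expression index -- roughly something like the sketch below, where the index name is purely illustrative and not necessarily what I used in my separate indexed-table tests:

CREATE INDEX zz_noidx1_lower_text_idx ON zz_noidx1 (LOWER(text_distinct));

For these particular tests, though, the whole point is the unindexed case.)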
Okay.I moved on to test the other function with the following query...EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM zz_spx_ifcount_noidx('Test5000001');and I got the following \"auto_explain\" results...2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 426.279 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.08..4.09 rows=1 width=1) (actual time=426.274..426.274 rows=1 loops=1)   Buffers: shared read=5406   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000 width=0) (actual time=426.273..426.273 rows=0 loops=1)     Filter: (lower(text_distinct) = 'test5000001'::text)     Rows Removed by Filter: 1000000     Buffers: shared read=54062018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 14:58:34.134 EDT [12616] LOG:  duration: 428.077 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=428.076..428.076 rows=1 loops=1)   Buffers: shared hit=30 read=5438Definitely not the execution time, or query plan, results I was expecting.  As we can see, no workers were employed here and my guess was that this was the reason or the large execution time difference between these 2 tests?  199 milliseconds versus 428 milliseconds, which is a big difference.  Why are workers not being employed here like they were when I tested the query found inside of the IF() check in a standalone manner?  But then I ran another test and the results made even less sense to me.When I ran the above query the first 5 times after starting my Postgres service, I got the same results each time (around 428 milliseconds), but when running the query 6 or more times, the execution time jumps up to almost double that.  Here are the \"auto_explain\" results running this query a 6th time...--\"auto_explain\" results after running the same query 6 or more times.2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 761.847 ms  plan: Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme)) Result  (cost=4.58..4.59 rows=1 width=1) (actual time=761.843..761.843 rows=1 loops=1)   Buffers: shared hit=160 read=5246   InitPlan 1 (returns $0)  ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00 rows=5000 width=0) (actual time=761.841..761.841 rows=0 loops=1)     Filter: (lower(text_distinct) = lower($1))     Rows Removed by Filter: 1000000     Buffers: shared hit=160 read=52462018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\" PL/pgSQL function zz_spx_ifexists_noidx(text) line 4 at IF2018-04-16 15:01:51.635 EDT [12616] LOG:  duration: 762.156 ms  plan: Query Text: explain (analyze, buffers) select * from zz_spx_ifexists_noidx('Test5000001') Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26 rows=1 width=32) (actual time=762.154..762.155 rows=1 loops=1)   Buffers: shared hit=160 read=5246As you can see, the execution time jumps up to about 762 milliseonds.  I can see in the sequence scan node that the LOWER() function shows up on the right side of the equal operator, whereas in the first 5 runs of this test query the plan did not show this.  
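(To spell out the difference I'm referring to: in the first five runs the plan showed Filter: (lower(text_distinct) = 'test5000001'::text), i.e. the actual string literal, while from the sixth run onward it shows Filter: (lower(text_distinct) = lower($1)), i.e. a parameter placeholder in place of the constant.)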
Why is this?I tried increasing the \"work_mem\" setting to 1GB to see if this made any difference, but the results were the same.So those were the tests that I performed and the results I received, which left me with many questions.  If anyone is able to help me understand this behavior, I'd greatly appreciate it.  This is my first post to the email list, so I hope I did a good enough job providing all the information needed.Thanks!RyanPostgreSQL version number you are running:PostgreSQL 10.2, compiled by Visual C++ build 1800, 64-bitHow you installed PostgreSQL:Using the Enterprise DB installer.I have also installed Enterprise DB's Postgres Enterprise Manager (PEM) 7.2.0 software and Enterprise DB's SQL Profiler PG10-7.2.0 software.  The PEM Agent service that gets installed is currently turned off.Changes made to the settings in the postgresql.conf file:  see Server Configuration for a quick way to list them all.name                               |current_setting                        |source               -----------------------------------|---------------------------------------|---------------------application_name                   |DBeaver 5.0.3 - Main                   |session              auto_explain.log_analyze           |on                                     |configuration file   auto_explain.log_buffers           |on                                     |configuration file   auto_explain.log_min_duration      |0                                      |configuration file   auto_explain.log_nested_statements |on                                     |configuration file   auto_explain.log_triggers          |on                                     |configuration file   client_encoding                    |UTF8                                   |client               DateStyle                          |ISO, MDY                               |client               default_text_search_config         |pg_catalog.english                     |configuration file   dynamic_shared_memory_type         |windows                                |configuration file   extra_float_digits                 |3                                      |session              lc_messages                        |English_United States.1252             |configuration file   lc_monetary                        |English_United States.1252             |configuration file   lc_numeric                         |English_United States.1252             |configuration file   lc_time                            |English_United States.1252             |configuration file   listen_addresses                   |*                                      |configuration file   log_destination                    |stderr                                 |configuration file   log_timezone                       |US/Eastern                             |configuration file   logging_collector                  |on                                     |configuration file   max_connections                    |100                                    |configuration file   max_stack_depth                    |2MB                                    |environment variable port                               |5432                                   |configuration file   shared_buffers                     |128MB                                  |configuration file   shared_preload_libraries           |$libdir/sql-profiler.dll, auto_explain |configuration file   ssl                                |on                                     |configuration file   
ssl_ca_file                        |root.crt                               |configuration file   ssl_cert_file                      |server.crt                             |configuration file   ssl_crl_file                       |root.crl                               |configuration file   ssl_key_file                       |server.key                             |configuration file   TimeZone                           |America/New_York                       |client               Operating system and version:Windows 10 Pro 64-bit, Version 1709 (Build 16299.309)Hardware:Processor - Intel Core i7-7820HQ @ 2.90GHzRAM - 16GBRAID? - NoHard Drive - Samsung 512 GB SSD M.2 PCIe NVMe Opal2What program you're using to connect to PostgreSQL:DBeaver Community Edition v5.0.3Is there anything relevant or unusual in the PostgreSQL server logs?:Not that I noticed.For questions about any kind of error:N/AWhat you were doing when the error happened / how to cause the error:N/AThe EXACT TEXT of the error message you're getting, if there is one: (Copy and paste the message to the email, do not send a screenshot)N/AA support of parallel query execution is not complete -  it doesn't work in PostgreSQL 11 too. So although EXISTS variant can be faster (but can be - the worst case of EXISTS is same like COUNT), then due disabled parallel execution the COUNT(*) is faster now. It is unfortunate, because I believe so this issue will be fixed in few years. RegardsPavel", "msg_date": "Tue, 17 Apr 2018 10:05:45 -0400", "msg_from": "Hackety Man <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "\n\nOn 04/17/2018 04:01 PM, Hackety Man wrote:\n> \n> \n> On Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> \n> \n> On 04/16/2018 10:42 PM, Hackety Man wrote:\n> \n> ...\n>     The first thing I did was to run some baseline tests using\n> the basic\n>     queries inside of the IF() checks found in each of the\n> functions to\n>     see how the query planner handled them.  I ran the\n> following two\n>     queries.\n> \n>         EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM\n> zz_noidx1 WHERE\n>         LOWER(text_distinct) = LOWER('Test5000001');\n>         EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>         LOWER(text_distinct) = LOWER('Test5000001');\n> \n> \n> Those are not the interesting plans, though. The EXISTS only cares\n> about the first row, so you should be looking at\n> \n>     EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>     LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;\n> \n> \n> \n> Okay.  
I tested this query and it did return an execution time on par \n> with my tests of the \"zz_spx_ifexists_noidx\" function.\n> *\n> *\n> \n> \n> \n>     I moved on to test the other function with the following\n> query...\n> \n>         EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n>         zz_spx_ifcount_noidx('Test5000001');\n> \n>     and I got the following \"auto_explain\" results...\n> \n>         2018-04-16 14:58:34.134 EDT [12616] LOG:  duration:\n> 426.279 ms         plan:\n>           Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>         LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>           Result  (cost=4.08..4.09 rows=1 width=1) (actual\n>         time=426.274..426.274 rows=1 loops=1)\n>             Buffers: shared read=5406\n>             InitPlan 1 (returns $0)\n>            ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00\n> rows=5000\n>         width=0) (actual time=426.273..426.273 rows=0 loops=1)\n>               Filter: (lower(text_distinct) = 'test5000001'::text)\n>               Rows Removed by Filter: 1000000\n>               Buffers: shared read=5406\n>         2018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement\n>         \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>         LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>           PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n> at IF\n>         2018-04-16 14:58:34.134 EDT [12616] LOG:  duration:\n> 428.077 ms         plan:\n>           Query Text: explain (analyze, buffers) select * from\n>         zz_spx_ifexists_noidx('Test5000001')\n>           Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n>         rows=1 width=32) (actual time=428.076..428.076 rows=1\n> loops=1)\n>             Buffers: shared hit=30 read=5438\n> \n>     Definitely not the execution time, or query plan, results I was\n>     expecting.  As we can see, no workers were employed here and my\n>     guess was that this was the reason or the large execution time\n>     difference between these 2 tests?  199 milliseconds versus 428\n>     milliseconds, which is a big difference.  Why are workers\n> not being\n>     employed here like they were when I tested the query found\n> inside of\n>     the IF() check in a standalone manner?  But then I ran\n> another test\n>     and the results made even less sense to me.\n> \n> \n> The plan difference is due to not realizing the EXISTS essentially\n> implies LIMIT 1. Secondly, it expects about 5000 rows matching the\n> condition,  uniformly spread through the table. But it apparently\n> takes much longer to find the first one, hence the increased duration.\n> \n> \n> \n> Ah.  I did not know that.  So EXISTS inherently applies a LIMIT 1, even \n> though it doesn't show in the query plan, correct? Is it not possible \n> for parallel scans to be implemented while applying an implicit, or \n> explicit, LIMIT 1?\n> **//___^\n> \n\nIt doesn't add a limit node to the plan, but it behaves similarly to \nthat. The database only needs to fetch the first row to answer the \nEXISTS predicate.\n\nI don't think this is inherently incompatible with parallel plans, but \nthe planner simply thinks it's going to bee very cheap - cheaper than \nsetting up parallel workers etc. 
So it does not do that.\n\n> \n> How did you generate the data?\n> \n> \n> \n> I used generate_series() to create 1 million records in sequence at the \n> same time that I created the table using the following script...\n> \n> CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\n> int_distinct, 'Test'::text || generate_series(0, 999999)::text AS\n> text_distinct;\n> \n\nWhich means that there are actually no matching rows for 'Test5000001'. \nSo the database will scan the whole table anyway, in order to answer the \nEXISTS condition. The estimate of 5000 matching rows is a default value \n(0.5% out of 1M rows), because the value is entirely out of the data \nrange covered by the histogram.\n\nThe easiest solution probably is adding an index on that column, which \nwill make answering the EXISTS much faster (at least in this case).\n\n> \n> \n>     When I ran the above query the first 5 times after starting my\n>     Postgres service, I got the same results each time (around 428\n>     milliseconds), but when running the query 6 or more times, the\n>     execution time jumps up to almost double that.  Here are the\n>     \"auto_explain\" results running this query a 6th time...\n> \n> \n> This is likely due to generating a generic plan after the fifth\n> execution. There seems to be only small difference in costs, though.\n> \n> \n>         --\"auto_explain\" results after running the same query 6\n> or more\n>         times.\n>         2018-04-16 15:01:51.635 EDT [12616] LOG:  duration:\n> 761.847 ms         plan:\n>           Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>         LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>           Result  (cost=4.58..4.59 rows=1 width=1) (actual\n>         time=761.843..761.843 rows=1 loops=1)\n>             Buffers: shared hit=160 read=5246\n>             InitPlan 1 (returns $0)\n>            ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00\n> rows=5000\n>         width=0) (actual time=761.841..761.841 rows=0 loops=1)\n>               Filter: (lower(text_distinct) = lower($1))\n>               Rows Removed by Filter: 1000000\n>               Buffers: shared hit=160 read=5246\n>         2018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement\n>         \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>         LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>           PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n> at IF\n>         2018-04-16 15:01:51.635 EDT [12616] LOG:  duration:\n> 762.156 ms         plan:\n>           Query Text: explain (analyze, buffers) select * from\n>         zz_spx_ifexists_noidx('Test5000001')\n>           Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n>         rows=1 width=32) (actual time=762.154..762.155 rows=1\n> loops=1)\n>             Buffers: shared hit=160 read=5246\n> \n>     As you can see, the execution time jumps up to about 762\n>     milliseonds.  I can see in the sequence scan node that the\n> LOWER()\n>     function shows up on the right side of the equal operator,\n> whereas\n>     in the first 5 runs of this test query the plan did not\n> show this.     Why is this?\n> \n> \n> It doesn't really matter on which side it shows, it's more about a\n> generic plan built without knowledge of the parameter value.\n> \n> \n> \n> Right.  I was more wondering why it switched over to a generic plan, as \n> you've stated, like clockwork starting with the 6th execution run.\n> \n\nThat's a hard-coded value. 
The first 5 executions are re-planned using \nthe actual parameter values, and then we try generating a generic plan \nand see if it's cheaper than the non-generic one. You can disable that, \nthough.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 17 Apr 2018 16:23:15 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "On Tue, Apr 17, 2018 at 10:23 AM, Tomas Vondra <[email protected]\n> wrote:\n\n>\n>\n> On 04/17/2018 04:01 PM, Hackety Man wrote:\n>\n>>\n>>\n>> On Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra <\n>> [email protected] <mailto:[email protected]>>\n>> wrote:\n>>\n>>\n>>\n>> On 04/16/2018 10:42 PM, Hackety Man wrote:\n>>\n>> ...\n>> The first thing I did was to run some baseline tests using\n>> the basic\n>> queries inside of the IF() checks found in each of the\n>> functions to\n>> see how the query planner handled them. I ran the\n>> following two\n>> queries.\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM\n>> zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001');\n>>\n>>\n>> Those are not the interesting plans, though. The EXISTS only cares\n>> about the first row, so you should be looking at\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;\n>>\n>>\n>>\n>> Okay. I tested this query and it did return an execution time on par\n>> with my tests of the \"zz_spx_ifexists_noidx\" function.\n>> *\n>> *\n>>\n>>\n>>\n>>\n>> I moved on to test the other function with the following\n>> query...\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n>> zz_spx_ifcount_noidx('Test5000001');\n>>\n>> and I got the following \"auto_explain\" results...\n>>\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration:\n>> 426.279 ms plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1\n>> WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.08..4.09 rows=1 width=1) (actual\n>> time=426.274..426.274 rows=1 loops=1)\n>> Buffers: shared read=5406\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..20406.00\n>> rows=5000\n>> width=0) (actual time=426.273..426.273 rows=0 loops=1)\n>> Filter: (lower(text_distinct) =\n>> 'test5000001'::text)\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared read=5406\n>> 2018-04-16 14:58:34.134 EDT [12616] CONTEXT: SQL\n>> statement\n>> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n>> at IF\n>> 2018-04-16 14:58:34.134 EDT [12616] LOG: duration:\n>> 428.077 ms plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx\n>> (cost=0.25..0.26\n>> rows=1 width=32) (actual time=428.076..428.076 rows=1\n>> loops=1)\n>> Buffers: shared hit=30 read=5438\n>>\n>> Definitely not the execution time, or query plan, results I\n>> was\n>> expecting. As we can see, no workers were employed here and\n>> my\n>> guess was that this was the reason or the large execution\n>> time\n>> difference between these 2 tests? 
199 milliseconds versus\n>> 428\n>> milliseconds, which is a big difference. Why are workers\n>> not being\n>> employed here like they were when I tested the query found\n>> inside of\n>> the IF() check in a standalone manner? But then I ran\n>> another test\n>> and the results made even less sense to me.\n>>\n>>\n>> The plan difference is due to not realizing the EXISTS essentially\n>> implies LIMIT 1. Secondly, it expects about 5000 rows matching the\n>> condition, uniformly spread through the table. But it apparently\n>> takes much longer to find the first one, hence the increased duration.\n>>\n>>\n>>\n>> Ah. I did not know that. So EXISTS inherently applies a LIMIT 1, even\n>> though it doesn't show in the query plan, correct? Is it not possible for\n>> parallel scans to be implemented while applying an implicit, or explicit,\n>> LIMIT 1?\n>> **//___^\n>>\n>>\n> It doesn't add a limit node to the plan, but it behaves similarly to that.\n> The database only needs to fetch the first row to answer the EXISTS\n> predicate.\n>\n> I don't think this is inherently incompatible with parallel plans, but the\n> planner simply thinks it's going to bee very cheap - cheaper than setting\n> up parallel workers etc. So it does not do that.\n\n\n\nUnderstood. Any chance of the planner possibly being enhanced in the\nfuture to come to a better conclusion as to whether, or not, a parallel\nscan implementation would be a better choice during EXISTS condition\nchecks? :-)\n\n\n\n>\n>\n>\n>> How did you generate the data?\n>>\n>>\n>>\n>> I used generate_series() to create 1 million records in sequence at the\n>> same time that I created the table using the following script...\n>>\n>> CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\n>> int_distinct, 'Test'::text || generate_series(0, 999999)::text AS\n>> text_distinct;\n>>\n>>\n> Which means that there are actually no matching rows for 'Test5000001'. So\n> the database will scan the whole table anyway, in order to answer the\n> EXISTS condition. The estimate of 5000 matching rows is a default value\n> (0.5% out of 1M rows), because the value is entirely out of the data range\n> covered by the histogram.\n>\n> The easiest solution probably is adding an index on that column, which\n> will make answering the EXISTS much faster (at least in this case).\n\n\n\nYes. I did test that scenario, as well. Adding an index does put the\nEXISTS condition check on par with the IF(SELECT COUNT(*) FROM...)\ncondition check. The one scenario where the EXISTS condition check\ndominated over the IF(SELECT COUNT(*) FROM...) condition check was when no\nindex was used and a matching string *was* found, as opposed to this test\nparticular test where we're looking for a string that will *not* be found.\nI just wanted to test all possible scenarios.\n\n\n\n>\n>\n>\n>>\n>> When I ran the above query the first 5 times after starting\n>> my\n>> Postgres service, I got the same results each time (around\n>> 428\n>> milliseconds), but when running the query 6 or more times,\n>> the\n>> execution time jumps up to almost double that. Here are the\n>> \"auto_explain\" results running this query a 6th time...\n>>\n>>\n>> This is likely due to generating a generic plan after the fifth\n>> execution. 
There seems to be only small difference in costs, though.\n>>\n>>\n>> --\"auto_explain\" results after running the same query 6\n>> or more\n>> times.\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration:\n>> 761.847 ms plan:\n>> Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1\n>> WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n>> Result (cost=4.58..4.59 rows=1 width=1) (actual\n>> time=761.843..761.843 rows=1 loops=1)\n>> Buffers: shared hit=160 read=5246\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on zz_noidx1 (cost=0.00..22906.00\n>> rows=5000\n>> width=0) (actual time=761.841..761.841 rows=0 loops=1)\n>> Filter: (lower(text_distinct) = lower($1))\n>> Rows Removed by Filter: 1000000\n>> Buffers: shared hit=160 read=5246\n>> 2018-04-16 15:01:51.635 EDT [12616] CONTEXT: SQL\n>> statement\n>> \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n>> LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n>> PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n>> at IF\n>> 2018-04-16 15:01:51.635 EDT [12616] LOG: duration:\n>> 762.156 ms plan:\n>> Query Text: explain (analyze, buffers) select * from\n>> zz_spx_ifexists_noidx('Test5000001')\n>> Function Scan on zz_spx_ifexists_noidx\n>> (cost=0.25..0.26\n>> rows=1 width=32) (actual time=762.154..762.155 rows=1\n>> loops=1)\n>> Buffers: shared hit=160 read=5246\n>>\n>> As you can see, the execution time jumps up to about 762\n>> milliseonds. I can see in the sequence scan node that the\n>> LOWER()\n>> function shows up on the right side of the equal operator,\n>> whereas\n>> in the first 5 runs of this test query the plan did not\n>> show this. Why is this?\n>>\n>>\n>> It doesn't really matter on which side it shows, it's more about a\n>> generic plan built without knowledge of the parameter value.\n>>\n>>\n>>\n>> Right. I was more wondering why it switched over to a generic plan, as\n>> you've stated, like clockwork starting with the 6th execution run.\n>>\n>>\n> That's a hard-coded value. The first 5 executions are re-planned using the\n> actual parameter values, and then we try generating a generic plan and see\n> if it's cheaper than the non-generic one. You can disable that, though.\n\n\n\nSo on that note, in the planner's eyes, starting with the 6th execution, it\nlooks like the planner still thinks that the generic plan will perform\nbetter than the non-generic one, which is why it keeps using the generic\nplan from that point forward?\n\nSimilar to the parallel scans, any chance of the planner possibly being\nenhanced in the future to come to a better conclusion as to whether, or\nnot, the generic plan will perform better than the non-generic plan? :-)\n\n\n\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\nThanks for all the help! I really appreciate it!\n\nRyan\n\nOn Tue, Apr 17, 2018 at 10:23 AM, Tomas Vondra <[email protected]> wrote:\n\nOn 04/17/2018 04:01 PM, Hackety Man wrote:\n\n\n\nOn Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra <[email protected] <mailto:[email protected]>> wrote:\n\n\n\n    On 04/16/2018 10:42 PM, Hackety Man wrote:\n\n        ...\n             The first thing I did was to run some baseline tests using\n        the basic\n             queries inside of the IF() checks found in each of the\n        functions to\n             see how the query planner handled them.  
I ran the\n        following two\n             queries.\n\n                 EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM\n        zz_noidx1 WHERE\n                 LOWER(text_distinct) = LOWER('Test5000001');\n                 EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n                 LOWER(text_distinct) = LOWER('Test5000001');\n\n\n    Those are not the interesting plans, though. The EXISTS only cares\n    about the first row, so you should be looking at\n\n         EXPLAIN (ANALYZE, BUFFERS) SELECT 1 FROM zz_noidx1 WHERE\n         LOWER(text_distinct) = LOWER('Test5000001') LIMIT 1;\n\n\n\nOkay.  I tested this query and it did return an execution time on par with my tests of the \"zz_spx_ifexists_noidx\" function.\n*\n*\n\n\n\n             I moved on to test the other function with the following\n        query...\n\n                 EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n                 zz_spx_ifcount_noidx('Test5000001');\n\n             and I got the following \"auto_explain\" results...\n\n                 2018-04-16 14:58:34.134 EDT [12616] LOG:  duration:\n        426.279 ms         plan:\n                   Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n                 LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n                   Result  (cost=4.08..4.09 rows=1 width=1) (actual\n                 time=426.274..426.274 rows=1 loops=1)\n                     Buffers: shared read=5406\n                     InitPlan 1 (returns $0)\n                    ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00\n        rows=5000\n                 width=0) (actual time=426.273..426.273 rows=0 loops=1)\n                       Filter: (lower(text_distinct) = 'test5000001'::text)\n                       Rows Removed by Filter: 1000000\n                       Buffers: shared read=5406\n                 2018-04-16 14:58:34.134 EDT [12616] CONTEXT:  SQL statement\n                 \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n                 LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n                   PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n        at IF\n                 2018-04-16 14:58:34.134 EDT [12616] LOG:  duration:\n        428.077 ms         plan:\n                   Query Text: explain (analyze, buffers) select * from\n                 zz_spx_ifexists_noidx('Test5000001')\n                   Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n                 rows=1 width=32) (actual time=428.076..428.076 rows=1\n        loops=1)\n                     Buffers: shared hit=30 read=5438\n\n             Definitely not the execution time, or query plan, results I was\n             expecting.  As we can see, no workers were employed here and my\n             guess was that this was the reason or the large execution time\n             difference between these 2 tests?  199 milliseconds versus 428\n             milliseconds, which is a big difference.  Why are workers\n        not being\n             employed here like they were when I tested the query found\n        inside of\n             the IF() check in a standalone manner?  But then I ran\n        another test\n             and the results made even less sense to me.\n\n\n    The plan difference is due to not realizing the EXISTS essentially\n    implies LIMIT 1. Secondly, it expects about 5000 rows matching the\n    condition,  uniformly spread through the table. But it apparently\n    takes much longer to find the first one, hence the increased duration.\n\n\n\nAh.  
I did not know that.  So EXISTS inherently applies a LIMIT 1, even though it doesn't show in the query plan, correct? Is it not possible for parallel scans to be implemented while applying an implicit, or explicit, LIMIT 1?\n**//___^\n\n\n\nIt doesn't add a limit node to the plan, but it behaves similarly to that. The database only needs to fetch the first row to answer the EXISTS predicate.\n\nI don't think this is inherently incompatible with parallel plans, but the planner simply thinks it's going to bee very cheap - cheaper than setting up parallel workers etc. So it does not do that.Understood.  Any chance of the planner possibly being enhanced in the future to come to a better conclusion as to whether, or not, a parallel scan implementation would be a better choice during EXISTS condition checks?  :-) \n\n\n\n    How did you generate the data?\n\n\n\nI used generate_series() to create 1 million records in sequence at the same time that I created the table using the following script...\n\n    CREATE TABLE zz_noidx1 AS SELECT generate_series(0, 999999) AS\n    int_distinct, 'Test'::text || generate_series(0, 999999)::text AS\n    text_distinct;\n\n\n\nWhich means that there are actually no matching rows for 'Test5000001'. So the database will scan the whole table anyway, in order to answer the EXISTS condition. The estimate of 5000 matching rows is a default value (0.5% out of 1M rows), because the value is entirely out of the data range covered by the histogram.\n\nThe easiest solution probably is adding an index on that column, which will make answering the EXISTS much faster (at least in this case).Yes.  I did test that scenario, as well.  Adding an index does put the EXISTS condition check on par with the IF(SELECT COUNT(*) FROM...) condition check.  The one scenario where the EXISTS condition check dominated over the IF(SELECT COUNT(*) FROM...) condition check was when no index was used and a matching string *was* found, as opposed to this test particular test where we're looking for a string that will *not* be found.  I just wanted to test all possible scenarios. \n\n\n\n\n             When I ran the above query the first 5 times after starting my\n             Postgres service, I got the same results each time (around 428\n             milliseconds), but when running the query 6 or more times, the\n             execution time jumps up to almost double that.  Here are the\n             \"auto_explain\" results running this query a 6th time...\n\n\n    This is likely due to generating a generic plan after the fifth\n    execution. 
There seems to be only small difference in costs, though.\n\n\n                 --\"auto_explain\" results after running the same query 6\n        or more\n                 times.\n                 2018-04-16 15:01:51.635 EDT [12616] LOG:  duration:\n        761.847 ms         plan:\n                   Query Text: SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n                 LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\n                   Result  (cost=4.58..4.59 rows=1 width=1) (actual\n                 time=761.843..761.843 rows=1 loops=1)\n                     Buffers: shared hit=160 read=5246\n                     InitPlan 1 (returns $0)\n                    ->  Seq Scan on zz_noidx1  (cost=0.00..22906.00\n        rows=5000\n                 width=0) (actual time=761.841..761.841 rows=0 loops=1)\n                       Filter: (lower(text_distinct) = lower($1))\n                       Rows Removed by Filter: 1000000\n                       Buffers: shared hit=160 read=5246\n                 2018-04-16 15:01:51.635 EDT [12616] CONTEXT:  SQL statement\n                 \"SELECT EXISTS (SELECT 1 FROM zz_noidx1 WHERE\n                 LOWER(zz_noidx1.text_distinct) = LOWER(p_findme))\"\n                   PL/pgSQL function zz_spx_ifexists_noidx(text) line 4\n        at IF\n                 2018-04-16 15:01:51.635 EDT [12616] LOG:  duration:\n        762.156 ms         plan:\n                   Query Text: explain (analyze, buffers) select * from\n                 zz_spx_ifexists_noidx('Test5000001')\n                   Function Scan on zz_spx_ifexists_noidx  (cost=0.25..0.26\n                 rows=1 width=32) (actual time=762.154..762.155 rows=1\n        loops=1)\n                     Buffers: shared hit=160 read=5246\n\n             As you can see, the execution time jumps up to about 762\n             milliseonds.  I can see in the sequence scan node that the\n        LOWER()\n             function shows up on the right side of the equal operator,\n        whereas\n             in the first 5 runs of this test query the plan did not\n        show this.     Why is this?\n\n\n    It doesn't really matter on which side it shows, it's more about a\n    generic plan built without knowledge of the parameter value.\n\n\n\nRight.  I was more wondering why it switched over to a generic plan, as you've stated, like clockwork starting with the 6th execution run.\n\n\n\nThat's a hard-coded value. The first 5 executions are re-planned using the actual parameter values, and then we try generating a generic plan and see if it's cheaper than the non-generic one. You can disable that, though.So on that note, in the planner's eyes, starting with the 6th execution, it looks like the planner still thinks that the generic plan will perform better than the non-generic one, which is why it keeps using the generic plan from that point forward?Similar to the parallel scans, any chance of the planner possibly being enhanced in the future to come to a better conclusion as to whether, or not, the generic plan will perform better than the non-generic plan?  :-) \n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & ServicesThanks for all the help!  I really appreciate it!Ryan", "msg_date": "Tue, 17 Apr 2018 11:43:36 -0400", "msg_from": "Hackety Man <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) 
and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "2018-04-17 12:52 GMT+02:00 Tomas Vondra <[email protected]>:\n\n>\n>\n> On 04/17/2018 07:17 AM, Pavel Stehule wrote:\n>\n>> Hi\n>>\n>> 2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected] <mailto:\n>> [email protected]>>:\n>>\n>> ...\n>>\n> >\n>\n>> A support of parallel query execution is not complete - it doesn't work\n>> in PostgreSQL 11 too. So although EXISTS variant can be faster (but can be\n>> - the worst case of EXISTS is same like COUNT), then due disabled parallel\n>> execution the COUNT(*) is faster now. It is unfortunate, because I believe\n>> so this issue will be fixed in few years.\n>>\n>>\n> None of the issues seems to be particularly related to parallel query.\n> It's much more likely a general issue with planning EXISTS / LIMIT and\n> non-uniform data distribution.\n\n\nI was wrong EXISTS are not supported. It looks like new dimension of\nperformance issues related to parallelism. I understand so this example is\nworst case.\n\npostgres=# EXPLAIN (ANALYZE, BUFFERS) select exists(SELECT * FROM zz_noidx1\nWHERE LOWER(text_distinct) = LOWER('Test5000001'));\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------\n Result (cost=4.08..4.09 rows=1 width=1) (actual time=423.600..423.600\nrows=1 loops=1)\n Buffers: shared hit=3296 read=2110\n InitPlan 1 (returns $0)\n -> Seq Scan on zz_noidx1 (cost=0.00..20406.00 rows=5000 width=0)\n(actual time=423.595..423.595 rows=0 loops=1)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 1000000\n Buffers: shared hit=3296 read=2110\n Planning Time: 0.133 ms\n Execution Time: 423.633 ms\n\npostgres=# EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE\nLOWER(text_distinct) = LOWER('Test5000001');\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=12661.42..12661.43 rows=1 width=8) (actual\ntime=246.662..246.662 rows=1 loops=1)\n Buffers: shared hit=817 read=549\n -> Gather (cost=12661.21..12661.42 rows=2 width=8) (actual\ntime=246.642..246.656 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=817 read=549\n -> Partial Aggregate (cost=11661.21..11661.22 rows=1 width=8)\n(actual time=242.168..242.169 rows=1 loops=3)\n Buffers: shared hit=3360 read=2046\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00\nrows=2083 width=0) (actual time=242.165..242.165 rows=0 loops=3)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 333333\n Buffers: shared hit=3360 read=2046\n Planning Time: 0.222 ms\n Execution Time: 247.927 ms\n\nThe cost of EXISTS is too low to use parallelism, and value is found too\nlate.\n\nWhen I decrease startup cost to 0 of parallel exec I got similar plan,\nsimilar time\n\npostgres=# EXPLAIN (ANALYZE, BUFFERS) select exists(SELECT * FROM zz_noidx1\nWHERE LOWER(text_distinct) = LOWER('Test5000001'));\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=2.43..2.44 rows=1 width=1) (actual time=246.398..246.402\nrows=1 loops=1)\n Buffers: shared hit=885 read=489\n InitPlan 1 (returns $1)\n -> Gather (cost=0.00..12156.00 rows=5000 width=0) (actual\ntime=246.393..246.393 rows=0 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: 
shared hit=885 read=489\n -> Parallel Seq Scan on zz_noidx1 (cost=0.00..11656.00\nrows=2083 width=0) (actual time=241.067..241.067 rows=0 loops=3)\n Filter: (lower(text_distinct) = 'test5000001'::text)\n Rows Removed by Filter: 333333\n Buffers: shared hit=3552 read=1854\n Planning Time: 0.138 ms\n Execution Time: 247.623 ms\n(13 rows)\n\n From this perspective it looks so cost of EXISTS(subselect) is maybe too\nlow.\n\nRegards\n\nPavel\n\n\n\n\n\n\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n2018-04-17 12:52 GMT+02:00 Tomas Vondra <[email protected]>:\n\r\nOn 04/17/2018 07:17 AM, Pavel Stehule wrote:\n\r\nHi\n\r\n2018-04-16 22:42 GMT+02:00 Hackety Man <[email protected] <mailto:[email protected]>>:\n\r\n...\n\r\n>\n\r\nA support of parallel query execution is not complete -  it doesn't work in PostgreSQL 11 too. So although EXISTS variant can be faster (but can be - the worst case of EXISTS is same like COUNT), then due disabled parallel execution the COUNT(*) is faster now. It is unfortunate, because I believe so this issue will be fixed in few years.\n\n\n\r\nNone of the issues seems to be particularly related to parallel query. It's much more likely a general issue with planning EXISTS / LIMIT and non-uniform data distribution.I was wrong EXISTS are not supported. It looks like new dimension of performance issues related to parallelism. I understand so this example is worst case.postgres=# EXPLAIN (ANALYZE, BUFFERS) select exists(SELECT * FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001'));                                                      QUERY PLAN                                                      ---------------------------------------------------------------------------------------------------------------------- Result  (cost=4.08..4.09 rows=1 width=1) (actual time=423.600..423.600 rows=1 loops=1)   Buffers: shared hit=3296 read=2110   InitPlan 1 (returns $0)     ->  Seq Scan on zz_noidx1  (cost=0.00..20406.00 rows=5000 width=0) (actual time=423.595..423.595 rows=0 loops=1)           Filter: (lower(text_distinct) = 'test5000001'::text)           Rows Removed by Filter: 1000000           Buffers: shared hit=3296 read=2110 Planning Time: 0.133 ms Execution Time: 423.633 mspostgres=# EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001');                                                               QUERY PLAN                                                                ----------------------------------------------------------------------------------------------------------------------------------------- Finalize Aggregate  (cost=12661.42..12661.43 rows=1 width=8) (actual time=246.662..246.662 rows=1 loops=1)   Buffers: shared hit=817 read=549   ->  Gather  (cost=12661.21..12661.42 rows=2 width=8) (actual time=246.642..246.656 rows=3 loops=1)         Workers Planned: 2         Workers Launched: 2         Buffers: shared hit=817 read=549         ->  Partial Aggregate  (cost=11661.21..11661.22 rows=1 width=8) (actual time=242.168..242.169 rows=1 loops=3)               Buffers: shared hit=3360 read=2046               ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=242.165..242.165 rows=0 loops=3)                     Filter: (lower(text_distinct) = 'test5000001'::text)                     Rows Removed by Filter: 333333                     Buffers: shared hit=3360 read=2046 
Planning Time: 0.222 ms Execution Time: 247.927 msThe cost of EXISTS is too low to use parallelism, and value is found too late.When I decrease startup cost to 0 of parallel exec I got similar plan, similar timepostgres=# EXPLAIN (ANALYZE, BUFFERS) select exists(SELECT * FROM zz_noidx1 WHERE LOWER(text_distinct) = LOWER('Test5000001'));                                                             QUERY PLAN                                                              ------------------------------------------------------------------------------------------------------------------------------------- Result  (cost=2.43..2.44 rows=1 width=1) (actual time=246.398..246.402 rows=1 loops=1)   Buffers: shared hit=885 read=489   InitPlan 1 (returns $1)     ->  Gather  (cost=0.00..12156.00 rows=5000 width=0) (actual time=246.393..246.393 rows=0 loops=1)           Workers Planned: 2           Workers Launched: 2           Buffers: shared hit=885 read=489           ->  Parallel Seq Scan on zz_noidx1  (cost=0.00..11656.00 rows=2083 width=0) (actual time=241.067..241.067 rows=0 loops=3)                 Filter: (lower(text_distinct) = 'test5000001'::text)                 Rows Removed by Filter: 333333                 Buffers: shared hit=3552 read=1854 Planning Time: 0.138 ms Execution Time: 247.623 ms(13 rows)From this perspective it looks so cost of EXISTS(subselect) is maybe too low.RegardsPavel\n\n\r\nregards\n\r\n-- \r\nTomas Vondra                  http://www.2ndQuadrant.com\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 17 Apr 2018 18:13:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": ">>>\n>>>\n>>> Right. I was more wondering why it switched over to a generic plan, as\n>>> you've stated, like clockwork starting with the 6th execution run.\n>>>\n>>>\n>> That's a hard-coded value. The first 5 executions are re-planned using\n>> the actual parameter values, and then we try generating a generic plan and\n>> see if it's cheaper than the non-generic one. You can disable that, though.\n>\n>\n>\n> So on that note, in the planner's eyes, starting with the 6th execution,\n> it looks like the planner still thinks that the generic plan will perform\n> better than the non-generic one, which is why it keeps using the generic\n> plan from that point forward?\n>\n> Similar to the parallel scans, any chance of the planner possibly being\n> enhanced in the future to come to a better conclusion as to whether, or\n> not, the generic plan will perform better than the non-generic plan? :-)\n>\n\nall is based on estimations, and when estimations are not correct, then ..\nThe current solution is fart to perfect, but nobody goes with better ideas\n:( Statistic based planners is best available technology, unfortunately\nwith lot of gaps.\n\nThere are not any statistic where any tuple is in database, so a precious\nestimation of EXISTS is hard (impossible). Similar issue is with LIMIT. It\ncan be nice, but I don't expect any significant changes in this area -\nmaybe some tuning step by step of some parameters.\n\nRegards\n\nPavel\n\n\n>\n>\n>\n>>\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n>\n> Thanks for all the help! 
I really appreciate it!\n>\n> Ryan\n>\n>\n\n\n\n\n\nRight.  I was more wondering why it switched over to a generic plan, as you've stated, like clockwork starting with the 6th execution run.\n\n\n\nThat's a hard-coded value. The first 5 executions are re-planned using the actual parameter values, and then we try generating a generic plan and see if it's cheaper than the non-generic one. You can disable that, though.So on that note, in the planner's eyes, starting with the 6th execution, it looks like the planner still thinks that the generic plan will perform better than the non-generic one, which is why it keeps using the generic plan from that point forward?Similar to the parallel scans, any chance of the planner possibly being enhanced in the future to come to a better conclusion as to whether, or not, the generic plan will perform better than the non-generic plan?  :-)all is based on estimations, and when estimations are not correct, then .. The current solution is fart to perfect, but nobody goes with better ideas :( Statistic based planners is best available technology, unfortunately with lot of gaps.There are not any statistic where any tuple is in database, so a precious estimation of EXISTS is hard (impossible). Similar issue is with LIMIT. It can be nice, but I don't expect any significant changes in this area - maybe some tuning step by step of some parameters. RegardsPavel   \n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & ServicesThanks for all the help!  I really appreciate it!Ryan", "msg_date": "Tue, 17 Apr 2018 18:41:50 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "On 04/17/2018 05:43 PM, Hackety Man wrote:\n> \n> \n> On Tue, Apr 17, 2018 at 10:23 AM, Tomas Vondra\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> \n> \n> On 04/17/2018 04:01 PM, Hackety Man wrote:\n> \n> ...\n> Right.  I was more wondering why it switched over to a generic\n> plan, as you've stated, like clockwork starting with the 6th\n> execution run.\n> \n> \n> That's a hard-coded value. The first 5 executions are re-planned\n> using the actual parameter values, and then we try generating a\n> generic plan and see if it's cheaper than the non-generic one. You\n> can disable that, though.\n> \n> \n> \n> So on that note, in the planner's eyes, starting with the 6th execution,\n> it looks like the planner still thinks that the generic plan will\n> perform better than the non-generic one, which is why it keeps using the\n> generic plan from that point forward?\n> \n\nYes. The point of prepared statements (which also applies to plpgsql, as\nit uses prepared statements automatically) is to eliminate the planning\noverhead. So we try planning it with actual parameter values for the\nfirst 5 plans, and then compare it to the generic plan.\n\n> Similar to the parallel scans, any chance of the planner possibly being\n> enhanced in the future to come to a better conclusion as to whether, or\n> not, the generic plan will perform better than the non-generic plan?  
:-)\n\nThere's always hope, but it's hard to say if/when an enhancement will\nhappen, unfortunately.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 17 Apr 2018 22:29:16 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "\n\nOn 04/17/2018 04:05 PM, Hackety Man wrote:\n> Hi Pavel,\n> \n> Thanks for sharing that information.  I was not aware that the parallel\n> query functionality was not yet fully implemented.\n> \n\nNothing is ever \"fully implemented\". There are always gaps and possible\nimprovements ;-)\n\nThat being said, parallelism opens an entirely new dimension of possible\nplans and planning issues.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Tue, 17 Apr 2018 22:30:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexplainable execution time difference between two test\n functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF\n EXISTS (SELECT 1 FROM...)" }, { "msg_contents": "Apology for sending you emails directly but I do see you guys responding on email related to performance so thought of copying you folks.\nFolks, I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.\n\nPostgreSQL: Documentation: 9.6: citext\n\n\"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. 
It is, however, slightly more efficient than using lower to get case-insensitive matching.\"\n\n\nHere is what I have done \ndrop table test;drop table testci;\nCREATE TABLE test (id INTEGER PRIMARY KEY,name character varying(254));CREATE TABLE testci (id INTEGER PRIMARY KEY,name citext\n);\nINSERT INTO test(id, name)SELECT generate_series(1000001,2000000), (md5(random()::text));\nINSERT INTO testci(id, name)SELECT generate_series(1,1000000), (md5(random()::text));\n\nNow, I have done sequential search\nexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 0.00    Total Cost: 23334.00    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.016    Actual Total Time: 680.199    Actual Rows: 1    Actual Loops: 1    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Filter: 999999  Planning Time: 0.045  Triggers:   Execution Time: 680.213\n\nexplain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';\n- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.00    Total Cost: 20834.00    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.017    Actual Total Time: 1184.485    Actual Rows: 1    Actual Loops: 1    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Filter: 999999  Planning Time: 0.029  Triggers:   Execution Time: 1184.496\n\n\nYou can see sequential searches with lower working twice as fast as citext.\nNow I added index on citext and equivalent functional index (lower) on text.\n\nCREATE INDEX textlowerindex ON test (lower(name));\ncreate index textindex on test(name);\n\n\nIndex creation took longer with citext v/s creating lower functional index.\n\nNow here comes execution with indexes\nexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');\n\n- Plan:     Node Type: \"Bitmap Heap Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 187.18    Total Cost: 7809.06    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.020    Actual Total Time: 0.020    Actual Rows: 1    Actual Loops: 1    Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Index Recheck: 0    Exact Heap Blocks: 1    Lossy Heap Blocks: 0    Plans:       - Node Type: \"Bitmap Index Scan\"        Parent Relationship: \"Outer\"        Parallel Aware: false        Index Name: \"textlowerindex\"        Startup Cost: 0.00        Total Cost: 185.93        Plan Rows: 5000        Plan Width: 0        Actual Startup Time: 0.016        Actual Total Time: 0.016        Actual Rows: 1        Actual Loops: 1        Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"  Planning Time: 0.051  Triggers:   Execution Time: 0.035\n\n\n\nexplain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad'; \n\n- Plan:     Node Type: \"Index Scan\"    Parallel Aware: false    Scan Direction: \"Forward\"    Index Name: \"citextindex\"    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.42    Total Cost: 8.44    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.049    Actual Total Time: 0.050    Actual Rows: 1    Actual 
Loops: 1    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Index Recheck: 0  Planning Time: 0.051  Triggers:   Execution Time: 0.064\n\n\n\nApology for sending you emails directly but I do see you guys responding on email related to performance so thought of copying you folks.Folks, I read following (PostgreSQL: Documentation: 9.6: citext) and it does not hold true in my testing.. i.e citext is not performing better than lower.Am I missing something? help is appreciated.PostgreSQL: Documentation: 9.6: citext\"citext is not as efficient as text because the operator functions and the B-tree comparison functions must make copies of the data and convert it to lower case for comparisons. It is, however, slightly more efficient than using lower to get case-insensitive matching.\"Here is what I have done drop table test;drop table testci;CREATE TABLE test (id INTEGER PRIMARY KEY,name character varying(254));CREATE TABLE testci (id INTEGER PRIMARY KEY,name citext);INSERT INTO test(id, name)SELECT generate_series(1000001,2000000), (md5(random()::text));INSERT INTO testci(id, name)SELECT generate_series(1,1000000), (md5(random()::text));Now, I have done sequential searchexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 0.00    Total Cost: 23334.00    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.016    Actual Total Time: 680.199    Actual Rows: 1    Actual Loops: 1    Filter: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Filter: 999999  Planning Time: 0.045  Triggers:   Execution Time: 680.213explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad';- Plan:     Node Type: \"Seq Scan\"    Parallel Aware: false    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.00    Total Cost: 20834.00    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.017    Actual Total Time: 1184.485    Actual Rows: 1    Actual Loops: 1    Filter: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Filter: 999999  Planning Time: 0.029  Triggers:   Execution Time: 1184.496You can see sequential searches with lower working twice as fast as citext.Now I added index on citext and equivalent functional index (lower) on text.CREATE INDEX textlowerindex ON test (lower(name));create index textindex on test(name);Index creation took longer with citext v/s creating lower functional index.Now here comes execution with indexesexplain (analyze on, format yaml) select * from test where lower(name)=lower('f6d7d5be1d0bed1cca11540d3a2667de');- Plan:     Node Type: \"Bitmap Heap Scan\"    Parallel Aware: false    Relation Name: \"test\"    Alias: \"test\"    Startup Cost: 187.18    Total Cost: 7809.06    Plan Rows: 5000    Plan Width: 37    Actual Startup Time: 0.020    Actual Total Time: 0.020    Actual Rows: 1    Actual Loops: 1    Recheck Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"    Rows Removed by Index Recheck: 0    Exact Heap Blocks: 1    Lossy Heap Blocks: 0    Plans:       - Node Type: \"Bitmap Index Scan\"        Parent Relationship: \"Outer\"        Parallel Aware: false        Index Name: \"textlowerindex\"        Startup Cost: 0.00        Total Cost: 185.93        Plan Rows: 5000        Plan Width: 0        Actual Startup Time: 0.016  
      Actual Total Time: 0.016        Actual Rows: 1        Actual Loops: 1        Index Cond: \"(lower((name)::text) = 'f6d7d5be1d0bed1cca11540d3a2667de'::text)\"  Planning Time: 0.051  Triggers:   Execution Time: 0.035explain (analyze on, format yaml) select * from testci where name='956d692092f0b9f85f36bf2b2501f3ad'; - Plan:     Node Type: \"Index Scan\"    Parallel Aware: false    Scan Direction: \"Forward\"    Index Name: \"citextindex\"    Relation Name: \"testci\"    Alias: \"testci\"    Startup Cost: 0.42    Total Cost: 8.44    Plan Rows: 1    Plan Width: 37    Actual Startup Time: 0.049    Actual Total Time: 0.050    Actual Rows: 1    Actual Loops: 1    Index Cond: \"(name = '956d692092f0b9f85f36bf2b2501f3ad'::citext)\"    Rows Removed by Index Recheck: 0  Planning Time: 0.051  Triggers:   Execution Time: 0.064", "msg_date": "Tue, 17 Apr 2018 21:05:54 +0000 (UTC)", "msg_from": "Deepak Somaiya <[email protected]>", "msg_from_op": false, "msg_subject": "Citext Performance" } ]
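For readers who want to try the citext-versus-lower() comparison themselves, here is a minimal, self-contained sketch run from a shell with psql. It assumes a throwaway database you can create objects in and that the citext extension is installable; the database, table and index names are placeholders chosen for this example (not the ones from the post above), and absolute timings will depend on hardware and settings.

#!/bin/sh
# Sketch: compare citext equality against an expression index on lower(text).
DB=scratch          # placeholder database name (assumption, not from the post)

psql -d "$DB" <<'SQL'
CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE t_text   (id integer PRIMARY KEY, name varchar(254));
CREATE TABLE t_citext (id integer PRIMARY KEY, name citext);

INSERT INTO t_text   SELECT g, md5(random()::text) FROM generate_series(1, 1000000) g;
INSERT INTO t_citext SELECT g, md5(random()::text) FROM generate_series(1, 1000000) g;

-- expression index for the text column, plain btree index for the citext column
CREATE INDEX t_text_lower_idx ON t_text (lower(name));
CREATE INDEX t_citext_idx     ON t_citext (name);
ANALYZE t_text;
ANALYZE t_citext;

-- both lookups should now use their index; citext lower-cases both sides internally
EXPLAIN ANALYZE SELECT * FROM t_text   WHERE lower(name) = lower('F6D7D5BE1D0BED1CCA11540D3A2667DE');
EXPLAIN ANALYZE SELECT * FROM t_citext WHERE name = 'F6D7D5BE1D0BED1CCA11540D3A2667DE';
SQL

With the indexes in place the two plans should land in the same ballpark; the large sequential-scan gap seen above mostly reflects the per-row lower-casing that citext comparisons perform.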
[ { "msg_contents": "Hi Team,\n\nCould anyone help me to solve the below issue. I am installing PostgreSQL 9.5 in centos 6 using YUM\n\n\n[root@VM-02 PostgreSQL]# yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm\n\nLoaded plugins: fastestmirror, refresh-packagekit, security\nLoading mirror speeds from cached hostfile\n* base: mirror.nbrc.ac.in\n* extras: mirror.nbrc.ac.in\n* updates: mirror.nbrc.ac.in\nbase/primary_db | 41 kB 00:00\nhttp://mirror.nbrc.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 43 kB 00:00\nhttp://centos.excellmedia.net/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 45 kB 00:00\nhttp://mirror.dhakacom.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 47 kB 00:00\nhttp://mirror.xeonbd.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 49 kB 00:00\nhttp://mirror.vbctv.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 51 kB 00:00\nhttp://mirror.digistar.vn/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 53 kB 00:00\nhttp://centos.myfahim.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 54 kB 00:00\nhttp://ftp.iitm.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 56 kB 00:00\nhttp://centos.mirror.net.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db | 58 kB 00:00\nhttp://del-mirrors.extreme-ix.org/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nError: failure: repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2 from base: [Errno 256] No more mirrors to try.\n\nRegards,\nDinesh Chandra\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi Team,\n \nCould anyone help me to solve the below issue. I am installing PostgreSQL 9.5 in centos 6 using YUM\n \n \n[root@VM-02 PostgreSQL]# yum install\n\nhttps://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm\n \nLoaded plugins: fastestmirror, refresh-packagekit, security\nLoading mirror speeds from cached hostfile\n* base: mirror.nbrc.ac.in\n* extras: mirror.nbrc.ac.in\n* updates: mirror.nbrc.ac.in\nbase/primary_db                                                                                                                       |  41 kB     00:00\nhttp://mirror.nbrc.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match\n checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  43 kB     00:00\nhttp://centos.excellmedia.net/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  45 kB     00:00\nhttp://mirror.dhakacom.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  47 kB     00:00\nhttp://mirror.xeonbd.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  49 kB     00:00\nhttp://mirror.vbctv.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  51 kB     00:00\nhttp://mirror.digistar.vn/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  53 kB     00:00\nhttp://centos.myfahim.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  54 kB     00:00\nhttp://ftp.iitm.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\nTrying other 
mirror.\nbase/primary_db                                                                                                                       |  56 kB     00:00\nhttp://centos.mirror.net.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  58 kB     00:00\nhttp://del-mirrors.extreme-ix.org/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nError: failure: repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2 from base:\n[Errno 256] No more mirrors to try.\n \nRegards,\nDinesh Chandra\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.", "msg_date": "Tue, 17 Apr 2018 09:25:05 +0000", "msg_from": "Dinesh Chandra 12108 <[email protected]>", "msg_from_op": true, "msg_subject": "Installing PostgreSQL 9.5 in centos 6 using YUM" }, { "msg_contents": "Hi,\r\n\r\ncould you wget it and try installing instead ? or could you try an rpm that matches with your exact CentoOS version from here.\r\n\r\nhttps://download.postgresql.org/pub/repos/yum/9.5/redhat/\r\n\r\n\r\nBest Regards,\r\n\r\nNawaz Ahmed\r\nSoftware Development Engineer\r\n\r\nFujitsu Australia Software Technology Pty Ltd\r\n14 Rodborough Road, Frenchs Forest NSW 2086, Australia\r\nT +61 2 9452 9027\r\[email protected]<mailto:[email protected]>\r\nfastware.com.au<http://fastware.com.au/>\r\n\r\n\r\n\r\nFrom: Dinesh Chandra 12108 [mailto:[email protected]]\r\nSent: Tuesday, 17 April 2018 7:25 PM\r\nTo: [email protected]\r\nCc: [email protected]\r\nSubject: Installing PostgreSQL 9.5 in centos 6 using YUM\r\n\r\nHi Team,\r\n\r\nCould anyone help me to solve the below issue. 
I am installing PostgreSQL 9.5 in centos 6 using YUM\r\n\r\n\r\n[root@VM-02 PostgreSQL]# yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm\r\n\r\nLoaded plugins: fastestmirror, refresh-packagekit, security\r\nLoading mirror speeds from cached hostfile\r\n* base: mirror.nbrc.ac.in\r\n* extras: mirror.nbrc.ac.in\r\n* updates: mirror.nbrc.ac.in\r\nbase/primary_db | 41 kB 00:00\r\nhttp://mirror.nbrc.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 43 kB 00:00\r\nhttp://centos.excellmedia.net/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 45 kB 00:00\r\nhttp://mirror.dhakacom.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 47 kB 00:00\r\nhttp://mirror.xeonbd.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 49 kB 00:00\r\nhttp://mirror.vbctv.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 51 kB 00:00\r\nhttp://mirror.digistar.vn/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 53 kB 00:00\r\nhttp://centos.myfahim.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 54 kB 00:00\r\nhttp://ftp.iitm.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 56 kB 00:00\r\nhttp://centos.mirror.net.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nbase/primary_db | 58 kB 00:00\r\nhttp://del-mirrors.extreme-ix.org/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2: [Errno -1] Metadata file does not match checksum\r\nTrying other mirror.\r\nError: failure: repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2 from base: [Errno 256] No more mirrors to try.\r\n\r\nRegards,\r\nDinesh Chandra\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\nDisclaimer\r\n\r\nThe information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document and all copies thereof.\r\n\r\n\r\nWhereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.\r\n\r\n\r\nIf you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email [email protected]\r\n\n\n\n\n\n\n\n\n\nHi,\n \ncould you wget it and try installing instead ? or could you try an rpm that matches with your exact CentoOS version from here.\n \nhttps://download.postgresql.org/pub/repos/yum/9.5/redhat/\n \n\n \nBest Regards,\n \nNawaz Ahmed\r\nSoftware Development Engineer\n\r\nFujitsu Australia Software Technology Pty Ltd\r\n14 Rodborough Road, Frenchs Forest NSW 2086, Australia\nT +61 2 9452 9027 \[email protected]\nfastware.com.au\n\n\n\n\n \n\n\nFrom: Dinesh Chandra 12108 [mailto:[email protected]]\r\n\nSent: Tuesday, 17 April 2018 7:25 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Installing PostgreSQL 9.5 in centos 6 using YUM\n\n\n \nHi Team,\n \nCould anyone help me to solve the below issue. 
I am installing PostgreSQL 9.5 in centos 6 using YUM\n \n \n[root@VM-02 PostgreSQL]# yum install\r\n\r\nhttps://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm\n \nLoaded plugins: fastestmirror, refresh-packagekit, security\nLoading mirror speeds from cached hostfile\n* base: mirror.nbrc.ac.in\n* extras: mirror.nbrc.ac.in\n* updates: mirror.nbrc.ac.in\nbase/primary_db                                                                                                                       |  41 kB     00:00\nhttp://mirror.nbrc.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  43 kB     00:00\nhttp://centos.excellmedia.net/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  45 kB     00:00\nhttp://mirror.dhakacom.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match\r\nchecksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  47 kB     00:00\nhttp://mirror.xeonbd.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match\r\nchecksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  49 kB     00:00\nhttp://mirror.vbctv.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  51 kB     00:00\nhttp://mirror.digistar.vn/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  53 kB     00:00\nhttp://centos.myfahim.com/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match\r\nchecksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  54 kB     00:00\nhttp://ftp.iitm.ac.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n [Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  56 kB     
00:00\nhttp://centos.mirror.net.in/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match checksum\nTrying other mirror.\nbase/primary_db                                                                                                                       |  58 kB     00:00\nhttp://del-mirrors.extreme-ix.org/centos/6.9/os/x86_64/repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2:\r\n[Errno -1] Metadata file does not match\r\nchecksum\nTrying other mirror.\nError: failure: repodata/5d14ebd60604f4433dcc8a3a17cd3bbc7b80ec5dff74cbcc50dab6e711959265-primary.sqlite.bz2 from base:\r\n[Errno 256] No more mirrors to try.\n \nRegards,\nDinesh Chandra\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\nDisclaimer\nThe information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified\r\n that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000 or by reply e-mail to the sender and delete the document\r\n and all copies thereof.\n\nWhereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu\r\n Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication\r\n or any files attached.\n\nIf you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email [email protected]", "msg_date": "Wed, 18 Apr 2018 07:47:25 +0000", "msg_from": "\"Ahmed, Nawaz\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Installing PostgreSQL 9.5 in centos 6 using YUM" } ]
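Following on from the suggestion above, a hedged sketch of the usual way out of this situation on CentOS 6: the checksum errors come from the CentOS base mirrors' cached metadata, so clearing the yum cache normally resolves them, and the PGDG repository RPM can also be fetched with wget and installed locally instead of being pulled over the network by yum. The package and service names below are the standard PGDG 9.5 ones for EL6 and may need adjusting for your system.

# run as root
yum clean all && yum makecache            # drop stale repodata, then rebuild the cache
wget https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm
yum localinstall pgdg-centos95-9.5-3.noarch.rpm   # or: rpm -Uvh pgdg-centos95-9.5-3.noarch.rpm
yum install postgresql95-server postgresql95-contrib
service postgresql-9.5 initdb             # EL6 init script: create the data directory
chkconfig postgresql-9.5 on
service postgresql-9.5 start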
[ { "msg_contents": "Hi all,\nI need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent \nresponse.\nInstalled both version and stopped it. Do i need to run both version or \nonly one 8.4 or 9.4 . Both should run on 50432 ?\n\n\n-bash-4.2$ id\nuid=26(postgres) gid=26(postgres) groups=26(postgres) \ncontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n\n-bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data -- \n8.4 data\n-bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4 \n -- 9.4 data\n\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n\nconnection to database failed: could not connect to server: No such file \nor directory\n Is the server running locally and accepting\n connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.50432\"?\n\n\ncould not connect to old postmaster started with the command:\n\"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start\nFailure, exiting\n\n\n\n\nWith Best Regards\nAkshay\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHi all,\nI need help on pg_upgrade from 8.4 to 9.4\nversion. Appreciate urgent response.\nInstalled both version and stopped it.\nDo i need to run both version or only one 8.4 or 9.4 . Both should run\non 50432 ?\n\n\n-bash-4.2$ id\nuid=26(postgres) gid=26(postgres) groups=26(postgres)\ncontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n\n-bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n                     \n     -- 8.4 data\n-bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n                  -- 9.4 data\n\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n--old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n\nconnection to database failed: could\nnot connect to server: No such file or directory\n        Is the server\nrunning locally and accepting\n        connections\non Unix domain socket \"/var/run/postgresql/.s.PGSQL.50432\"?\n\n\ncould not connect to old postmaster\nstarted with the command:\n\"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432\n-c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c unix_socket_permissions=0700\" start\nFailure, exiting\n\n\n\n\nWith Best Regards\nAkshay\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. 
If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Wed, 18 Apr 2018 13:00:38 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrade help" }, { "msg_contents": "Hi,\n\nplease avoid crossposting to multiple mailing lists.\n\n\nYou need to run both versions of the database, the old and the new.\n\nThey need to run on different ports (note that it is impossible to run 2\ndifferent processes on the same port, that's not a postgresql thing)\n\n\n\nOn 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n> Hi all,\n> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n> response.\n> Installed both version and stopped it. Do i need to run both version or\n> only one 8.4 or 9.4 . Both should run on 50432 ?\n> \n> \n> -bash-4.2$ id\n> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n> \n> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data � �\n> � � � � � � � � � � � �-- 8.4 data\n> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n> � � � � � � � � � -- 9.4 data\n> \n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> *connection to database failed: could not connect to server: No such\n> file or directory*\n> � � � � Is the server running locally and accepting\n> � � � � connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000 �-c listen_addresses='' -c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> \n> \n> With Best Regards\n> Akshay\n> \n> =====-----=====-----=====\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain\n> confidential or privileged information. If you are\n> not the intended recipient, any dissemination, use,\n> review, distribution, printing or copying of the\n> information contained in this e-mail message\n> and/or attachments to it are strictly prohibited. If\n> you have received this communication in error,\n> please notify us by reply e-mail or telephone and\n> immediately and permanently delete the message\n> and any attachments. 
Thank you\n> \n\n", "msg_date": "Wed, 18 Apr 2018 09:35:52 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Thanks Fabio for instant reply.\n\nI now started 8.4 with 50432 and 9.4 with default port but still its \nfailing ...Can you please suggest what is wrong ?\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n\n*failure*\nConsult the last few lines of \"pg_upgrade_server.log\" for\nthe probable cause of the failure.\n\nThere seems to be a postmaster servicing the old cluster.\nPlease shutdown that postmaster and try again.\nFailure, exiting\n-bash-4.2$ ps -eaf | grep postgres\nroot 8646 9365 0 08:07 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 08:07 pts/1 00:00:00 -bash\npostgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p 50432 \n-D /var/ericsson/esm-data/postgresql-data/\npostgres 9779 9778 0 09:17 ? 00:00:00 postgres: logger process\npostgres 9781 9778 0 09:17 ? 00:00:00 postgres: writer process\npostgres 9782 9778 0 09:17 ? 00:00:00 postgres: wal writer \nprocess\npostgres 9783 9778 0 09:17 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 9784 9778 0 09:17 ? 00:00:00 postgres: stats collector \nprocess\npostgres 9900 1 0 09:20 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\npostgres 9901 9900 0 09:20 ? 00:00:00 postgres: logger process\npostgres 9903 9900 0 09:20 ? 00:00:00 postgres: checkpointer \nprocess\npostgres 9904 9900 0 09:20 ? 00:00:00 postgres: writer process\npostgres 9905 9900 0 09:20 ? 00:00:00 postgres: wal writer \nprocess\npostgres 9906 9900 0 09:20 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 9907 9900 0 09:20 ? 
00:00:00 postgres: stats collector \nprocess\npostgres 9926 8647 0 09:21 pts/1 00:00:00 ps -eaf\npostgres 9927 8647 0 09:21 pts/1 00:00:00 grep --color=auto postgres\n\n\n-bash-4.2$ netstat -antp | grep 50432\n(Not all processes could be identified, non-owned process info\n will not be shown, you would have to be root to see it all.)\ntcp 0 0 127.0.0.1:50432 0.0.0.0:* LISTEN \n 9778/postgres\ntcp6 0 0 ::1:50432 :::* LISTEN \n 9778/postgres\n-bash-4.2$ netstat -antp | grep 5432\n(Not all processes could be identified, non-owned process info\n will not be shown, you would have to be root to see it all.)\ntcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN \n 9900/postgres\ntcp6 0 0 ::1:5432 :::* LISTEN \n 9900/postgres\n\n-----------------------------------------------------------------\n pg_upgrade run on Wed Apr 18 09:24:47 2018\n-----------------------------------------------------------------\n\ncommand: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\npg_ctl: another server might be running; trying to start server anyway\nFATAL: lock file \"postmaster.pid\" already exists\nHINT: Is another postmaster (PID 9778) running in data directory \n\"/var/ericsson/esm-data/postgresql-data\"?\npg_ctl: could not start server\nExamine the log output.\n\n\n[root@ms-esmon /]# cat \n./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n9900\n/var/ericsson/esm-data/postgresql-data-9.4\n1524039630\n5432\n/var/run/postgresql\nlocalhost\n 5432001 2031616\n \n \n[root@ms-esmon /]# cat \n./var/ericsson/esm-data/postgresql-data/postmaster.pid\n9778\n/var/ericsson/esm-data/postgresql-data\n 50432001 1998850\n\n\n\n\nWith Best Regards\nAkshay\n\n\n\n\n\nFrom: Fabio Pardi <[email protected]>\nTo: Akshay Ballarpure <[email protected]>, \[email protected]\nDate: 04/18/2018 01:06 PM\nSubject: Re: pg_upgrade help\n\n\n\nHi,\n\nplease avoid crossposting to multiple mailing lists.\n\n\nYou need to run both versions of the database, the old and the new.\n\nThey need to run on different ports (note that it is impossible to run 2\ndifferent processes on the same port, that's not a postgresql thing)\n\n\n\nOn 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n> Hi all,\n> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n> response.\n> Installed both version and stopped it. Do i need to run both version or\n> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n> \n> \n> -bash-4.2$ id\n> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n> \n> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data \n> -- 8.4 data\n> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n> -- 9.4 data\n> \n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> *connection to database failed: could not connect to server: No such\n> file or directory*\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> \n> \n> With Best Regards\n> Akshay\n> \n> =====-----=====-----=====\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain\n> confidential or privileged information. If you are\n> not the intended recipient, any dissemination, use,\n> review, distribution, printing or copying of the\n> information contained in this e-mail message\n> and/or attachments to it are strictly prohibited. If\n> you have received this communication in error,\n> please notify us by reply e-mail or telephone and\n> immediately and permanently delete the message\n> and any attachments. Thank you\n> \n\n\nThanks Fabio for instant reply.\n\nI now started 8.4 with 50432 and 9.4 with\ndefault port but still its failing ...Can you please suggest what is wrong\n?\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n--old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n\n*failure*\nConsult the last few lines of \"pg_upgrade_server.log\"\nfor\nthe probable cause of the failure.\n\nThere seems to be a postmaster servicing\nthe old cluster.\nPlease shutdown that postmaster and try again.\nFailure, exiting\n-bash-4.2$ ps -eaf | grep postgres\nroot      8646  9365\n 0 08:07 pts/1    00:00:00 su - postgres\npostgres  8647  8646  0 08:07\npts/1    00:00:00 -bash\npostgres  9778     1  0\n09:17 ?        00:00:00 /usr/bin/postgres -p 50432\n-D /var/ericsson/esm-data/postgresql-data/\npostgres  9779  9778  0 09:17\n?        00:00:00 postgres: logger process\npostgres  9781  9778  0 09:17\n?        00:00:00 postgres: writer process\npostgres  9782  9778  0 09:17\n?        00:00:00 postgres: wal writer process\npostgres  9783  9778  0 09:17\n?        00:00:00 postgres: autovacuum launcher process\npostgres  9784  9778  0 09:17\n?        00:00:00 postgres: stats collector process\npostgres  9900     1  0\n09:20 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data-9.4/\npostgres  9901  9900  0 09:20\n?        00:00:00 postgres: logger process\npostgres  9903  9900  0 09:20\n?        00:00:00 postgres: checkpointer process\npostgres  9904  9900  0 09:20\n?        00:00:00 postgres: writer process\npostgres  9905  9900  0 09:20\n?        00:00:00 postgres: wal writer process\npostgres  9906  9900  0 09:20\n?        
00:00:00 postgres: autovacuum launcher process\npostgres  9907  9900  0 09:20\n?        00:00:00 postgres: stats collector process\npostgres  9926  8647  0 09:21\npts/1    00:00:00 ps -eaf\npostgres  9927  8647  0 09:21\npts/1    00:00:00 grep --color=auto postgres\n\n\n-bash-4.2$ netstat -antp | grep 50432\n(Not all processes could be identified,\nnon-owned process info\n will not be shown, you would have\nto be root to see it all.)\ntcp        0  \n   0 127.0.0.1:50432         0.0.0.0:*  \n            LISTEN      9778/postgres\ntcp6       0    \n 0 ::1:50432               :::*\n                   LISTEN\n     9778/postgres\n-bash-4.2$ netstat -antp | grep 5432\n(Not all processes could be identified,\nnon-owned process info\n will not be shown, you would have\nto be root to see it all.)\ntcp        0  \n   0 127.0.0.1:5432          0.0.0.0:*\n              LISTEN      9900/postgres\ntcp6       0    \n 0 ::1:5432                :::*\n                   LISTEN\n     9900/postgres\n\n-----------------------------------------------------------------\n  pg_upgrade run on Wed Apr 18\n09:24:47 2018\n-----------------------------------------------------------------\n\ncommand: \"/usr/bin/pg_ctl\"\n-w -l \"pg_upgrade_server.log\" -D \"/var/ericsson/esm-data/postgresql-data\"\n-o \"-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000\n -c listen_addresses='' -c unix_socket_permissions=0700\" start\n>> \"pg_upgrade_server.log\" 2>&1\npg_ctl: another server might be running;\ntrying to start server anyway\nFATAL:  lock file \"postmaster.pid\"\nalready exists\nHINT:  Is another postmaster (PID\n9778) running in data directory \"/var/ericsson/esm-data/postgresql-data\"?\npg_ctl: could not start server\nExamine the log output.\n\n\n[root@ms-esmon /]# cat ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n9900\n/var/ericsson/esm-data/postgresql-data-9.4\n1524039630\n5432\n/var/run/postgresql\nlocalhost\n  5432001   2031616\n  \n  \n[root@ms-esmon /]# cat ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n9778\n/var/ericsson/esm-data/postgresql-data\n 50432001   1998850\n\n\n\n\nWith Best Regards\nAkshay\n\n\n\n\n\nFrom:      \n Fabio Pardi <[email protected]>\nTo:      \n Akshay Ballarpure <[email protected]>,\[email protected]\nDate:      \n 04/18/2018 01:06 PM\nSubject:    \n   Re: pg_upgrade\nhelp\n\n\n\n\nHi,\n\nplease avoid crossposting to multiple mailing lists.\n\n\nYou need to run both versions of the database, the old and the new.\n\nThey need to run on different ports (note that it is impossible to run\n2\ndifferent processes on the same port, that's not a postgresql thing)\n\n\n\nOn 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n> Hi all,\n> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n> response.\n> Installed both version and stopped it. Do i need to run both version\nor\n> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n> \n> \n> -bash-4.2$ id\n> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n> \n> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n   \n>                    \n   -- 8.4 data\n> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>                   --\n9.4 data\n> \n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> *connection to database failed: could not connect to server: No such\n> file or directory*\n>         Is the server running locally and accepting\n>         connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432\n-c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> \n> \n> With Best Regards\n> Akshay\n> \n> =====-----=====-----=====\n> Notice: The information contained in this e-mail\n> message and/or attachments to it may contain\n> confidential or privileged information. If you are\n> not the intended recipient, any dissemination, use,\n> review, distribution, printing or copying of the\n> information contained in this e-mail message\n> and/or attachments to it are strictly prohibited. If\n> you have received this communication in error,\n> please notify us by reply e-mail or telephone and\n> immediately and permanently delete the message\n> and any attachments. Thank you\n>", "msg_date": "Wed, 18 Apr 2018 14:04:30 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi\nBoth version should be correctly stopped. pg_upgrade started clusters itself.\nPlease check pg_upgrade_server.log file in directory where pg_upgrade was run.\nAlso where is postgresql.conf? In PGDATA? Otherwise you need tell pg_upgrade correct path, for example with options '-o \" -c config_file=/etc/postgresql/8.4/main/postgresql.conf\" -O \" -c config_file=/etc/postgresql/9.4/main/postgresql.conf\"'\n\nregards, Sergei\n\n", "msg_date": "Wed, 18 Apr 2018 11:39:56 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi,\n\ni was too fast in reply (and perhaps i should drink my morning coffee\nbefore replying), I will try to be more detailed:\n\nboth servers should be able to run at the moment you run pg_upgrade,\nthat means the 2 servers should have been correctly stopped in advance,\nshould have their configuration files, and new cluster initialized too.\n\nThen, as Sergei highlights here below, pg_upgrade will take care of the\nupgrade process, starting the servers.\n\n\nHere there is a step by step guide, i considered my best ally when it\nwas time to upgrade:\n\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\n\nnote point 7:\n\n'stop both servers'\n\n\nAbout the port the servers will run on, at point 9 there is some\nclarification:\n\n' pg_upgrade defaults to running servers on port 50432 to avoid\nunintended client connections. 
You can use the same port number for both\nclusters when doing an upgrade because the old and new clusters will not\nbe running at the same time. However, when checking an old running\nserver, the old and new port numbers must be different.'\n\nHope it helps,\n\nFabio Pardi\n\n\nOn 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n> Thanks Fabio for instant reply.\n> \n> I now started 8.4 with 50432 and 9.4 with default port but still its\n> failing ...Can you please suggest what is wrong ?\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> *failure*\n> Consult the last few lines of \"pg_upgrade_server.log\" for\n> the probable cause of the failure.\n> \n> There seems to be a postmaster servicing the old cluster.\n> Please shutdown that postmaster and try again.\n> Failure, exiting\n> -bash-4.2$ ps -eaf | grep postgres\n> root      8646  9365  0 08:07 pts/1    00:00:00 su - postgres\n> postgres  8647  8646  0 08:07 pts/1    00:00:00 -bash\n> postgres  9778     1  0 09:17 ?        00:00:00 /usr/bin/postgres -p\n> 50432 -D /var/ericsson/esm-data/postgresql-data/\n> postgres  9779  9778  0 09:17 ?        00:00:00 postgres: logger process\n> postgres  9781  9778  0 09:17 ?        00:00:00 postgres: writer process\n> postgres  9782  9778  0 09:17 ?        00:00:00 postgres: wal writer\n> process\n> postgres  9783  9778  0 09:17 ?        00:00:00 postgres: autovacuum\n> launcher process\n> postgres  9784  9778  0 09:17 ?        00:00:00 postgres: stats\n> collector process\n> postgres  9900     1  0 09:20 ?        00:00:00\n> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n> /var/ericsson/esm-data/postgresql-data-9.4/\n> postgres  9901  9900  0 09:20 ?        00:00:00 postgres: logger process\n> postgres  9903  9900  0 09:20 ?        00:00:00 postgres: checkpointer\n> process\n> postgres  9904  9900  0 09:20 ?        00:00:00 postgres: writer process\n> postgres  9905  9900  0 09:20 ?        00:00:00 postgres: wal writer\n> process\n> postgres  9906  9900  0 09:20 ?        00:00:00 postgres: autovacuum\n> launcher process\n> postgres  9907  9900  0 09:20 ? 
       00:00:00 postgres: stats\n> collector process\n> postgres  9926  8647  0 09:21 pts/1    00:00:00 ps -eaf\n> postgres  9927  8647  0 09:21 pts/1    00:00:00 grep --color=auto postgres\n> \n> \n> -bash-4.2$ netstat -antp | grep 50432\n> (Not all processes could be identified, non-owned process info\n>  will not be shown, you would have to be root to see it all.)\n> tcp        0      0 127.0.0.1:50432         0.0.0.0:*              \n> LISTEN      9778/postgres\n> tcp6       0      0 ::1:50432               :::*                  \n>  LISTEN      9778/postgres\n> -bash-4.2$ netstat -antp | grep 5432\n> (Not all processes could be identified, non-owned process info\n>  will not be shown, you would have to be root to see it all.)\n> tcp        0      0 127.0.0.1:5432          0.0.0.0:*              \n> LISTEN      9900/postgres\n> tcp6       0      0 ::1:5432                :::*                  \n>  LISTEN      9900/postgres\n> \n> -----------------------------------------------------------------\n>   pg_upgrade run on Wed Apr 18 09:24:47 2018\n> -----------------------------------------------------------------\n> \n> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\n> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n> pg_ctl: another server might be running; trying to start server anyway\n> FATAL:  lock file \"postmaster.pid\" already exists\n> HINT:  Is another postmaster (PID 9778) running in data directory\n> \"/var/ericsson/esm-data/postgresql-data\"?\n> pg_ctl: could not start server\n> Examine the log output.\n> \n> \n> [root@ms-esmon /]# cat\n> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n> 9900\n> /var/ericsson/esm-data/postgresql-data-9.4\n> 1524039630\n> 5432\n> /var/run/postgresql\n> localhost\n>   5432001   2031616\n>  \n>  \n> [root@ms-esmon /]# cat\n> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n> 9778\n> /var/ericsson/esm-data/postgresql-data\n>  50432001   1998850\n> \n> \n> \n> \n> With Best Regards\n> Akshay\n> \n> \n> \n> \n> \n> From:        Fabio Pardi <[email protected]>\n> To:        Akshay Ballarpure <[email protected]>,\n> [email protected]\n> Date:        04/18/2018 01:06 PM\n> Subject:        Re: pg_upgrade help\n> ------------------------------------------------------------------------\n> \n> \n> \n> Hi,\n> \n> please avoid crossposting to multiple mailing lists.\n> \n> \n> You need to run both versions of the database, the old and the new.\n> \n> They need to run on different ports (note that it is impossible to run 2\n> different processes on the same port, that's not a postgresql thing)\n> \n> \n> \n> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>> Hi all,\n>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>> response.\n>> Installed both version and stopped it. Do i need to run both version or\n>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>\n>>\n>> -bash-4.2$ id\n>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>\n>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data    \n>>                        -- 8.4 data\n>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>                   -- 9.4 data\n>>\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> *connection to database failed: could not connect to server: No such\n>> file or directory*\n>>         Is the server running locally and accepting\n>>         connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>> =====-----=====-----=====\n>> Notice: The information contained in this e-mail\n>> message and/or attachments to it may contain\n>> confidential or privileged information. If you are\n>> not the intended recipient, any dissemination, use,\n>> review, distribution, printing or copying of the\n>> information contained in this e-mail message\n>> and/or attachments to it are strictly prohibited. If\n>> you have received this communication in error,\n>> please notify us by reply e-mail or telephone and\n>> immediately and permanently delete the message\n>> and any attachments. Thank you\n>>\n> \n\n", "msg_date": "Wed, 18 Apr 2018 11:05:36 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi Fabio,\nsorry to bother you again, its still failing with stopping both server \n(8.4 and 9.4)\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n\nconnection to database failed: could not connect to server: No such file \nor directory\n Is the server running locally and accepting\n connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.50432\"?\n\n\ncould not connect to old postmaster started with the command:\n\"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start\nFailure, exiting\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. 
IT Services\n Business Solutions\n Consulting\n____________________________________________\n\n\n\n\nFrom: Fabio Pardi <[email protected]>\nTo: Akshay Ballarpure <[email protected]>, \[email protected]\nDate: 04/18/2018 02:35 PM\nSubject: Re: pg_upgrade help\n\n\n\nHi,\n\ni was too fast in reply (and perhaps i should drink my morning coffee\nbefore replying), I will try to be more detailed:\n\nboth servers should be able to run at the moment you run pg_upgrade,\nthat means the 2 servers should have been correctly stopped in advance,\nshould have their configuration files, and new cluster initialized too.\n\nThen, as Sergei highlights here below, pg_upgrade will take care of the\nupgrade process, starting the servers.\n\n\nHere there is a step by step guide, i considered my best ally when it\nwas time to upgrade:\n\nhttps://www.postgresql.org/docs/9.4/static/pgupgrade.html\n\nnote point 7:\n\n'stop both servers'\n\n\nAbout the port the servers will run on, at point 9 there is some\nclarification:\n\n' pg_upgrade defaults to running servers on port 50432 to avoid\nunintended client connections. You can use the same port number for both\nclusters when doing an upgrade because the old and new clusters will not\nbe running at the same time. However, when checking an old running\nserver, the old and new port numbers must be different.'\n\nHope it helps,\n\nFabio Pardi\n\n\nOn 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n> Thanks Fabio for instant reply.\n> \n> I now started 8.4 with 50432 and 9.4 with default port but still its\n> failing ...Can you please suggest what is wrong ?\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> *failure*\n> Consult the last few lines of \"pg_upgrade_server.log\" for\n> the probable cause of the failure.\n> \n> There seems to be a postmaster servicing the old cluster.\n> Please shutdown that postmaster and try again.\n> Failure, exiting\n> -bash-4.2$ ps -eaf | grep postgres\n> root 8646 9365 0 08:07 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 08:07 pts/1 00:00:00 -bash\n> postgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p\n> 50432 -D /var/ericsson/esm-data/postgresql-data/\n> postgres 9779 9778 0 09:17 ? 00:00:00 postgres: logger process\n> postgres 9781 9778 0 09:17 ? 00:00:00 postgres: writer process\n> postgres 9782 9778 0 09:17 ? 00:00:00 postgres: wal writer\n> process\n> postgres 9783 9778 0 09:17 ? 00:00:00 postgres: autovacuum\n> launcher process\n> postgres 9784 9778 0 09:17 ? 00:00:00 postgres: stats\n> collector process\n> postgres 9900 1 0 09:20 ? 00:00:00\n> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n> /var/ericsson/esm-data/postgresql-data-9.4/\n> postgres 9901 9900 0 09:20 ? 00:00:00 postgres: logger process\n> postgres 9903 9900 0 09:20 ? 00:00:00 postgres: checkpointer\n> process\n> postgres 9904 9900 0 09:20 ? 00:00:00 postgres: writer process\n> postgres 9905 9900 0 09:20 ? 00:00:00 postgres: wal writer\n> process\n> postgres 9906 9900 0 09:20 ? 00:00:00 postgres: autovacuum\n> launcher process\n> postgres 9907 9900 0 09:20 ? 
00:00:00 postgres: stats\n> collector process\n> postgres 9926 8647 0 09:21 pts/1 00:00:00 ps -eaf\n> postgres 9927 8647 0 09:21 pts/1 00:00:00 grep --color=auto \npostgres\n> \n> \n> -bash-4.2$ netstat -antp | grep 50432\n> (Not all processes could be identified, non-owned process info\n> will not be shown, you would have to be root to see it all.)\n> tcp 0 0 127.0.0.1:50432 0.0.0.0:* \n> LISTEN 9778/postgres\n> tcp6 0 0 ::1:50432 :::* \n> LISTEN 9778/postgres\n> -bash-4.2$ netstat -antp | grep 5432\n> (Not all processes could be identified, non-owned process info\n> will not be shown, you would have to be root to see it all.)\n> tcp 0 0 127.0.0.1:5432 0.0.0.0:* \n> LISTEN 9900/postgres\n> tcp6 0 0 ::1:5432 :::* \n> LISTEN 9900/postgres\n> \n> -----------------------------------------------------------------\n> pg_upgrade run on Wed Apr 18 09:24:47 2018\n> -----------------------------------------------------------------\n> \n> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n> pg_ctl: another server might be running; trying to start server anyway\n> FATAL: lock file \"postmaster.pid\" already exists\n> HINT: Is another postmaster (PID 9778) running in data directory\n> \"/var/ericsson/esm-data/postgresql-data\"?\n> pg_ctl: could not start server\n> Examine the log output.\n> \n> \n> [root@ms-esmon /]# cat\n> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n> 9900\n> /var/ericsson/esm-data/postgresql-data-9.4\n> 1524039630\n> 5432\n> /var/run/postgresql\n> localhost\n> 5432001 2031616\n> \n> \n> [root@ms-esmon /]# cat\n> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n> 9778\n> /var/ericsson/esm-data/postgresql-data\n> 50432001 1998850\n> \n> \n> \n> \n> With Best Regards\n> Akshay\n> \n> \n> \n> \n> \n> From: Fabio Pardi <[email protected]>\n> To: Akshay Ballarpure <[email protected]>,\n> [email protected]\n> Date: 04/18/2018 01:06 PM\n> Subject: Re: pg_upgrade help\n> ------------------------------------------------------------------------\n> \n> \n> \n> Hi,\n> \n> please avoid crossposting to multiple mailing lists.\n> \n> \n> You need to run both versions of the database, the old and the new.\n> \n> They need to run on different ports (note that it is impossible to run 2\n> different processes on the same port, that's not a postgresql thing)\n> \n> \n> \n> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>> Hi all,\n>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>> response.\n>> Installed both version and stopped it. Do i need to run both version or\n>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>\n>>\n>> -bash-4.2$ id\n>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>\n>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n>>                        -- 8.4 data\n>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>                   -- 9.4 data\n>>\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> *connection to database failed: could not connect to server: No such\n>> file or directory*\n>>         Is the server running locally and accepting\n>>         connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>> =====-----=====-----=====\n>> Notice: The information contained in this e-mail\n>> message and/or attachments to it may contain\n>> confidential or privileged information. If you are\n>> not the intended recipient, any dissemination, use,\n>> review, distribution, printing or copying of the\n>> information contained in this e-mail message\n>> and/or attachments to it are strictly prohibited. If\n>> you have received this communication in error,\n>> please notify us by reply e-mail or telephone and\n>> immediately and permanently delete the message\n>> and any attachments.
Thank you\n>>\n>", "msg_date": "Wed, 18 Apr 2018 17:32:57 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "did you run initdb on the new db?\n\nwhat happens if you manually start the new db?\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c\nlisten_addresses='' -c unix_socket_permissions=0700\" -D $NEWCLUSTER\n\nafter starting it, can you connect to it using psql?\n\npsql -p 50432 -h /var/run/postgresql -U your_user _db_\n\n\n\nregards,\n\nfabio pardi\n\n\nOn 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> sorry to bother you again, its still failing with stopping both server\n> (8.4 and 9.4)\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> connection to database failed: could not connect to server: No such file\n> or directory\n> � � � � Is the server running locally and accepting\n> � � � � connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000 �-c listen_addresses='' -c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com <http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty. � � � �IT Services\n> � � � � � � � � � � � �Business Solutions\n> � � � � � � � � � � � �Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From: � � � �Fabio Pardi <[email protected]>\n> To: � � � �Akshay Ballarpure <[email protected]>,\n> [email protected]\n> Date: � � � �04/18/2018 02:35 PM\n> Subject: � � � �Re: pg_upgrade help\n> ------------------------------------------------------------------------\n> \n> \n> \n> Hi,\n> \n> i was too fast in reply (and perhaps i should drink my morning coffee\n> before replying), I will try to be more detailed:\n> \n> both servers should be able to run at the moment you run pg_upgrade,\n> that means the 2 servers should have been correctly stopped in advance,\n> should have their configuration files, and new cluster initialized too.\n> \n> Then, as Sergei highlights here below, pg_upgrade will take care of the\n> upgrade process, starting the servers.\n> \n> \n> Here there is a step by step guide, i considered my best ally when it\n> was time to upgrade:\n> \n> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n> \n> note point 7:\n> \n> 'stop both servers'\n> \n> \n> About the port the servers will run on, at point 9 there is some\n> clarification:\n> \n> ' pg_upgrade defaults to running servers on port 50432 to avoid\n> unintended client connections. You can use the same port number for both\n> clusters when doing an upgrade because the old and new clusters will not\n> be running at the same time. 
However, when checking an old running\n> server, the old and new port numbers must be different.'\n> \n> Hope it helps,\n> \n> Fabio Pardi\n> \n> \n> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>> Thanks Fabio for instant reply.\n>>\n>> I now started 8.4 with 50432 and 9.4 with default port but still its\n>> failing ...Can you please suggest what is wrong ?\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> *failure*\n>> Consult the last few lines of \"pg_upgrade_server.log\" for\n>> the probable cause of the failure.\n>>\n>> There seems to be a postmaster servicing the old cluster.\n>> Please shutdown that postmaster and try again.\n>> Failure, exiting\n>> -bash-4.2$ ps -eaf | grep postgres\n>> root � � �8646 �9365 �0 08:07 pts/1 � �00:00:00 su - postgres\n>> postgres �8647 �8646 �0 08:07 pts/1 � �00:00:00 -bash\n>> postgres �9778 � � 1 �0 09:17 ? � � � �00:00:00 /usr/bin/postgres -p\n>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>> postgres �9779 �9778 �0 09:17 ? � � � �00:00:00 postgres: logger process\n>> postgres �9781 �9778 �0 09:17 ? � � � �00:00:00 postgres: writer process\n>> postgres �9782 �9778 �0 09:17 ? � � � �00:00:00 postgres: wal writer\n>> process\n>> postgres �9783 �9778 �0 09:17 ? � � � �00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres �9784 �9778 �0 09:17 ? � � � �00:00:00 postgres: stats\n>> collector process\n>> postgres �9900 � � 1 �0 09:20 ? � � � �00:00:00\n>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>> /var/ericsson/esm-data/postgresql-data-9.4/\n>> postgres �9901 �9900 �0 09:20 ? � � � �00:00:00 postgres: logger process\n>> postgres �9903 �9900 �0 09:20 ? � � � �00:00:00 postgres: checkpointer\n>> process\n>> postgres �9904 �9900 �0 09:20 ? � � � �00:00:00 postgres: writer process\n>> postgres �9905 �9900 �0 09:20 ? � � � �00:00:00 postgres: wal writer\n>> process\n>> postgres �9906 �9900 �0 09:20 ? � � � �00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres �9907 �9900 �0 09:20 ? 
� � � �00:00:00 postgres: stats\n>> collector process\n>> postgres �9926 �8647 �0 09:21 pts/1 � �00:00:00 ps -eaf\n>> postgres �9927 �8647 �0 09:21 pts/1 � �00:00:00 grep --color=auto postgres\n>>\n>>\n>> -bash-4.2$ netstat -antp | grep 50432\n>> (Not all processes could be identified, non-owned process info\n>> �will not be shown, you would have to be root to see it all.)\n>> tcp � � � �0 � � �0 127.0.0.1:50432 � � � � 0.0.0.0:* � � � � � � �\n>> LISTEN � � �9778/postgres\n>> tcp6 � � � 0 � � �0 ::1:50432 � � � � � � � :::* � � � � � � � � �\n>> �LISTEN � � �9778/postgres\n>> -bash-4.2$ netstat -antp | grep 5432\n>> (Not all processes could be identified, non-owned process info\n>> �will not be shown, you would have to be root to see it all.)\n>> tcp � � � �0 � � �0 127.0.0.1:5432 � � � � �0.0.0.0:* � � � � � � �\n>> LISTEN � � �9900/postgres\n>> tcp6 � � � 0 � � �0 ::1:5432 � � � � � � � �:::* � � � � � � � � �\n>> �LISTEN � � �9900/postgres\n>>\n>> -----------------------------------------------------------------\n>> � pg_upgrade run on Wed Apr 18 09:24:47 2018\n>> -----------------------------------------------------------------\n>>\n>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000 �-c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n>> pg_ctl: another server might be running; trying to start server anyway\n>> FATAL: �lock file \"postmaster.pid\" already exists\n>> HINT: �Is another postmaster (PID 9778) running in data directory\n>> \"/var/ericsson/esm-data/postgresql-data\"?\n>> pg_ctl: could not start server\n>> Examine the log output.\n>>\n>>\n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>> 9900\n>> /var/ericsson/esm-data/postgresql-data-9.4\n>> 1524039630\n>> 5432\n>> /var/run/postgresql\n>> localhost\n>> � 5432001 � 2031616\n>> �\n>> �\n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>> 9778\n>> /var/ericsson/esm-data/postgresql-data\n>> �50432001 � 1998850\n>>\n>>\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>>\n>>\n>>\n>>\n>> From: � � � �Fabio Pardi <[email protected]>\n>> To: � � � �Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date: � � � �04/18/2018 01:06 PM\n>> Subject: � � � �Re: pg_upgrade help\n>> ------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> please avoid crossposting to multiple mailing lists.\n>>\n>>\n>> You need to run both versions of the database, the old and the new.\n>>\n>> They need to run on different ports (note that it is impossible to run 2\n>> different processes on the same port, that's not a postgresql thing)\n>>\n>>\n>>\n>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>> Hi all,\n>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>>> response.\n>>> Installed both version and stopped it. Do i need to run both version or\n>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>\n>>>\n>>> -bash-4.2$ id\n>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>\n>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data � �\n>>> � � � � � � � � � � � �-- 8.4 data\n>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>> � � � � � � � � � -- 9.4 data\n>>>\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *connection to database failed: could not connect to server: No such\n>>> file or directory*\n>>> � � � � Is the server running locally and accepting\n>>> � � � � connections on Unix domain socket\n>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>\n>>>\n>>> could not connect to old postmaster started with the command:\n>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000 �-c listen_addresses='' -c\n>>> unix_socket_permissions=0700\" start\n>>> Failure, exiting\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>> =====-----=====-----=====\n>>> Notice: The information contained in this e-mail\n>>> message and/or attachments to it may contain\n>>> confidential or privileged information. If you are\n>>> not the intended recipient, any dissemination, use,\n>>> review, distribution, printing or copying of the\n>>> information contained in this e-mail message\n>>> and/or attachments to it are strictly prohibited. If\n>>> you have received this communication in error,\n>>> please notify us by reply e-mail or telephone and\n>>> immediately and permanently delete the message\n>>> and any attachments. Thank you\n>>>\n>>\n> \n\n", "msg_date": "Wed, 18 Apr 2018 14:47:17 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "On 04/18/2018 05:02 AM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> sorry to bother you again, its still failing with stopping both server \n> (8.4 and 9.4)\n\nActually according to the command show at bottom of post it is failing \ntrying to start the 8.4 server. In your previous post that was because \nit was already running:\n\n-bash-4.2$ ps -eaf | grep postgres\npostgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p \n50432 -D /var/ericsson/esm-data/postgresql-data/\n\nFATAL: lock file \"postmaster.pid\" already exists\nHINT: Is another postmaster (PID 9778) running in data directory \n\"/var/ericsson/esm-data/postgresql-data\"?\npg_ctl: could not start server\n\n\nMake sure both servers are stopped before running pg_upgrade. 
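For illustration, stopping both clusters first could look something like the following, using the binaries and data directories already mentioned in this thread (run as the postgres user; the fast shutdown mode is only a suggestion):

/usr/bin/pg_ctl -D /var/ericsson/esm-data/postgresql-data stop -m fast
/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl -D /var/ericsson/esm-data/postgresql-data-9.4 stop -m fast
ps -eaf | grep postgres     # should show no postmaster from either cluster before pg_upgrade is run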
Per a \nprevious suggestion follow the check list here:\n\nhttps://www.postgresql.org/docs/10/static/pgupgrade.html\n\"\nUsage\n\nThese are the steps to perform an upgrade with pg_upgrade:\n\n...\n\n\"\n\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> connection to database failed: could not connect to server: No such file \n> or directory\n> � � � � Is the server running locally and accepting\n> � � � � connections on Unix domain socket \n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off \n> -c autovacuum_freeze_max_age=2000000000 �-c listen_addresses='' -c \n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> With Best Regards\n> Akshay\n\n\n\n-- \nAdrian Klaver\[email protected]\n\n", "msg_date": "Wed, 18 Apr 2018 07:02:32 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hello Akshay,\n\nTry starting both servers individually. If you can then,it may be port\nconflict\n\nBelow is the part of document-\n\nObviously, no one should be accessing the clusters during the upgrade.\npg_upgrade defaults to running servers on port 50432 to avoid unintended\nclient connections. You can use the same port number for both clusters when\ndoing an upgrade because the old and new clusters will not be running at\nthe same time. *However, when checking an old running server, the old and\nnew port numbers must be different. *\n\nThanks\nRajni\n\nOn Thu, Apr 19, 2018 at 12:02 AM, Adrian Klaver <[email protected]>\nwrote:\n\n> On 04/18/2018 05:02 AM, Akshay Ballarpure wrote:\n>\n>> Hi Fabio,\n>> sorry to bother you again, its still failing with stopping both server\n>> (8.4 and 9.4)\n>>\n>\n> Actually according to the command show at bottom of post it is failing\n> trying to start the 8.4 server. In your previous post that was because it\n> was already running:\n>\n> -bash-4.2$ ps -eaf | grep postgres\n> postgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p 50432\n> -D /var/ericsson/esm-data/postgresql-data/\n>\n> FATAL: lock file \"postmaster.pid\" already exists\n> HINT: Is another postmaster (PID 9778) running in data directory\n> \"/var/ericsson/esm-data/postgresql-data\"?\n> pg_ctl: could not start server\n>\n>\n> Make sure both servers are stopped before running pg_upgrade. 
Per a\n> previous suggestion follow the check list here:\n>\n> https://www.postgresql.org/docs/10/static/pgupgrade.html\n> \"\n> Usage\n>\n> These are the steps to perform an upgrade with pg_upgrade:\n>\n> ...\n>\n>\n> \"\n>\n>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> connection to database failed: could not connect to server: No such file\n>> or directory\n>> Is the server running locally and accepting\n>> connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.\n>> 50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n>\n>\n\n\n-- \nThank you\n\nSincere Regards\nRajni\n\n0410 472 086
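As a sketch only, the two port numbers can also be given to pg_upgrade explicitly (-p/--old-port and -P/--new-port); 50433 below is just an arbitrary free port, and --check only inspects the clusters without changing any data, which is the "checking an old running server" case the quoted paragraph refers to:

/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin --old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4 --old-port=50432 --new-port=50433 --check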
", "msg_date": "Thu, 19 Apr 2018 10:12:01 +1000", "msg_from": "Rajni Baliyan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi Fabio,\nYes i ran initdb on new database and able to start as below.\n\n[root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p 50432 -D \n/var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n[root@ms-esmon root]# su - postgres -c \n\"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\n[root@ms-esmon root]# 2018-04-19 08:17:53.553 IST LOG: redirecting log \noutput to logging collector process\n2018-04-19 08:17:53.553 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n\n[root@ms-esmon root]#\n[root@ms-esmon root]# ps -eaf | grep postgre\nsroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\npostgres 28009 1 2 08:17 ? 00:00:00 /usr/bin/postgres -p 50432 \n-D /var/ericsson/esm-data/postgresql-data/ --8.4\npostgres 28010 28009 0 08:17 ? 00:00:00 postgres: logger process\npostgres 28012 28009 0 08:17 ? 00:00:00 postgres: writer process\npostgres 28013 28009 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\npostgres 28014 28009 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 28015 28009 0 08:17 ? 00:00:00 postgres: stats collector \nprocess\npostgres 28048 1 0 08:17 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\npostgres 28049 28048 0 08:17 ? 00:00:00 postgres: logger process\npostgres 28051 28048 0 08:17 ? 00:00:00 postgres: checkpointer \nprocess\npostgres 28052 28048 0 08:17 ? 00:00:00 postgres: writer process\npostgres 28053 28048 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\npostgres 28054 28048 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 28055 28048 0 08:17 ?
00:00:00 postgres: stats collector \nprocess\nroot 28057 2884 0 08:17 pts/0 00:00:00 grep --color=auto postgre\n\n\nAlso i am able to start db with the command provided by you and run psql.\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c \nlisten_addresses='' -c unix_socket_permissions=0700\" -D \n/var/ericsson/esm-data/postgresql-data-9.4/\npg_ctl: another server might be running; trying to start server anyway\nserver starting\n-bash-4.2$ 2018-04-19 08:22:46.527 IST LOG: redirecting log output to \nlogging collector process\n2018-04-19 08:22:46.527 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n\n-bash-4.2$ ps -eaf | grep postg\nroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\npostgres 28174 1 0 08:22 pts/1 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses= \n-c unix_socket_permissions=0700\npostgres 28175 28174 0 08:22 ? 00:00:00 postgres: logger process\npostgres 28177 28174 0 08:22 ? 00:00:00 postgres: checkpointer \nprocess\npostgres 28178 28174 0 08:22 ? 00:00:00 postgres: writer process\npostgres 28179 28174 0 08:22 ? 00:00:00 postgres: wal writer \nprocess\npostgres 28180 28174 0 08:22 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 28181 28174 0 08:22 ? 00:00:00 postgres: stats collector \nprocess\npostgres 28182 8647 0 08:22 pts/1 00:00:00 ps -eaf\npostgres 28183 8647 0 08:22 pts/1 00:00:00 grep --color=auto postg\n\n-bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\npsql (8.4.20, server 9.4.9)\nWARNING: psql version 8.4, server version 9.4.\n Some psql features might not work.\nType \"help\" for help.\n\nrhq=>\n\n\nStill its failing...\n\n-bash-4.2$ ps -efa | grep postgre\nroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\npostgres 28349 8647 0 08:34 pts/1 00:00:00 ps -efa\npostgres 28350 8647 0 08:34 pts/1 00:00:00 grep --color=auto postgre\n\n-bash-4.2$ echo $OLDCLUSTER\n/usr/bin/postgres\n-bash-4.2$ echo $NEWCLUSTER\n/opt/rh/rh-postgresql94/\n\n[root@ms-esmon rh-postgresql94]# \n/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=/var/ericsson/esm-data/postgresql-data \n--new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\n\nconnection to database failed: could not connect to server: No such file \nor directory\n Is the server running locally and accepting\n connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.50432\"?\n\n\ncould not connect to old postmaster started with the command:\n\"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start\nFailure, exiting\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. 
IT Services\n Business Solutions\n Consulting\n____________________________________________\n\n\n\n\nFrom: Fabio Pardi <[email protected]>\nTo: Akshay Ballarpure <[email protected]>\nCc: [email protected]\nDate: 04/18/2018 06:17 PM\nSubject: Re: pg_upgrade help\n\n\n\ndid you run initdb on the new db?\n\nwhat happens if you manually start the new db?\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c\nlisten_addresses='' -c unix_socket_permissions=0700\" -D $NEWCLUSTER\n\nafter starting it, can you connect to it using psql?\n\npsql -p 50432 -h /var/run/postgresql -U your_user _db_\n\n\n\nregards,\n\nfabio pardi\n\n\nOn 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> sorry to bother you again, its still failing with stopping both server\n> (8.4 and 9.4)\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> connection to database failed: could not connect to server: No such file\n> or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com <http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty. IT Services\n> Business Solutions\n> Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From: Fabio Pardi <[email protected]>\n> To: Akshay Ballarpure <[email protected]>,\n> [email protected]\n> Date: 04/18/2018 02:35 PM\n> Subject: Re: pg_upgrade help\n> ------------------------------------------------------------------------\n> \n> \n> \n> Hi,\n> \n> i was too fast in reply (and perhaps i should drink my morning coffee\n> before replying), I will try to be more detailed:\n> \n> both servers should be able to run at the moment you run pg_upgrade,\n> that means the 2 servers should have been correctly stopped in advance,\n> should have their configuration files, and new cluster initialized too.\n> \n> Then, as Sergei highlights here below, pg_upgrade will take care of the\n> upgrade process, starting the servers.\n> \n> \n> Here there is a step by step guide, i considered my best ally when it\n> was time to upgrade:\n> \n> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n> \n> note point 7:\n> \n> 'stop both servers'\n> \n> \n> About the port the servers will run on, at point 9 there is some\n> clarification:\n> \n> ' pg_upgrade defaults to running servers on port 50432 to avoid\n> unintended client connections. You can use the same port number for both\n> clusters when doing an upgrade because the old and new clusters will not\n> be running at the same time. 
However, when checking an old running\n> server, the old and new port numbers must be different.'\n> \n> Hope it helps,\n> \n> Fabio Pardi\n> \n> \n> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>> Thanks Fabio for instant reply.\n>>\n>> I now started 8.4 with 50432 and 9.4 with default port but still its\n>> failing ...Can you please suggest what is wrong ?\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> *failure*\n>> Consult the last few lines of \"pg_upgrade_server.log\" for\n>> the probable cause of the failure.\n>>\n>> There seems to be a postmaster servicing the old cluster.\n>> Please shutdown that postmaster and try again.\n>> Failure, exiting\n>> -bash-4.2$ ps -eaf | grep postgres\n>> root 8646 9365 0 08:07 pts/1 00:00:00 su - postgres\n>> postgres 8647 8646 0 08:07 pts/1 00:00:00 -bash\n>> postgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p\n>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>> postgres 9779 9778 0 09:17 ? 00:00:00 postgres: logger \nprocess\n>> postgres 9781 9778 0 09:17 ? 00:00:00 postgres: writer \nprocess\n>> postgres 9782 9778 0 09:17 ? 00:00:00 postgres: wal writer\n>> process\n>> postgres 9783 9778 0 09:17 ? 00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres 9784 9778 0 09:17 ? 00:00:00 postgres: stats\n>> collector process\n>> postgres 9900 1 0 09:20 ? 00:00:00\n>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>> /var/ericsson/esm-data/postgresql-data-9.4/\n>> postgres 9901 9900 0 09:20 ? 00:00:00 postgres: logger \nprocess\n>> postgres 9903 9900 0 09:20 ? 00:00:00 postgres: checkpointer\n>> process\n>> postgres 9904 9900 0 09:20 ? 00:00:00 postgres: writer \nprocess\n>> postgres 9905 9900 0 09:20 ? 00:00:00 postgres: wal writer\n>> process\n>> postgres 9906 9900 0 09:20 ? 00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres 9907 9900 0 09:20 ? 
00:00:00 postgres: stats\n>> collector process\n>> postgres 9926 8647 0 09:21 pts/1 00:00:00 ps -eaf\n>> postgres 9927 8647 0 09:21 pts/1 00:00:00 grep --color=auto \npostgres\n>>\n>>\n>> -bash-4.2$ netstat -antp | grep 50432\n>> (Not all processes could be identified, non-owned process info\n>> will not be shown, you would have to be root to see it all.)\n>> tcp 0 0 127.0.0.1:50432 0.0.0.0:* \n>> LISTEN 9778/postgres\n>> tcp6 0 0 ::1:50432 :::* \n>> LISTEN 9778/postgres\n>> -bash-4.2$ netstat -antp | grep 5432\n>> (Not all processes could be identified, non-owned process info\n>> will not be shown, you would have to be root to see it all.)\n>> tcp 0 0 127.0.0.1:5432 0.0.0.0:* \n>> LISTEN 9900/postgres\n>> tcp6 0 0 ::1:5432 :::* \n>> LISTEN 9900/postgres\n>>\n>> -----------------------------------------------------------------\n>> pg_upgrade run on Wed Apr 18 09:24:47 2018\n>> -----------------------------------------------------------------\n>>\n>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n>> pg_ctl: another server might be running; trying to start server anyway\n>> FATAL: lock file \"postmaster.pid\" already exists\n>> HINT: Is another postmaster (PID 9778) running in data directory\n>> \"/var/ericsson/esm-data/postgresql-data\"?\n>> pg_ctl: could not start server\n>> Examine the log output.\n>>\n>>\n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>> 9900\n>> /var/ericsson/esm-data/postgresql-data-9.4\n>> 1524039630\n>> 5432\n>> /var/run/postgresql\n>> localhost\n>> 5432001 2031616\n>> \n>> \n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>> 9778\n>> /var/ericsson/esm-data/postgresql-data\n>> 50432001 1998850\n>>\n>>\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>>\n>>\n>>\n>>\n>> From: Fabio Pardi <[email protected]>\n>> To: Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date: 04/18/2018 01:06 PM\n>> Subject: Re: pg_upgrade help\n>> \n------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> please avoid crossposting to multiple mailing lists.\n>>\n>>\n>> You need to run both versions of the database, the old and the new.\n>>\n>> They need to run on different ports (note that it is impossible to run \n2\n>> different processes on the same port, that's not a postgresql thing)\n>>\n>>\n>>\n>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>> Hi all,\n>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>>> response.\n>>> Installed both version and stopped it. Do i need to run both version \nor\n>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>\n>>>\n>>> -bash-4.2$ id\n>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>\n>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data \n \n>>> -- 8.4 data\n>>> -bash-4.2$ export \nNEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>> -- 9.4 data\n>>>\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *connection to database failed: could not connect to server: No such\n>>> file or directory*\n>>> Is the server running locally and accepting\n>>> connections on Unix domain socket\n>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>\n>>>\n>>> could not connect to old postmaster started with the command:\n>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c \nautovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>>> unix_socket_permissions=0700\" start\n>>> Failure, exiting\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>> =====-----=====-----=====\n>>> Notice: The information contained in this e-mail\n>>> message and/or attachments to it may contain\n>>> confidential or privileged information. If you are\n>>> not the intended recipient, any dissemination, use,\n>>> review, distribution, printing or copying of the\n>>> information contained in this e-mail message\n>>> and/or attachments to it are strictly prohibited. If\n>>> you have received this communication in error,\n>>> please notify us by reply e-mail or telephone and\n>>> immediately and permanently delete the message\n>>> and any attachments. Thank you\n>>>\n>>\n> \n\n\nHi Fabio,\nYes i ran initdb on new database and able\nto start as below.\n\n[root@ms-esmon root]# su -\npostgres -c \"/usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/\n2>&1 &\"\n[root@ms-esmon root]# su -\npostgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/\n2>&1 &\"\n[root@ms-esmon root]# 2018-04-19\n08:17:53.553 IST  LOG:  redirecting log output to logging collector\nprocess\n2018-04-19 08:17:53.553 IST\n HINT:  Future log output will appear in directory \"pg_log\".\n\n[root@ms-esmon root]#\n[root@ms-esmon root]# ps -eaf\n| grep postgre\nsroot      8646\n 9365  0 Apr18 pts/1    00:00:00 su - postgres\npostgres  8647  8646\n 0 Apr18 pts/1    00:00:00 -bash\npostgres 28009    \n1  2 08:17 ?        00:00:00 /usr/bin/postgres\n-p 50432 -D /var/ericsson/esm-data/postgresql-data/  --8.4\npostgres 28010 28009  0\n08:17 ?        00:00:00 postgres: logger process\npostgres 28012 28009  0\n08:17 ?        00:00:00 postgres: writer process\npostgres 28013 28009  0\n08:17 ?        00:00:00 postgres: wal writer process\npostgres 28014 28009  0\n08:17 ?        00:00:00 postgres: autovacuum launcher\nprocess\npostgres 28015 28009  0\n08:17 ?        00:00:00 postgres: stats collector process\npostgres 28048    \n1  0 08:17 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data-9.4/\npostgres 28049 28048  0\n08:17 ?        00:00:00 postgres: logger process\npostgres 28051 28048  0\n08:17 ?        00:00:00 postgres: checkpointer process\npostgres 28052 28048  0\n08:17 ?        00:00:00 postgres: writer process\npostgres 28053 28048  0\n08:17 ?        
00:00:00 postgres: wal writer process\npostgres 28054 28048  0\n08:17 ?        00:00:00 postgres: autovacuum launcher\nprocess\npostgres 28055 28048  0\n08:17 ?        00:00:00 postgres: stats collector process\nroot     28057  2884\n 0 08:17 pts/0    00:00:00 grep --color=auto postgre\n\n\nAlso i am able to start db with the\ncommand provided by you and run psql.\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl\n start -o \"-p 50432 -c listen_addresses='' -c unix_socket_permissions=0700\"\n -D /var/ericsson/esm-data/postgresql-data-9.4/\npg_ctl: another server\nmight be running; trying to start server anyway\nserver starting\n-bash-4.2$ 2018-04-19\n08:22:46.527 IST  LOG:  redirecting log output to logging collector\nprocess\n2018-04-19 08:22:46.527\nIST  HINT:  Future log output will appear in directory \"pg_log\".\n\n-bash-4.2$ ps -eaf | grep\npostg\nroot      8646\n 9365  0 Apr18 pts/1    00:00:00 su - postgres\npostgres  8647  8646\n 0 Apr18 pts/1    00:00:00 -bash\npostgres 28174  \n  1  0 08:22 pts/1    00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses=\n-c unix_socket_permissions=0700\npostgres 28175 28174  0\n08:22 ?        00:00:00 postgres: logger process\npostgres 28177 28174  0\n08:22 ?        00:00:00 postgres: checkpointer process\npostgres 28178 28174  0\n08:22 ?        00:00:00 postgres: writer process\npostgres 28179 28174  0\n08:22 ?        00:00:00 postgres: wal writer process\npostgres 28180 28174  0\n08:22 ?        00:00:00 postgres: autovacuum launcher\nprocess\npostgres 28181 28174  0\n08:22 ?        00:00:00 postgres: stats collector process\npostgres 28182  8647\n 0 08:22 pts/1    00:00:00 ps -eaf\npostgres 28183  8647\n 0 08:22 pts/1    00:00:00 grep --color=auto postg\n\n-bash-4.2$ psql -p 50432\n-h /var/run/postgresql -U rhqadmin -d rhq\npsql (8.4.20, server 9.4.9)\nWARNING: psql version\n8.4, server version 9.4.\n       \n Some psql features might not work.\nType \"help\"\nfor help.\n\nrhq=>\n\n\nStill its failing...\n\n-bash-4.2$ ps -efa | grep postgre\nroot      8646  9365\n 0 Apr18 pts/1    00:00:00 su - postgres\npostgres  8647  8646  0\nApr18 pts/1    00:00:00 -bash\npostgres 28349  8647  0 08:34\npts/1    00:00:00 ps -efa\npostgres 28350  8647  0 08:34\npts/1    00:00:00 grep --color=auto postgre\n\n-bash-4.2$ echo $OLDCLUSTER\n/usr/bin/postgres\n-bash-4.2$ echo $NEWCLUSTER\n/opt/rh/rh-postgresql94/\n\n[root@ms-esmon rh-postgresql94]# /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n--old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions    \n                     \n        ok\n\nconnection to database failed: could\nnot connect to server: No such file or directory\n        Is the server\nrunning locally and accepting\n        connections\non Unix domain socket \"/var/run/postgresql/.s.PGSQL.50432\"?\n\n\ncould not connect to old postmaster\nstarted with the command:\n\"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432\n-c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c unix_socket_permissions=0700\" start\nFailure, exiting\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: 
http://www.tcs.com\n____________________________________________\nExperience certainty.        IT Services\n                \n       Business Solutions\n                \n       Consulting\n____________________________________________\n\n\n\n\nFrom:      \n Fabio Pardi <[email protected]>\nTo:      \n Akshay Ballarpure <[email protected]>\nCc:      \n [email protected]\nDate:      \n 04/18/2018 06:17 PM\nSubject:    \n   Re: pg_upgrade\nhelp\n\n\n\n\ndid you run initdb on the new db?\n\nwhat happens if you manually start the new db?\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p 50432\n-c\nlisten_addresses='' -c unix_socket_permissions=0700\"  -D $NEWCLUSTER\n\nafter starting it, can you connect to it using psql?\n\npsql -p 50432 -h /var/run/postgresql  -U your_user _db_\n\n\n\nregards,\n\nfabio pardi\n\n\nOn 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> sorry to bother you again, its still failing with stopping both server\n> (8.4 and 9.4)\n> \n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n> \n> connection to database failed: could not connect to server: No such\nfile\n> or directory\n>         Is the server running locally and accepting\n>         connections on Unix domain socket\n> \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432\n-c autovacuum=off\n> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n> unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com\n<http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty.        IT Services\n>                    \n   Business Solutions\n>                    \n   Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From:        Fabio Pardi <[email protected]>\n> To:        Akshay Ballarpure <[email protected]>,\n> [email protected]\n> Date:        04/18/2018 02:35 PM\n> Subject:        Re: pg_upgrade help\n> ------------------------------------------------------------------------\n> \n> \n> \n> Hi,\n> \n> i was too fast in reply (and perhaps i should drink my morning coffee\n> before replying), I will try to be more detailed:\n> \n> both servers should be able to run at the moment you run pg_upgrade,\n> that means the 2 servers should have been correctly stopped in advance,\n> should have their configuration files, and new cluster initialized\ntoo.\n> \n> Then, as Sergei highlights here below, pg_upgrade will take care of\nthe\n> upgrade process, starting the servers.\n> \n> \n> Here there is a step by step guide, i considered my best ally when\nit\n> was time to upgrade:\n> \n> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n> \n> note point 7:\n> \n> 'stop both servers'\n> \n> \n> About the port the servers will run on, at point 9 there is some\n> clarification:\n> \n> ' pg_upgrade defaults to running servers on port 50432 to avoid\n> unintended client connections. You can use the same port number for\nboth\n> clusters when doing an upgrade because the old and new clusters will\nnot\n> be running at the same time. 
However, when checking an old running\n> server, the old and new port numbers must be different.'\n> \n> Hope it helps,\n> \n> Fabio Pardi\n> \n> \n> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>> Thanks Fabio for instant reply.\n>>\n>> I now started 8.4 with 50432 and 9.4 with default port but still\nits\n>> failing ...Can you please suggest what is wrong ?\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> *failure*\n>> Consult the last few lines of \"pg_upgrade_server.log\"\nfor\n>> the probable cause of the failure.\n>>\n>> There seems to be a postmaster servicing the old cluster.\n>> Please shutdown that postmaster and try again.\n>> Failure, exiting\n>> -bash-4.2$ ps -eaf | grep postgres\n>> root      8646  9365  0 08:07 pts/1  \n 00:00:00 su - postgres\n>> postgres  8647  8646  0 08:07 pts/1    00:00:00\n-bash\n>> postgres  9778     1  0 09:17 ?    \n   00:00:00 /usr/bin/postgres -p\n>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>> postgres  9779  9778  0 09:17 ?      \n 00:00:00 postgres: logger process\n>> postgres  9781  9778  0 09:17 ?      \n 00:00:00 postgres: writer process\n>> postgres  9782  9778  0 09:17 ?      \n 00:00:00 postgres: wal writer\n>> process\n>> postgres  9783  9778  0 09:17 ?      \n 00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres  9784  9778  0 09:17 ?      \n 00:00:00 postgres: stats\n>> collector process\n>> postgres  9900     1  0 09:20 ?    \n   00:00:00\n>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>> /var/ericsson/esm-data/postgresql-data-9.4/\n>> postgres  9901  9900  0 09:20 ?      \n 00:00:00 postgres: logger process\n>> postgres  9903  9900  0 09:20 ?      \n 00:00:00 postgres: checkpointer\n>> process\n>> postgres  9904  9900  0 09:20 ?      \n 00:00:00 postgres: writer process\n>> postgres  9905  9900  0 09:20 ?      \n 00:00:00 postgres: wal writer\n>> process\n>> postgres  9906  9900  0 09:20 ?      \n 00:00:00 postgres: autovacuum\n>> launcher process\n>> postgres  9907  9900  0 09:20 ?      
\n 00:00:00 postgres: stats\n>> collector process\n>> postgres  9926  8647  0 09:21 pts/1    00:00:00\nps -eaf\n>> postgres  9927  8647  0 09:21 pts/1    00:00:00\ngrep --color=auto postgres\n>>\n>>\n>> -bash-4.2$ netstat -antp | grep 50432\n>> (Not all processes could be identified, non-owned process info\n>>  will not be shown, you would have to be root to see it all.)\n>> tcp        0      0 127.0.0.1:50432\n        0.0.0.0:*          \n   \n>> LISTEN      9778/postgres\n>> tcp6       0      0 ::1:50432  \n            :::*        \n         \n>>  LISTEN      9778/postgres\n>> -bash-4.2$ netstat -antp | grep 5432\n>> (Not all processes could be identified, non-owned process info\n>>  will not be shown, you would have to be root to see it all.)\n>> tcp        0      0 127.0.0.1:5432\n         0.0.0.0:*        \n     \n>> LISTEN      9900/postgres\n>> tcp6       0      0 ::1:5432  \n             :::*      \n           \n>>  LISTEN      9900/postgres\n>>\n>> -----------------------------------------------------------------\n>>   pg_upgrade run on Wed Apr 18 09:24:47 2018\n>> -----------------------------------------------------------------\n>>\n>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p\n50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\"\n2>&1\n>> pg_ctl: another server might be running; trying to start server\nanyway\n>> FATAL:  lock file \"postmaster.pid\" already exists\n>> HINT:  Is another postmaster (PID 9778) running in data directory\n>> \"/var/ericsson/esm-data/postgresql-data\"?\n>> pg_ctl: could not start server\n>> Examine the log output.\n>>\n>>\n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>> 9900\n>> /var/ericsson/esm-data/postgresql-data-9.4\n>> 1524039630\n>> 5432\n>> /var/run/postgresql\n>> localhost\n>>   5432001   2031616\n>>  \n>>  \n>> [root@ms-esmon /]# cat\n>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>> 9778\n>> /var/ericsson/esm-data/postgresql-data\n>>  50432001   1998850\n>>\n>>\n>>\n>>\n>> With Best Regards\n>> Akshay\n>>\n>>\n>>\n>>\n>>\n>> From:        Fabio Pardi <[email protected]>\n>> To:        Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date:        04/18/2018 01:06 PM\n>> Subject:        Re: pg_upgrade help\n>> ------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> please avoid crossposting to multiple mailing lists.\n>>\n>>\n>> You need to run both versions of the database, the old and the\nnew.\n>>\n>> They need to run on different ports (note that it is impossible\nto run 2\n>> different processes on the same port, that's not a postgresql\nthing)\n>>\n>>\n>>\n>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>> Hi all,\n>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate\nurgent\n>>> response.\n>>> Installed both version and stopped it. Do i need to run both\nversion or\n>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>\n>>>\n>>> -bash-4.2$ id\n>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>\n>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n   \n>>>                  \n     -- 8.4 data\n>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>>                  \n-- 9.4 data\n>>>\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *connection to database failed: could not connect to server:\nNo such\n>>> file or directory*\n>>>         Is the server running locally\nand accepting\n>>>         connections on Unix domain socket\n>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>\n>>>\n>>> could not connect to old postmaster started with the command:\n>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p\n50432 -c autovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>>> unix_socket_permissions=0700\" start\n>>> Failure, exiting\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>> =====-----=====-----=====\n>>> Notice: The information contained in this e-mail\n>>> message and/or attachments to it may contain\n>>> confidential or privileged information. If you are\n>>> not the intended recipient, any dissemination, use,\n>>> review, distribution, printing or copying of the\n>>> information contained in this e-mail message\n>>> and/or attachments to it are strictly prohibited. If\n>>> you have received this communication in error,\n>>> please notify us by reply e-mail or telephone and\n>>> immediately and permanently delete the message\n>>> and any attachments. 
Thank you\n>>>\n>>\n>", "msg_date": "Thu, 19 Apr 2018 13:07:50 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi,\r\n\r\nwhile trying to reproduce your problem, i noticed that on my Centos 6 installations Postgres 8.4 and Postgres 9.6 (I do not have 9.4 readily available) store the socket in different places:\r\n\r\nPostgres 9.6.6 uses /var/run/postgresql/\r\n\r\nPostgres 8.4 uses /tmp/\r\n\r\ntherefore using default settings, i can connect to 9.6 but not 8.4 without specifying where the socket is\r\n\r\nConnect to 9.6\r\n\r\n12:01 postgres@machine:~# psql\r\npsql (8.4.20, server 9.6.6)\r\nWARNING: psql version 8.4, server version 9.6.\r\n Some psql features might not work.\r\nType \"help\" for help.\r\n\r\n---------\r\n\r\nConnect to 8.4\r\n\r\n12:01 postgres@machine:~# psql\r\npsql: could not connect to server: No such file or directory\r\n Is the server running locally and accepting\r\n connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\r\n\r\n12:04 postgres@machine:~# psql -h /tmp\r\npsql (8.4.20)\r\nType \"help\" for help.\r\n\r\n\r\n\r\n\r\nI think you might be incurring in the same problem.\r\n\r\nCan you confirm it?\r\n\r\n\r\nregards,\r\n\r\nfabio pardi \r\n\r\n\r\n\r\n\r\n\r\nOn 04/19/2018 09:37 AM, Akshay Ballarpure wrote:\r\n> Hi Fabio,\r\n> Yes i ran initdb on new database and able to start as below.\r\n> \r\n> [root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\r\n> [root@ms-esmon root]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\r\n> [root@ms-esmon root]# 2018-04-19 08:17:53.553 IST  LOG:  redirecting log output to logging collector process\r\n> 2018-04-19 08:17:53.553 IST  HINT:  Future log output will appear in directory \"pg_log\".\r\n> \r\n> [root@ms-esmon root]#\r\n> [root@ms-esmon root]# ps -eaf | grep postgre\r\n> sroot      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n> postgres 28009     1  2 08:17 ?        00:00:00 /usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/  *--8.4*\r\n> postgres 28010 28009  0 08:17 ?        00:00:00 postgres: logger process\r\n> postgres 28012 28009  0 08:17 ?        00:00:00 postgres: writer process\r\n> postgres 28013 28009  0 08:17 ?        00:00:00 postgres: wal writer process\r\n> postgres 28014 28009  0 08:17 ?        00:00:00 postgres: autovacuum launcher process\r\n> postgres 28015 28009  0 08:17 ?        00:00:00 postgres: stats collector process\r\n> postgres 28048     1  0 08:17 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/\r\n> postgres 28049 28048  0 08:17 ?        00:00:00 postgres: logger process\r\n> postgres 28051 28048  0 08:17 ?        00:00:00 postgres: checkpointer process\r\n> postgres 28052 28048  0 08:17 ?        00:00:00 postgres: writer process\r\n> postgres 28053 28048  0 08:17 ?        00:00:00 postgres: wal writer process\r\n> postgres 28054 28048  0 08:17 ?        00:00:00 postgres: autovacuum launcher process\r\n> postgres 28055 28048  0 08:17 ?        
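A quick way to check the socket-directory theory on the box in question, while both postmasters are up, is to look for the socket files directly and point the 8.4 client at each candidate directory explicitly. The port (50432) and the two directories below are only the values quoted in this thread, so treat them as assumptions and substitute your own if the builds differ:

ls -l /tmp/.s.PGSQL.50432 /var/run/postgresql/.s.PGSQL.50432 2>/dev/null

# try the 8.4 client against each directory in turn
/usr/bin/psql -p 50432 -h /tmp -U postgres -d postgres -c 'select version()'
/usr/bin/psql -p 50432 -h /var/run/postgresql -U postgres -d postgres -c 'select version()'

Whichever -h value lets psql in is the directory that pg_upgrade's own connections to that cluster will need to find.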
00:00:00 postgres: stats collector process\r\n> root     28057  2884  0 08:17 pts/0    00:00:00 grep --color=auto postgre\r\n> \r\n> \r\n> Also i am able to start db with the command provided by you and run psql.\r\n> \r\n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p 50432 -c listen_addresses='' -c unix_socket_permissions=0700\"  -D /var/ericsson/esm-data/postgresql-data-9.4/\r\n> pg_ctl: another server might be running; trying to start server anyway\r\n> server starting\r\n> -bash-4.2$ 2018-04-19 08:22:46.527 IST  LOG:  redirecting log output to logging collector process\r\n> 2018-04-19 08:22:46.527 IST  HINT:  Future log output will appear in directory \"pg_log\".\r\n> \r\n> -bash-4.2$ ps -eaf | grep postg\r\n> root      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n> postgres 28174     1  0 08:22 pts/1    00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses= -c unix_socket_permissions=0700\r\n> postgres 28175 28174  0 08:22 ?        00:00:00 postgres: logger process\r\n> postgres 28177 28174  0 08:22 ?        00:00:00 postgres: checkpointer process\r\n> postgres 28178 28174  0 08:22 ?        00:00:00 postgres: writer process\r\n> postgres 28179 28174  0 08:22 ?        00:00:00 postgres: wal writer process\r\n> postgres 28180 28174  0 08:22 ?        00:00:00 postgres: autovacuum launcher process\r\n> postgres 28181 28174  0 08:22 ?        00:00:00 postgres: stats collector process\r\n> postgres 28182  8647  0 08:22 pts/1    00:00:00 ps -eaf\r\n> postgres 28183  8647  0 08:22 pts/1    00:00:00 grep --color=auto postg\r\n> \r\n> -bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\r\n> psql (8.4.20, server 9.4.9)\r\n> WARNING: psql version 8.4, server version 9.4.\r\n>          Some psql features might not work.\r\n> Type \"help\" for help.\r\n> \r\n> rhq=>\r\n> \r\n> \r\n> Still its failing...\r\n> \r\n> -bash-4.2$ ps -efa | grep postgre\r\n> root      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n> postgres 28349  8647  0 08:34 pts/1    00:00:00 ps -efa\r\n> postgres 28350  8647  0 08:34 pts/1    00:00:00 grep --color=auto postgre\r\n> \r\n> -bash-4.2$ echo $OLDCLUSTER\r\n> /usr/bin/postgres\r\n> -bash-4.2$ echo $NEWCLUSTER\r\n> /opt/rh/rh-postgresql94/\r\n> \r\n> [root@ms-esmon rh-postgresql94]# /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin --old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\r\n> \r\n> Performing Consistency Checks\r\n> -----------------------------\r\n> Checking cluster versions                                   ok\r\n> \r\n> connection to database failed: could not connect to server: No such file or directory\r\n>         Is the server running locally and accepting\r\n>         connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n> \r\n> \r\n> could not connect to old postmaster started with the command:\r\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700\" start\r\n> Failure, exiting\r\n> \r\n> With Best Regards\r\n> Akshay\r\n> Ericsson OSS MON\r\n> Tata Consultancy Services\r\n> Mailto: [email 
protected]\r\n> Website: http://www.tcs.com <http://www.tcs.com/>\r\n> ____________________________________________\r\n> Experience certainty.        IT Services\r\n>                        Business Solutions\r\n>                        Consulting\r\n> ____________________________________________\r\n> \r\n> \r\n> \r\n> \r\n> From:        Fabio Pardi <[email protected]>\r\n> To:        Akshay Ballarpure <[email protected]>\r\n> Cc:        [email protected]\r\n> Date:        04/18/2018 06:17 PM\r\n> Subject:        Re: pg_upgrade help\r\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n> \r\n> \r\n> \r\n> did you run initdb on the new db?\r\n> \r\n> what happens if you manually start the new db?\r\n> \r\n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p 50432 -c\r\n> listen_addresses='' -c unix_socket_permissions=0700\"  -D $NEWCLUSTER\r\n> \r\n> after starting it, can you connect to it using psql?\r\n> \r\n> psql -p 50432 -h /var/run/postgresql  -U your_user _db_\r\n> \r\n> \r\n> \r\n> regards,\r\n> \r\n> fabio pardi\r\n> \r\n> \r\n> On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\r\n>> Hi Fabio,\r\n>> sorry to bother you again, its still failing with stopping both server\r\n>> (8.4 and 9.4)\r\n>>\r\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>\r\n>> connection to database failed: could not connect to server: No such file\r\n>> or directory\r\n>>         Is the server running locally and accepting\r\n>>         connections on Unix domain socket\r\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n>>\r\n>>\r\n>> could not connect to old postmaster started with the command:\r\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>> unix_socket_permissions=0700\" start\r\n>> Failure, exiting\r\n>>\r\n>>\r\n>> With Best Regards\r\n>> Akshay\r\n>> Ericsson OSS MON\r\n>> Tata Consultancy Services\r\n>> Mailto: [email protected]\r\n>> Website: http://www.tcs.com <http://www.tcs.com/><http://www.tcs.com/>\r\n>> ____________________________________________\r\n>> Experience certainty.        
IT Services\r\n>>                        Business Solutions\r\n>>                        Consulting\r\n>> ____________________________________________\r\n>>\r\n>>\r\n>>\r\n>>\r\n>> From:        Fabio Pardi <[email protected]>\r\n>> To:        Akshay Ballarpure <[email protected]>,\r\n>> [email protected]\r\n>> Date:        04/18/2018 02:35 PM\r\n>> Subject:        Re: pg_upgrade help\r\n>> ------------------------------------------------------------------------\r\n>>\r\n>>\r\n>>\r\n>> Hi,\r\n>>\r\n>> i was too fast in reply (and perhaps i should drink my morning coffee\r\n>> before replying), I will try to be more detailed:\r\n>>\r\n>> both servers should be able to run at the moment you run pg_upgrade,\r\n>> that means the 2 servers should have been correctly stopped in advance,\r\n>> should have their configuration files, and new cluster initialized too.\r\n>>\r\n>> Then, as Sergei highlights here below, pg_upgrade will take care of the\r\n>> upgrade process, starting the servers.\r\n>>\r\n>>\r\n>> Here there is a step by step guide, i considered my best ally when it\r\n>> was time to upgrade:\r\n>>\r\n>> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\r\n>>\r\n>> note point 7:\r\n>>\r\n>> 'stop both servers'\r\n>>\r\n>>\r\n>> About the port the servers will run on, at point 9 there is some\r\n>> clarification:\r\n>>\r\n>> ' pg_upgrade defaults to running servers on port 50432 to avoid\r\n>> unintended client connections. You can use the same port number for both\r\n>> clusters when doing an upgrade because the old and new clusters will not\r\n>> be running at the same time. However, when checking an old running\r\n>> server, the old and new port numbers must be different.'\r\n>>\r\n>> Hope it helps,\r\n>>\r\n>> Fabio Pardi\r\n>>\r\n>>\r\n>> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\r\n>>> Thanks Fabio for instant reply.\r\n>>>\r\n>>> I now started 8.4 with 50432 and 9.4 with default port but still its\r\n>>> failing ...Can you please suggest what is wrong ?\r\n>>>\r\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>>\r\n>>> *failure*\r\n>>> Consult the last few lines of \"pg_upgrade_server.log\" for\r\n>>> the probable cause of the failure.\r\n>>>\r\n>>> There seems to be a postmaster servicing the old cluster.\r\n>>> Please shutdown that postmaster and try again.\r\n>>> Failure, exiting\r\n>>> -bash-4.2$ ps -eaf | grep postgres\r\n>>> root      8646  9365  0 08:07 pts/1    00:00:00 su - postgres\r\n>>> postgres  8647  8646  0 08:07 pts/1    00:00:00 -bash\r\n>>> postgres  9778     1  0 09:17 ?        00:00:00 /usr/bin/postgres -p\r\n>>> 50432 -D /var/ericsson/esm-data/postgresql-data/\r\n>>> postgres  9779  9778  0 09:17 ?        00:00:00 postgres: logger process\r\n>>> postgres  9781  9778  0 09:17 ?        00:00:00 postgres: writer process\r\n>>> postgres  9782  9778  0 09:17 ?        00:00:00 postgres: wal writer\r\n>>> process\r\n>>> postgres  9783  9778  0 09:17 ?        00:00:00 postgres: autovacuum\r\n>>> launcher process\r\n>>> postgres  9784  9778  0 09:17 ?        00:00:00 postgres: stats\r\n>>> collector process\r\n>>> postgres  9900     1  0 09:20 ?        00:00:00\r\n>>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\r\n>>> /var/ericsson/esm-data/postgresql-data-9.4/\r\n>>> postgres  9901  9900  0 09:20 ?        00:00:00 postgres: logger process\r\n>>> postgres  9903  9900  0 09:20 ?        
00:00:00 postgres: checkpointer\r\n>>> process\r\n>>> postgres  9904  9900  0 09:20 ?        00:00:00 postgres: writer process\r\n>>> postgres  9905  9900  0 09:20 ?        00:00:00 postgres: wal writer\r\n>>> process\r\n>>> postgres  9906  9900  0 09:20 ?        00:00:00 postgres: autovacuum\r\n>>> launcher process\r\n>>> postgres  9907  9900  0 09:20 ?        00:00:00 postgres: stats\r\n>>> collector process\r\n>>> postgres  9926  8647  0 09:21 pts/1    00:00:00 ps -eaf\r\n>>> postgres  9927  8647  0 09:21 pts/1    00:00:00 grep --color=auto postgres\r\n>>>\r\n>>>\r\n>>> -bash-4.2$ netstat -antp | grep 50432\r\n>>> (Not all processes could be identified, non-owned process info\r\n>>>  will not be shown, you would have to be root to see it all.)\r\n>>> tcp        0      0 127.0.0.1:50432         0.0.0.0:*              \r\n>>> LISTEN      9778/postgres\r\n>>> tcp6       0      0 ::1:50432               :::*                  \r\n>>>  LISTEN      9778/postgres\r\n>>> -bash-4.2$ netstat -antp | grep 5432\r\n>>> (Not all processes could be identified, non-owned process info\r\n>>>  will not be shown, you would have to be root to see it all.)\r\n>>> tcp        0      0 127.0.0.1:5432          0.0.0.0:*              \r\n>>> LISTEN      9900/postgres\r\n>>> tcp6       0      0 ::1:5432                :::*                  \r\n>>>  LISTEN      9900/postgres\r\n>>>\r\n>>> -----------------------------------------------------------------\r\n>>>   pg_upgrade run on Wed Apr 18 09:24:47 2018\r\n>>> -----------------------------------------------------------------\r\n>>>\r\n>>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\r\n>>> pg_ctl: another server might be running; trying to start server anyway\r\n>>> FATAL:  lock file \"postmaster.pid\" already exists\r\n>>> HINT:  Is another postmaster (PID 9778) running in data directory\r\n>>> \"/var/ericsson/esm-data/postgresql-data\"?\r\n>>> pg_ctl: could not start server\r\n>>> Examine the log output.\r\n>>>\r\n>>>\r\n>>> [root@ms-esmon /]# cat\r\n>>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\r\n>>> 9900\r\n>>> /var/ericsson/esm-data/postgresql-data-9.4\r\n>>> 1524039630\r\n>>> 5432\r\n>>> /var/run/postgresql\r\n>>> localhost\r\n>>>   5432001   2031616\r\n>>>  \r\n>>>  \r\n>>> [root@ms-esmon /]# cat\r\n>>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\r\n>>> 9778\r\n>>> /var/ericsson/esm-data/postgresql-data\r\n>>>  50432001   1998850\r\n>>>\r\n>>>\r\n>>>\r\n>>>\r\n>>> With Best Regards\r\n>>> Akshay\r\n>>>\r\n>>>\r\n>>>\r\n>>>\r\n>>>\r\n>>> From:        Fabio Pardi <[email protected]>\r\n>>> To:        Akshay Ballarpure <[email protected]>,\r\n>>> [email protected]\r\n>>> Date:        04/18/2018 01:06 PM\r\n>>> Subject:        Re: pg_upgrade help\r\n>>> ------------------------------------------------------------------------\r\n>>>\r\n>>>\r\n>>>\r\n>>> Hi,\r\n>>>\r\n>>> please avoid crossposting to multiple mailing lists.\r\n>>>\r\n>>>\r\n>>> You need to run both versions of the database, the old and the new.\r\n>>>\r\n>>> They need to run on different ports (note that it is impossible to run 2\r\n>>> different processes on the same port, that's not a postgresql thing)\r\n>>>\r\n>>>\r\n>>>\r\n>>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\r\n>>>> Hi all,\r\n>>>> I need help on 
pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\r\n>>>> response.\r\n>>>> Installed both version and stopped it. Do i need to run both version or\r\n>>>> only one 8.4 or 9.4 . Both should run on 50432 ?\r\n>>>>\r\n>>>>\r\n>>>> -bash-4.2$ id\r\n>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\r\n>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\r\n>>>>\r\n>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data    \r\n>>>>                        -- 8.4 data\r\n>>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\r\n>>>>                   -- 9.4 data\r\n>>>>\r\n>>>>\r\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>>>\r\n>>>> *connection to database failed: could not connect to server: No such\r\n>>>> file or directory*\r\n>>>>         Is the server running locally and accepting\r\n>>>>         connections on Unix domain socket\r\n>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n>>>>\r\n>>>>\r\n>>>> could not connect to old postmaster started with the command:\r\n>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>>>> unix_socket_permissions=0700\" start\r\n>>>> Failure, exiting\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> With Best Regards\r\n>>>> Akshay\r\n>>>>\r\n>>>> =====-----=====-----=====\r\n>>>> Notice: The information contained in this e-mail\r\n>>>> message and/or attachments to it may contain\r\n>>>> confidential or privileged information. If you are\r\n>>>> not the intended recipient, any dissemination, use,\r\n>>>> review, distribution, printing or copying of the\r\n>>>> information contained in this e-mail message\r\n>>>> and/or attachments to it are strictly prohibited. If\r\n>>>> you have received this communication in error,\r\n>>>> please notify us by reply e-mail or telephone and\r\n>>>> immediately and permanently delete the message\r\n>>>> and any attachments. Thank you\r\n>>>>\r\n>>>\r\n>>\r\n> \r\n", "msg_date": "Thu, 19 Apr 2018 12:14:01 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi Fabio,\nI think you have found the problem. Please find o/p below.\n\n\n-bash-4.2$ ps -aef | grep postgres\npostgres 478 1 0 13:40 ? 00:00:00 /usr/bin/postgres -p 50432 \n-D /var/ericsson/esm-data/postgresql-data/\npostgres 490 478 0 13:40 ? 00:00:00 postgres: logger process\npostgres 492 478 0 13:40 ? 00:00:00 postgres: writer process\npostgres 493 478 0 13:40 ? 00:00:00 postgres: wal writer \nprocess\npostgres 494 478 0 13:40 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 495 478 0 13:40 ? 00:00:00 postgres: stats collector \nprocess\npostgres 528 1 0 13:40 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\npostgres 529 528 0 13:40 ? 00:00:00 postgres: logger process\npostgres 531 528 0 13:40 ? 00:00:00 postgres: checkpointer \nprocess\npostgres 532 528 0 13:40 ? 00:00:00 postgres: writer process\npostgres 533 528 0 13:40 ? 00:00:00 postgres: wal writer \nprocess\npostgres 534 528 0 13:40 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 535 528 0 13:40 ? 
00:00:00 postgres: stats collector \nprocess\npostgres 734 8647 0 13:50 pts/1 00:00:00 ps -aef\npostgres 735 8647 0 13:50 pts/1 00:00:00 grep --color=auto postgres\nroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n\n9.4\n===\n\n-bash-4.2$ psql\npsql (8.4.20, server 9.4.9)\nWARNING: psql version 8.4, server version 9.4.\n Some psql features might not work.\nType \"help\" for help.\n\npostgres=#\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/psql\npsql (9.4.9)\nType \"help\" for help.\n\npostgres=#\n\n8.4\n====\n\n-bash-4.2$ psql -p 50432\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.50432\"?\n\n\n\n==========================================================================================================\n\nAfter setting PGHOST, i can connect to PSQL\n \n-bash-4.2$ echo $PGHOST\n/var/run/postgresql\n-bash-4.2$ psql -p 50432\npsql (8.4.20)\nType \"help\" for help.\n\npostgres=#\n\n \n \n\n\n \n\n\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. IT Services\n Business Solutions\n Consulting\n____________________________________________\n\n\n\n\nFrom: Fabio Pardi <[email protected]>\nTo: Akshay Ballarpure <[email protected]>, \[email protected]\nDate: 04/19/2018 03:45 PM\nSubject: Re: pg_upgrade help\n\n\n\nHi,\n\nwhile trying to reproduce your problem, i noticed that on my Centos 6 \ninstallations Postgres 8.4 and Postgres 9.6 (I do not have 9.4 readily \navailable) store the socket in different places:\n\nPostgres 9.6.6 uses /var/run/postgresql/\n\nPostgres 8.4 uses /tmp/\n\ntherefore using default settings, i can connect to 9.6 but not 8.4 without \nspecifying where the socket is\n\nConnect to 9.6\n\n12:01 postgres@machine:~# psql\npsql (8.4.20, server 9.6.6)\nWARNING: psql version 8.4, server version 9.6.\n Some psql features might not work.\nType \"help\" for help.\n\n---------\n\nConnect to 8.4\n\n12:01 postgres@machine:~# psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.5432\"?\n\n12:04 postgres@machine:~# psql -h /tmp\npsql (8.4.20)\nType \"help\" for help.\n\n\n\n\nI think you might be incurring in the same problem.\n\nCan you confirm it?\n\n\nregards,\n\nfabio pardi \n\n\n\n\n\nOn 04/19/2018 09:37 AM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> Yes i ran initdb on new database and able to start as below.\n> \n> [root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p 50432 -D \n/var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n> [root@ms-esmon root]# su - postgres -c \n\"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\n> [root@ms-esmon root]# 2018-04-19 08:17:53.553 IST LOG: redirecting log \noutput to logging collector process\n> 2018-04-19 08:17:53.553 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n> \n> [root@ms-esmon root]#\n> [root@ms-esmon root]# ps -eaf | grep postgre\n> sroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28009 1 2 08:17 ? 00:00:00 /usr/bin/postgres -p \n50432 -D /var/ericsson/esm-data/postgresql-data/ *--8.4*\n> postgres 28010 28009 0 08:17 ? 
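The output above suggests that, on this machine, the 8.4 server is in fact creating its socket under /var/run/postgresql, and psql only fails when the 8.4 client looks in its compiled-in default, /tmp. If that holds, one possible way to carry the same idea into the pg_upgrade run, untested here and resting on two assumptions, namely that pg_upgrade's libpq connections honour PGHOST like any ordinary client and that the old 8.4 postmaster accepts a socket directory passed through --old-options, would be roughly:

# stop both clusters first, then run as the postgres user
export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data
export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4
export PGHOST=/var/run/postgresql    # assumption: honoured by pg_upgrade's client connections

/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \
  --old-bindir=/usr/bin \
  --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \
  --old-datadir=$OLDCLUSTER \
  --new-datadir=$NEWCLUSTER \
  --old-options \"-c unix_socket_directory='/var/run/postgresql'\"

The --old-options value asks the 8.4 postmaster that pg_upgrade starts to create its socket in the same directory the 9.4 libpq reports in the error message; the directory must already exist and be writable by the postgres user, which appears to be the case here since the 9.4 cluster uses it.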
00:00:00 postgres: logger process\n> postgres 28012 28009 0 08:17 ? 00:00:00 postgres: writer process\n> postgres 28013 28009 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28014 28009 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28015 28009 0 08:17 ? 00:00:00 postgres: stats \ncollector process\n> postgres 28048 1 0 08:17 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\n> postgres 28049 28048 0 08:17 ? 00:00:00 postgres: logger process\n> postgres 28051 28048 0 08:17 ? 00:00:00 postgres: checkpointer \nprocess\n> postgres 28052 28048 0 08:17 ? 00:00:00 postgres: writer process\n> postgres 28053 28048 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28054 28048 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28055 28048 0 08:17 ? 00:00:00 postgres: stats \ncollector process\n> root 28057 2884 0 08:17 pts/0 00:00:00 grep --color=auto \npostgre\n> \n> \n> Also i am able to start db with the command provided by you and run \npsql.\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c \nlisten_addresses='' -c unix_socket_permissions=0700\" -D \n/var/ericsson/esm-data/postgresql-data-9.4/\n> pg_ctl: another server might be running; trying to start server anyway\n> server starting\n> -bash-4.2$ 2018-04-19 08:22:46.527 IST LOG: redirecting log output to \nlogging collector process\n> 2018-04-19 08:22:46.527 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n> \n> -bash-4.2$ ps -eaf | grep postg\n> root 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28174 1 0 08:22 pts/1 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses= \n-c unix_socket_permissions=0700\n> postgres 28175 28174 0 08:22 ? 00:00:00 postgres: logger process\n> postgres 28177 28174 0 08:22 ? 00:00:00 postgres: checkpointer \nprocess\n> postgres 28178 28174 0 08:22 ? 00:00:00 postgres: writer process\n> postgres 28179 28174 0 08:22 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28180 28174 0 08:22 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28181 28174 0 08:22 ? 
00:00:00 postgres: stats \ncollector process\n> postgres 28182 8647 0 08:22 pts/1 00:00:00 ps -eaf\n> postgres 28183 8647 0 08:22 pts/1 00:00:00 grep --color=auto postg\n> \n> -bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\n> psql (8.4.20, server 9.4.9)\n> WARNING: psql version 8.4, server version 9.4.\n> Some psql features might not work.\n> Type \"help\" for help.\n> \n> rhq=>\n> \n> \n> Still its failing...\n> \n> -bash-4.2$ ps -efa | grep postgre\n> root 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28349 8647 0 08:34 pts/1 00:00:00 ps -efa\n> postgres 28350 8647 0 08:34 pts/1 00:00:00 grep --color=auto \npostgre\n> \n> -bash-4.2$ echo $OLDCLUSTER\n> /usr/bin/postgres\n> -bash-4.2$ echo $NEWCLUSTER\n> /opt/rh/rh-postgresql94/\n> \n> [root@ms-esmon rh-postgresql94]# \n/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=/var/ericsson/esm-data/postgresql-data \n--new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n> \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> \n> connection to database failed: could not connect to server: No such file \nor directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com <http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty. 
IT Services\n> Business Solutions\n> Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From: Fabio Pardi <[email protected]>\n> To: Akshay Ballarpure <[email protected]>\n> Cc: [email protected]\n> Date: 04/18/2018 06:17 PM\n> Subject: Re: pg_upgrade help\n> \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> \n> \n> \n> did you run initdb on the new db?\n> \n> what happens if you manually start the new db?\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c\n> listen_addresses='' -c unix_socket_permissions=0700\" -D $NEWCLUSTER\n> \n> after starting it, can you connect to it using psql?\n> \n> psql -p 50432 -h /var/run/postgresql -U your_user _db_\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> \n> On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n>> Hi Fabio,\n>> sorry to bother you again, its still failing with stopping both server\n>> (8.4 and 9.4)\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> connection to database failed: could not connect to server: No such \nfile\n>> or directory\n>> Is the server running locally and accepting\n>> connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>> With Best Regards\n>> Akshay\n>> Ericsson OSS MON\n>> Tata Consultancy Services\n>> Mailto: [email protected]\n>> Website: http://www.tcs.com <http://www.tcs.com/><http://www.tcs.com/>\n>> ____________________________________________\n>> Experience certainty. 
IT Services\n>> Business Solutions\n>> Consulting\n>> ____________________________________________\n>>\n>>\n>>\n>>\n>> From: Fabio Pardi <[email protected]>\n>> To: Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date: 04/18/2018 02:35 PM\n>> Subject: Re: pg_upgrade help\n>> \n------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> i was too fast in reply (and perhaps i should drink my morning coffee\n>> before replying), I will try to be more detailed:\n>>\n>> both servers should be able to run at the moment you run pg_upgrade,\n>> that means the 2 servers should have been correctly stopped in advance,\n>> should have their configuration files, and new cluster initialized too.\n>>\n>> Then, as Sergei highlights here below, pg_upgrade will take care of the\n>> upgrade process, starting the servers.\n>>\n>>\n>> Here there is a step by step guide, i considered my best ally when it\n>> was time to upgrade:\n>>\n>> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n>>\n>> note point 7:\n>>\n>> 'stop both servers'\n>>\n>>\n>> About the port the servers will run on, at point 9 there is some\n>> clarification:\n>>\n>> ' pg_upgrade defaults to running servers on port 50432 to avoid\n>> unintended client connections. You can use the same port number for \nboth\n>> clusters when doing an upgrade because the old and new clusters will \nnot\n>> be running at the same time. However, when checking an old running\n>> server, the old and new port numbers must be different.'\n>>\n>> Hope it helps,\n>>\n>> Fabio Pardi\n>>\n>>\n>> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>>> Thanks Fabio for instant reply.\n>>>\n>>> I now started 8.4 with 50432 and 9.4 with default port but still its\n>>> failing ...Can you please suggest what is wrong ?\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *failure*\n>>> Consult the last few lines of \"pg_upgrade_server.log\" for\n>>> the probable cause of the failure.\n>>>\n>>> There seems to be a postmaster servicing the old cluster.\n>>> Please shutdown that postmaster and try again.\n>>> Failure, exiting\n>>> -bash-4.2$ ps -eaf | grep postgres\n>>> root 8646 9365 0 08:07 pts/1 00:00:00 su - postgres\n>>> postgres 8647 8646 0 08:07 pts/1 00:00:00 -bash\n>>> postgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p\n>>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>>> postgres 9779 9778 0 09:17 ? 00:00:00 postgres: logger \nprocess\n>>> postgres 9781 9778 0 09:17 ? 00:00:00 postgres: writer \nprocess\n>>> postgres 9782 9778 0 09:17 ? 00:00:00 postgres: wal writer\n>>> process\n>>> postgres 9783 9778 0 09:17 ? 00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres 9784 9778 0 09:17 ? 00:00:00 postgres: stats\n>>> collector process\n>>> postgres 9900 1 0 09:20 ? 00:00:00\n>>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>>> /var/ericsson/esm-data/postgresql-data-9.4/\n>>> postgres 9901 9900 0 09:20 ? 00:00:00 postgres: logger \nprocess\n>>> postgres 9903 9900 0 09:20 ? 00:00:00 postgres: checkpointer\n>>> process\n>>> postgres 9904 9900 0 09:20 ? 00:00:00 postgres: writer \nprocess\n>>> postgres 9905 9900 0 09:20 ? 00:00:00 postgres: wal writer\n>>> process\n>>> postgres 9906 9900 0 09:20 ? 00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres 9907 9900 0 09:20 ? 
00:00:00 postgres: stats\n>>> collector process\n>>> postgres 9926 8647 0 09:21 pts/1 00:00:00 ps -eaf\n>>> postgres 9927 8647 0 09:21 pts/1 00:00:00 grep --color=auto \npostgres\n>>>\n>>>\n>>> -bash-4.2$ netstat -antp | grep 50432\n>>> (Not all processes could be identified, non-owned process info\n>>> will not be shown, you would have to be root to see it all.)\n>>> tcp 0 0 127.0.0.1:50432 0.0.0.0:* \n>>> LISTEN 9778/postgres\n>>> tcp6 0 0 ::1:50432 :::* \n>>> LISTEN 9778/postgres\n>>> -bash-4.2$ netstat -antp | grep 5432\n>>> (Not all processes could be identified, non-owned process info\n>>> will not be shown, you would have to be root to see it all.)\n>>> tcp 0 0 127.0.0.1:5432 0.0.0.0:* \n>>> LISTEN 9900/postgres\n>>> tcp6 0 0 ::1:5432 :::* \n>>> LISTEN 9900/postgres\n>>>\n>>> -----------------------------------------------------------------\n>>> pg_upgrade run on Wed Apr 18 09:24:47 2018\n>>> -----------------------------------------------------------------\n>>>\n>>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c \nautovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n>>> pg_ctl: another server might be running; trying to start server anyway\n>>> FATAL: lock file \"postmaster.pid\" already exists\n>>> HINT: Is another postmaster (PID 9778) running in data directory\n>>> \"/var/ericsson/esm-data/postgresql-data\"?\n>>> pg_ctl: could not start server\n>>> Examine the log output.\n>>>\n>>>\n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>>> 9900\n>>> /var/ericsson/esm-data/postgresql-data-9.4\n>>> 1524039630\n>>> 5432\n>>> /var/run/postgresql\n>>> localhost\n>>> 5432001 2031616\n>>> \n>>> \n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>>> 9778\n>>> /var/ericsson/esm-data/postgresql-data\n>>> 50432001 1998850\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> From: Fabio Pardi <[email protected]>\n>>> To: Akshay Ballarpure <[email protected]>,\n>>> [email protected]\n>>> Date: 04/18/2018 01:06 PM\n>>> Subject: Re: pg_upgrade help\n>>> \n------------------------------------------------------------------------\n>>>\n>>>\n>>>\n>>> Hi,\n>>>\n>>> please avoid crossposting to multiple mailing lists.\n>>>\n>>>\n>>> You need to run both versions of the database, the old and the new.\n>>>\n>>> They need to run on different ports (note that it is impossible to run \n2\n>>> different processes on the same port, that's not a postgresql thing)\n>>>\n>>>\n>>>\n>>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>>> Hi all,\n>>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>>>> response.\n>>>> Installed both version and stopped it. Do i need to run both version \nor\n>>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>>\n>>>>\n>>>> -bash-4.2$ id\n>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>>\n>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data \n \n>>>> -- 8.4 data\n>>>> -bash-4.2$ export \nNEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>>> -- 9.4 data\n>>>>\n>>>>\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>>> --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>>\n>>>> *connection to database failed: could not connect to server: No such\n>>>> file or directory*\n>>>> Is the server running locally and accepting\n>>>> connections on Unix domain socket\n>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>>\n>>>>\n>>>> could not connect to old postmaster started with the command:\n>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c \nautovacuum=off\n>>>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>>>> unix_socket_permissions=0700\" start\n>>>> Failure, exiting\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> With Best Regards\n>>>> Akshay\n>>>>\n>>>> =====-----=====-----=====\n>>>> Notice: The information contained in this e-mail\n>>>> message and/or attachments to it may contain\n>>>> confidential or privileged information. If you are\n>>>> not the intended recipient, any dissemination, use,\n>>>> review, distribution, printing or copying of the\n>>>> information contained in this e-mail message\n>>>> and/or attachments to it are strictly prohibited. If\n>>>> you have received this communication in error,\n>>>> please notify us by reply e-mail or telephone and\n>>>> immediately and permanently delete the message\n>>>> and any attachments. Thank you\n>>>>\n>>>\n>>\n> \n\n\nHi Fabio,\nI think you have found the problem. Please\nfind o/p below.\n\n\n-bash-4.2$ ps -aef | grep postgres\npostgres   478     1  0\n13:40 ?        00:00:00 /usr/bin/postgres -p 50432\n-D /var/ericsson/esm-data/postgresql-data/\npostgres   490   478  0 13:40\n?        00:00:00 postgres: logger process\npostgres   492   478  0 13:40\n?        00:00:00 postgres: writer process\npostgres   493   478  0 13:40\n?        00:00:00 postgres: wal writer process\npostgres   494   478  0 13:40\n?        00:00:00 postgres: autovacuum launcher process\npostgres   495   478  0 13:40\n?        00:00:00 postgres: stats collector process\npostgres   528     1  0\n13:40 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data-9.4/\npostgres   529   528  0 13:40\n?        00:00:00 postgres: logger process\npostgres   531   528  0 13:40\n?        00:00:00 postgres: checkpointer process\npostgres   532   528  0 13:40\n?        00:00:00 postgres: writer process\npostgres   533   528  0 13:40\n?        00:00:00 postgres: wal writer process\npostgres   534   528  0 13:40\n?        00:00:00 postgres: autovacuum launcher process\npostgres   535   528  0 13:40\n?        
00:00:00 postgres: stats collector process\npostgres   734  8647  0 13:50\npts/1    00:00:00 ps -aef\npostgres   735  8647  0 13:50\npts/1    00:00:00 grep --color=auto postgres\nroot      8646  9365\n 0 Apr18 pts/1    00:00:00 su - postgres\npostgres  8647  8646  0 Apr18\npts/1    00:00:00 -bash\n\n9.4\n===\n\n-bash-4.2$ psql\npsql (8.4.20, server 9.4.9)\nWARNING: psql version 8.4, server version\n9.4.\n         Some psql\nfeatures might not work.\nType \"help\" for help.\n\npostgres=#\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/psql\npsql (9.4.9)\nType \"help\" for help.\n\npostgres=#\n\n8.4\n====\n\n-bash-4.2$  psql -p 50432\npsql: could not connect to server: No\nsuch file or directory\n        Is the server\nrunning locally and accepting\n        connections\non Unix domain socket \"/tmp/.s.PGSQL.50432\"?\n\n\n\n==========================================================================================================\n\nAfter setting PGHOST, i can connect to PSQL\n         \n      \n-bash-4.2$ echo $PGHOST\n/var/run/postgresql\n-bash-4.2$ psql -p 50432\npsql (8.4.20)\nType \"help\" for help.\n\npostgres=#\n\n         \n      \n         \n      \n\n\n         \n      \n\n\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty.        IT Services\n                \n       Business Solutions\n                \n       Consulting\n____________________________________________\n\n\n\n\nFrom:      \n Fabio Pardi <[email protected]>\nTo:      \n Akshay Ballarpure <[email protected]>,\[email protected]\nDate:      \n 04/19/2018 03:45 PM\nSubject:    \n   Re: pg_upgrade\nhelp\n\n\n\n\nHi,\n\nwhile trying to reproduce your problem, i noticed that on my Centos 6 installations\nPostgres 8.4 and Postgres 9.6 (I do not have 9.4 readily available) store\nthe socket in different places:\n\nPostgres 9.6.6 uses /var/run/postgresql/\n\nPostgres 8.4 uses /tmp/\n\ntherefore using default settings, i can connect to 9.6 but not 8.4 without\nspecifying where the socket is\n\nConnect to 9.6\n\n12:01 postgres@machine:~# psql\npsql (8.4.20, server 9.6.6)\nWARNING: psql version 8.4, server version 9.6.\n         Some psql features might not work.\nType \"help\" for help.\n\n---------\n\nConnect to 8.4\n\n12:01 postgres@machine:~# psql\npsql: could not connect to server: No such file or directory\n        Is the server running locally and accepting\n        connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n\n12:04 postgres@machine:~# psql -h /tmp\npsql (8.4.20)\nType \"help\" for help.\n\n\n\n\nI think you might be incurring in the same problem.\n\nCan you confirm it?\n\n\nregards,\n\nfabio pardi \n\n\n\n\n\nOn 04/19/2018 09:37 AM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> Yes i ran initdb on new database and able to start as below.\n> \n> [root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p\n50432 -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n> [root@ms-esmon root]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres\n-D /var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\n> [root@ms-esmon root]# 2018-04-19 08:17:53.553 IST  LOG:  redirecting\nlog output to logging collector process\n> 2018-04-19 08:17:53.553 IST  HINT:  Future log output will\nappear in directory \"pg_log\".\n> \n> [root@ms-esmon root]#\n> [root@ms-esmon root]# ps -eaf | grep postgre\n> sroot      8646  9365  0 Apr18 pts/1  \n 00:00:00 su - 
postgres\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00\n-bash\n> postgres 28009     1  2 08:17 ?      \n 00:00:00 /usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/\n *--8.4*\n> postgres 28010 28009  0 08:17 ?        00:00:00\npostgres: logger process\n> postgres 28012 28009  0 08:17 ?        00:00:00\npostgres: writer process\n> postgres 28013 28009  0 08:17 ?        00:00:00\npostgres: wal writer process\n> postgres 28014 28009  0 08:17 ?        00:00:00\npostgres: autovacuum launcher process\n> postgres 28015 28009  0 08:17 ?        00:00:00\npostgres: stats collector process\n> postgres 28048     1  0 08:17 ?      \n 00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/\n> postgres 28049 28048  0 08:17 ?        00:00:00\npostgres: logger process\n> postgres 28051 28048  0 08:17 ?        00:00:00\npostgres: checkpointer process\n> postgres 28052 28048  0 08:17 ?        00:00:00\npostgres: writer process\n> postgres 28053 28048  0 08:17 ?        00:00:00\npostgres: wal writer process\n> postgres 28054 28048  0 08:17 ?        00:00:00\npostgres: autovacuum launcher process\n> postgres 28055 28048  0 08:17 ?        00:00:00\npostgres: stats collector process\n> root     28057  2884  0 08:17 pts/0    00:00:00\ngrep --color=auto postgre\n> \n> \n> Also i am able to start db with the command provided by you and run\npsql.\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p\n50432 -c listen_addresses='' -c unix_socket_permissions=0700\"  -D\n/var/ericsson/esm-data/postgresql-data-9.4/\n> pg_ctl: another server might be running; trying to start server anyway\n> server starting\n> -bash-4.2$ 2018-04-19 08:22:46.527 IST  LOG:  redirecting\nlog output to logging collector process\n> 2018-04-19 08:22:46.527 IST  HINT:  Future log output will\nappear in directory \"pg_log\".\n> \n> -bash-4.2$ ps -eaf | grep postg\n> root      8646  9365  0 Apr18 pts/1  \n 00:00:00 su - postgres\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00\n-bash\n> postgres 28174     1  0 08:22 pts/1    00:00:00\n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4\n-p 50432 -c listen_addresses= -c unix_socket_permissions=0700\n> postgres 28175 28174  0 08:22 ?        00:00:00\npostgres: logger process\n> postgres 28177 28174  0 08:22 ?        00:00:00\npostgres: checkpointer process\n> postgres 28178 28174  0 08:22 ?        00:00:00\npostgres: writer process\n> postgres 28179 28174  0 08:22 ?        00:00:00\npostgres: wal writer process\n> postgres 28180 28174  0 08:22 ?        00:00:00\npostgres: autovacuum launcher process\n> postgres 28181 28174  0 08:22 ?        
00:00:00\npostgres: stats collector process\n> postgres 28182  8647  0 08:22 pts/1    00:00:00\nps -eaf\n> postgres 28183  8647  0 08:22 pts/1    00:00:00\ngrep --color=auto postg\n> \n> -bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\n> psql (8.4.20, server 9.4.9)\n> WARNING: psql version 8.4, server version 9.4.\n>          Some psql features might not work.\n> Type \"help\" for help.\n> \n> rhq=>\n> \n> \n> Still its failing...\n> \n> -bash-4.2$ ps -efa | grep postgre\n> root      8646  9365  0 Apr18 pts/1  \n 00:00:00 su - postgres\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00\n-bash\n> postgres 28349  8647  0 08:34 pts/1    00:00:00\nps -efa\n> postgres 28350  8647  0 08:34 pts/1    00:00:00\ngrep --color=auto postgre\n> \n> -bash-4.2$ echo $OLDCLUSTER\n> /usr/bin/postgres\n> -bash-4.2$ echo $NEWCLUSTER\n> /opt/rh/rh-postgresql94/\n> \n> [root@ms-esmon rh-postgresql94]# /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n--old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n--old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n> \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions            \n                     \nok\n> \n> connection to database failed: could not connect to server: No such\nfile or directory\n>         Is the server running locally and accepting\n>         connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432\n-c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c unix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com\n<http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty.        
IT Services\n>                    \n   Business Solutions\n>                    \n   Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From:        Fabio Pardi <[email protected]>\n> To:        Akshay Ballarpure <[email protected]>\n> Cc:        [email protected]\n> Date:        04/18/2018 06:17 PM\n> Subject:        Re: pg_upgrade help\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n -----\n> \n> \n> \n> did you run initdb on the new db?\n> \n> what happens if you manually start the new db?\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p\n50432 -c\n> listen_addresses='' -c unix_socket_permissions=0700\"  -D\n$NEWCLUSTER\n> \n> after starting it, can you connect to it using psql?\n> \n> psql -p 50432 -h /var/run/postgresql  -U your_user _db_\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> \n> On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n>> Hi Fabio,\n>> sorry to bother you again, its still failing with stopping both\nserver\n>> (8.4 and 9.4)\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> connection to database failed: could not connect to server: No\nsuch file\n>> or directory\n>>         Is the server running locally and\naccepting\n>>         connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p\n50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>> With Best Regards\n>> Akshay\n>> Ericsson OSS MON\n>> Tata Consultancy Services\n>> Mailto: [email protected]\n>> Website: http://www.tcs.com\n<http://www.tcs.com/><http://www.tcs.com/>\n>> ____________________________________________\n>> Experience certainty.        
IT Services\n>>                  \n     Business Solutions\n>>                  \n     Consulting\n>> ____________________________________________\n>>\n>>\n>>\n>>\n>> From:        Fabio Pardi <[email protected]>\n>> To:        Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date:        04/18/2018 02:35 PM\n>> Subject:        Re: pg_upgrade help\n>> ------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> i was too fast in reply (and perhaps i should drink my morning\ncoffee\n>> before replying), I will try to be more detailed:\n>>\n>> both servers should be able to run at the moment you run pg_upgrade,\n>> that means the 2 servers should have been correctly stopped in\nadvance,\n>> should have their configuration files, and new cluster initialized\ntoo.\n>>\n>> Then, as Sergei highlights here below, pg_upgrade will take care\nof the\n>> upgrade process, starting the servers.\n>>\n>>\n>> Here there is a step by step guide, i considered my best ally\nwhen it\n>> was time to upgrade:\n>>\n>> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n>>\n>> note point 7:\n>>\n>> 'stop both servers'\n>>\n>>\n>> About the port the servers will run on, at point 9 there is some\n>> clarification:\n>>\n>> ' pg_upgrade defaults to running servers on port 50432 to avoid\n>> unintended client connections. You can use the same port number\nfor both\n>> clusters when doing an upgrade because the old and new clusters\nwill not\n>> be running at the same time. However, when checking an old running\n>> server, the old and new port numbers must be different.'\n>>\n>> Hope it helps,\n>>\n>> Fabio Pardi\n>>\n>>\n>> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>>> Thanks Fabio for instant reply.\n>>>\n>>> I now started 8.4 with 50432 and 9.4 with default port but\nstill its\n>>> failing ...Can you please suggest what is wrong ?\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *failure*\n>>> Consult the last few lines of \"pg_upgrade_server.log\"\nfor\n>>> the probable cause of the failure.\n>>>\n>>> There seems to be a postmaster servicing the old cluster.\n>>> Please shutdown that postmaster and try again.\n>>> Failure, exiting\n>>> -bash-4.2$ ps -eaf | grep postgres\n>>> root      8646  9365  0 08:07 pts/1\n   00:00:00 su - postgres\n>>> postgres  8647  8646  0 08:07 pts/1  \n 00:00:00 -bash\n>>> postgres  9778     1  0 09:17 ?  \n     00:00:00 /usr/bin/postgres -p\n>>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>>> postgres  9779  9778  0 09:17 ?    \n   00:00:00 postgres: logger process\n>>> postgres  9781  9778  0 09:17 ?    \n   00:00:00 postgres: writer process\n>>> postgres  9782  9778  0 09:17 ?    \n   00:00:00 postgres: wal writer\n>>> process\n>>> postgres  9783  9778  0 09:17 ?    \n   00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres  9784  9778  0 09:17 ?    \n   00:00:00 postgres: stats\n>>> collector process\n>>> postgres  9900     1  0 09:20 ?  \n     00:00:00\n>>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>>> /var/ericsson/esm-data/postgresql-data-9.4/\n>>> postgres  9901  9900  0 09:20 ?    \n   00:00:00 postgres: logger process\n>>> postgres  9903  9900  0 09:20 ?    \n   00:00:00 postgres: checkpointer\n>>> process\n>>> postgres  9904  9900  0 09:20 ?    \n   00:00:00 postgres: writer process\n>>> postgres  9905  9900  0 09:20 ?    
\n   00:00:00 postgres: wal writer\n>>> process\n>>> postgres  9906  9900  0 09:20 ?    \n   00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres  9907  9900  0 09:20 ?    \n   00:00:00 postgres: stats\n>>> collector process\n>>> postgres  9926  8647  0 09:21 pts/1  \n 00:00:00 ps -eaf\n>>> postgres  9927  8647  0 09:21 pts/1  \n 00:00:00 grep --color=auto postgres\n>>>\n>>>\n>>> -bash-4.2$ netstat -antp | grep 50432\n>>> (Not all processes could be identified, non-owned process\ninfo\n>>>  will not be shown, you would have to be root to see\nit all.)\n>>> tcp        0      0 127.0.0.1:50432\n        0.0.0.0:*          \n   \n>>> LISTEN      9778/postgres\n>>> tcp6       0      0 ::1:50432\n              :::*      \n           \n>>>  LISTEN      9778/postgres\n>>> -bash-4.2$ netstat -antp | grep 5432\n>>> (Not all processes could be identified, non-owned process\ninfo\n>>>  will not be shown, you would have to be root to see\nit all.)\n>>> tcp        0      0 127.0.0.1:5432\n         0.0.0.0:*        \n     \n>>> LISTEN      9900/postgres\n>>> tcp6       0      0 ::1:5432\n               :::*    \n             \n>>>  LISTEN      9900/postgres\n>>>\n>>> -----------------------------------------------------------------\n>>>   pg_upgrade run on Wed Apr 18 09:24:47 2018\n>>> -----------------------------------------------------------------\n>>>\n>>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p\n50432 -c autovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\"\n2>&1\n>>> pg_ctl: another server might be running; trying to start server\nanyway\n>>> FATAL:  lock file \"postmaster.pid\" already\nexists\n>>> HINT:  Is another postmaster (PID 9778) running in data\ndirectory\n>>> \"/var/ericsson/esm-data/postgresql-data\"?\n>>> pg_ctl: could not start server\n>>> Examine the log output.\n>>>\n>>>\n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>>> 9900\n>>> /var/ericsson/esm-data/postgresql-data-9.4\n>>> 1524039630\n>>> 5432\n>>> /var/run/postgresql\n>>> localhost\n>>>   5432001   2031616\n>>>  \n>>>  \n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>>> 9778\n>>> /var/ericsson/esm-data/postgresql-data\n>>>  50432001   1998850\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> From:        Fabio Pardi <[email protected]>\n>>> To:        Akshay Ballarpure <[email protected]>,\n>>> [email protected]\n>>> Date:        04/18/2018 01:06 PM\n>>> Subject:        Re: pg_upgrade help\n>>> ------------------------------------------------------------------------\n>>>\n>>>\n>>>\n>>> Hi,\n>>>\n>>> please avoid crossposting to multiple mailing lists.\n>>>\n>>>\n>>> You need to run both versions of the database, the old and\nthe new.\n>>>\n>>> They need to run on different ports (note that it is impossible\nto run 2\n>>> different processes on the same port, that's not a postgresql\nthing)\n>>>\n>>>\n>>>\n>>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>>> Hi all,\n>>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate\nurgent\n>>>> response.\n>>>> Installed both version and stopped it. Do i need to run\nboth version or\n>>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>>\n>>>>\n>>>> -bash-4.2$ id\n>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>>\n>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n   \n>>>>                \n       -- 8.4 data\n>>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>>>                \n  -- 9.4 data\n>>>>\n>>>>\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>>\n>>>> *connection to database failed: could not connect to server:\nNo such\n>>>> file or directory*\n>>>>         Is the server running locally\nand accepting\n>>>>         connections on Unix domain\nsocket\n>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>>\n>>>>\n>>>> could not connect to old postmaster started with the command:\n>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o\n\"-p 50432 -c autovacuum=off\n>>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>>>> unix_socket_permissions=0700\" start\n>>>> Failure, exiting\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> With Best Regards\n>>>> Akshay\n>>>>\n>>>> =====-----=====-----=====\n>>>> Notice: The information contained in this e-mail\n>>>> message and/or attachments to it may contain\n>>>> confidential or privileged information. If you are\n>>>> not the intended recipient, any dissemination, use,\n>>>> review, distribution, printing or copying of the\n>>>> information contained in this e-mail message\n>>>> and/or attachments to it are strictly prohibited. If\n>>>> you have received this communication in error,\n>>>> please notify us by reply e-mail or telephone and\n>>>> immediately and permanently delete the message\n>>>> and any attachments. Thank you\n>>>>\n>>>\n>>\n>", "msg_date": "Thu, 19 Apr 2018 18:26:38 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi Fabio,\nThanks so much for figuring out an issue..!!! much appreciated.\ni have stopped both postgres version (8.4 and 9.4) \n\n-bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data - \npostgresql 8.4\n-bash-4.2$ pg_ctl stop -mfast\nwaiting for server to shut down.... done\nserver stopped\n\n\n-bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data-9.4/ - \npostgresql 9.4\n-bash-4.2$ ps -eaf | grep postgre^C\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl stop -mfast\nwaiting for server to shut down.... done\nserver stopped\n\n\nAnd set below environment variables on terminal where i ran pg_upgrade. \nand its working fine. thanks so much for figuring out an issue..!!! 
much \nappreciated.\n\n-bash-4.2$ echo $PGDATA\n/var/ericsson/esm-data/postgresql-data - postgresql 8.4\n-bash-4.2$ echo $PGHOST\n/var/run/postgresql\n\n\n-bash-4.2$ env | grep PG\nPGHOST=/var/run/postgresql\nPGDATA=/var/ericsson/esm-data/postgresql-data\n\n\n/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=/var/ericsson/esm-data/postgresql-data \n--new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n\n\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nChecking database user is a superuser ok\nChecking database connection settings ok\nChecking for prepared transactions ok\nChecking for reg* system OID user data types ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for invalid \"line\" user columns ok\nChecking for large objects ok\nCreating dump of global objects ok\nCreating dump of database schemas\n ok\nChecking for presence of required libraries ok\nChecking database user is a superuser ok\nChecking for prepared transactions ok\n\nIf pg_upgrade fails after this point, you must re-initdb the\nnew cluster before continuing.\n\nPerforming Upgrade\n------------------\nAnalyzing all rows in the new cluster ok\nFreezing all rows on the new cluster ok\nDeleting files from new pg_clog ok\nCopying old pg_clog to new server ok\nSetting next transaction ID and epoch for new cluster ok\nDeleting files from new pg_multixact/offsets ok\nSetting oldest multixact ID on new cluster ok\nResetting WAL archives ok\nSetting frozenxid and minmxid counters in new cluster ok\nRestoring global objects in the new cluster ok\nAdding support functions to new cluster ok\nRestoring database schemas in the new cluster\n ok\nSetting minmxid counter in new cluster ok\nRemoving support functions from new cluster ok\nCopying user relation files\n ok\nSetting next OID for new cluster ok\nSync data directory to disk ok\nCreating script to analyze new cluster ok\nCreating script to delete old cluster ok\nChecking for large objects ok\n\nUpgrade Complete\n----------------\nOptimizer statistics are not transferred by pg_upgrade so,\nonce you start the new server, consider running:\n analyze_new_cluster.sh\n\nRunning this script will delete the old cluster's data files:\n delete_old_cluster.sh\n\n\n\nNow few more questions..\n\nI migrated export PGDATA=/var/ericsson/esm-data/postgresql-data - \npostgresql 8.4 \nI can start 9.4 with above PGDATA right ?\nanalyze_new_cluster.sh -- is this script will be from 9.4 ?\n\n\n\n\n\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. IT Services\n Business Solutions\n Consulting\n____________________________________________\n\n\n\n\nFrom: Akshay Ballarpure/HYD/TCS\nTo: Fabio Pardi <[email protected]>\nCc: [email protected]\nDate: 04/19/2018 06:24 PM\nSubject: Re: pg_upgrade help\n\n\nHi Fabio,\nI think you have found the problem. Please find o/p below.\n\n\n-bash-4.2$ ps -aef | grep postgres\npostgres 478 1 0 13:40 ? 00:00:00 /usr/bin/postgres -p 50432 \n-D /var/ericsson/esm-data/postgresql-data/\npostgres 490 478 0 13:40 ? 00:00:00 postgres: logger process\npostgres 492 478 0 13:40 ? 00:00:00 postgres: writer process\npostgres 493 478 0 13:40 ? 00:00:00 postgres: wal writer \nprocess\npostgres 494 478 0 13:40 ? 
00:00:00 postgres: autovacuum \nlauncher process\npostgres 495 478 0 13:40 ? 00:00:00 postgres: stats collector \nprocess\npostgres 528 1 0 13:40 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\npostgres 529 528 0 13:40 ? 00:00:00 postgres: logger process\npostgres 531 528 0 13:40 ? 00:00:00 postgres: checkpointer \nprocess\npostgres 532 528 0 13:40 ? 00:00:00 postgres: writer process\npostgres 533 528 0 13:40 ? 00:00:00 postgres: wal writer \nprocess\npostgres 534 528 0 13:40 ? 00:00:00 postgres: autovacuum \nlauncher process\npostgres 535 528 0 13:40 ? 00:00:00 postgres: stats collector \nprocess\npostgres 734 8647 0 13:50 pts/1 00:00:00 ps -aef\npostgres 735 8647 0 13:50 pts/1 00:00:00 grep --color=auto postgres\nroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\npostgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n\n9.4\n===\n\n-bash-4.2$ psql\npsql (8.4.20, server 9.4.9)\nWARNING: psql version 8.4, server version 9.4.\n Some psql features might not work.\nType \"help\" for help.\n\npostgres=#\n\n-bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/psql\npsql (9.4.9)\nType \"help\" for help.\n\npostgres=#\n\n8.4\n====\n\n-bash-4.2$ psql -p 50432\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/tmp/.s.PGSQL.50432\"?\n\n\n\n==========================================================================================================\n\nAfter setting PGHOST, i can connect to PSQL\n \n-bash-4.2$ echo $PGHOST\n/var/run/postgresql\n-bash-4.2$ psql -p 50432\npsql (8.4.20)\nType \"help\" for help.\n\npostgres=#\n\n \n \n\n\n \n\n\n\n\nWith Best Regards\nAkshay\nEricsson OSS MON\nTata Consultancy Services\nMailto: [email protected]\nWebsite: http://www.tcs.com\n____________________________________________\nExperience certainty. 
IT Services\n Business Solutions\n Consulting\n____________________________________________\n\n\n\n\n\nFrom: Fabio Pardi <[email protected]>\nTo: Akshay Ballarpure <[email protected]>, \[email protected]\nDate: 04/19/2018 03:45 PM\nSubject: Re: pg_upgrade help\n\n\n\nHi,\n\nwhile trying to reproduce your problem, i noticed that on my Centos 6 \ninstallations Postgres 8.4 and Postgres 9.6 (I do not have 9.4 readily \navailable) store the socket in different places:\n\nPostgres 9.6.6 uses /var/run/postgresql/\n\nPostgres 8.4 uses /tmp/\n\ntherefore using default settings, i can connect to 9.6 but not 8.4 without \nspecifying where the socket is\n\nConnect to 9.6\n\n12:01 postgres@machine:~# psql\npsql (8.4.20, server 9.6.6)\nWARNING: psql version 8.4, server version 9.6.\n Some psql features might not work.\nType \"help\" for help.\n\n---------\n\nConnect to 8.4\n\n12:01 postgres@machine:~# psql\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.5432\"?\n\n12:04 postgres@machine:~# psql -h /tmp\npsql (8.4.20)\nType \"help\" for help.\n\n\n\n\nI think you might be incurring in the same problem.\n\nCan you confirm it?\n\n\nregards,\n\nfabio pardi \n\n\n\n\n\nOn 04/19/2018 09:37 AM, Akshay Ballarpure wrote:\n> Hi Fabio,\n> Yes i ran initdb on new database and able to start as below.\n> \n> [root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p 50432 -D \n/var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\n> [root@ms-esmon root]# su - postgres -c \n\"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\n> [root@ms-esmon root]# 2018-04-19 08:17:53.553 IST LOG: redirecting log \noutput to logging collector process\n> 2018-04-19 08:17:53.553 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n> \n> [root@ms-esmon root]#\n> [root@ms-esmon root]# ps -eaf | grep postgre\n> sroot 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28009 1 2 08:17 ? 00:00:00 /usr/bin/postgres -p \n50432 -D /var/ericsson/esm-data/postgresql-data/ *--8.4*\n> postgres 28010 28009 0 08:17 ? 00:00:00 postgres: logger process\n> postgres 28012 28009 0 08:17 ? 00:00:00 postgres: writer process\n> postgres 28013 28009 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28014 28009 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28015 28009 0 08:17 ? 00:00:00 postgres: stats \ncollector process\n> postgres 28048 1 0 08:17 ? 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4/\n> postgres 28049 28048 0 08:17 ? 00:00:00 postgres: logger process\n> postgres 28051 28048 0 08:17 ? 00:00:00 postgres: checkpointer \nprocess\n> postgres 28052 28048 0 08:17 ? 00:00:00 postgres: writer process\n> postgres 28053 28048 0 08:17 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28054 28048 0 08:17 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28055 28048 0 08:17 ? 
00:00:00 postgres: stats \ncollector process\n> root 28057 2884 0 08:17 pts/0 00:00:00 grep --color=auto \npostgre\n> \n> \n> Also i am able to start db with the command provided by you and run \npsql.\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c \nlisten_addresses='' -c unix_socket_permissions=0700\" -D \n/var/ericsson/esm-data/postgresql-data-9.4/\n> pg_ctl: another server might be running; trying to start server anyway\n> server starting\n> -bash-4.2$ 2018-04-19 08:22:46.527 IST LOG: redirecting log output to \nlogging collector process\n> 2018-04-19 08:22:46.527 IST HINT: Future log output will appear in \ndirectory \"pg_log\".\n> \n> -bash-4.2$ ps -eaf | grep postg\n> root 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28174 1 0 08:22 pts/1 00:00:00 \n/opt/rh/rh-postgresql94/root/usr/bin/postgres -D \n/var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses= \n-c unix_socket_permissions=0700\n> postgres 28175 28174 0 08:22 ? 00:00:00 postgres: logger process\n> postgres 28177 28174 0 08:22 ? 00:00:00 postgres: checkpointer \nprocess\n> postgres 28178 28174 0 08:22 ? 00:00:00 postgres: writer process\n> postgres 28179 28174 0 08:22 ? 00:00:00 postgres: wal writer \nprocess\n> postgres 28180 28174 0 08:22 ? 00:00:00 postgres: autovacuum \nlauncher process\n> postgres 28181 28174 0 08:22 ? 00:00:00 postgres: stats \ncollector process\n> postgres 28182 8647 0 08:22 pts/1 00:00:00 ps -eaf\n> postgres 28183 8647 0 08:22 pts/1 00:00:00 grep --color=auto postg\n> \n> -bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\n> psql (8.4.20, server 9.4.9)\n> WARNING: psql version 8.4, server version 9.4.\n> Some psql features might not work.\n> Type \"help\" for help.\n> \n> rhq=>\n> \n> \n> Still its failing...\n> \n> -bash-4.2$ ps -efa | grep postgre\n> root 8646 9365 0 Apr18 pts/1 00:00:00 su - postgres\n> postgres 8647 8646 0 Apr18 pts/1 00:00:00 -bash\n> postgres 28349 8647 0 08:34 pts/1 00:00:00 ps -efa\n> postgres 28350 8647 0 08:34 pts/1 00:00:00 grep --color=auto \npostgre\n> \n> -bash-4.2$ echo $OLDCLUSTER\n> /usr/bin/postgres\n> -bash-4.2$ echo $NEWCLUSTER\n> /opt/rh/rh-postgresql94/\n> \n> [root@ms-esmon rh-postgresql94]# \n/opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \n--old-datadir=/var/ericsson/esm-data/postgresql-data \n--new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\n> \n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> \n> connection to database failed: could not connect to server: No such file \nor directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \n\"/var/run/postgresql/.s.PGSQL.50432\"?\n> \n> \n> could not connect to old postmaster started with the command:\n> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \n\"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c \nautovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c \nunix_socket_permissions=0700\" start\n> Failure, exiting\n> \n> With Best Regards\n> Akshay\n> Ericsson OSS MON\n> Tata Consultancy Services\n> Mailto: [email protected]\n> Website: http://www.tcs.com <http://www.tcs.com/>\n> ____________________________________________\n> Experience certainty. 
IT Services\n> Business Solutions\n> Consulting\n> ____________________________________________\n> \n> \n> \n> \n> From: Fabio Pardi <[email protected]>\n> To: Akshay Ballarpure <[email protected]>\n> Cc: [email protected]\n> Date: 04/18/2018 06:17 PM\n> Subject: Re: pg_upgrade help\n> \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> \n> \n> \n> did you run initdb on the new db?\n> \n> what happens if you manually start the new db?\n> \n> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl start -o \"-p 50432 -c\n> listen_addresses='' -c unix_socket_permissions=0700\" -D $NEWCLUSTER\n> \n> after starting it, can you connect to it using psql?\n> \n> psql -p 50432 -h /var/run/postgresql -U your_user _db_\n> \n> \n> \n> regards,\n> \n> fabio pardi\n> \n> \n> On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\n>> Hi Fabio,\n>> sorry to bother you again, its still failing with stopping both server\n>> (8.4 and 9.4)\n>>\n>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>\n>> connection to database failed: could not connect to server: No such \nfile\n>> or directory\n>> Is the server running locally and accepting\n>> connections on Unix domain socket\n>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>\n>>\n>> could not connect to old postmaster started with the command:\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\n>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>> unix_socket_permissions=0700\" start\n>> Failure, exiting\n>>\n>>\n>> With Best Regards\n>> Akshay\n>> Ericsson OSS MON\n>> Tata Consultancy Services\n>> Mailto: [email protected]\n>> Website: http://www.tcs.com <http://www.tcs.com/><http://www.tcs.com/>\n>> ____________________________________________\n>> Experience certainty. 
IT Services\n>> Business Solutions\n>> Consulting\n>> ____________________________________________\n>>\n>>\n>>\n>>\n>> From: Fabio Pardi <[email protected]>\n>> To: Akshay Ballarpure <[email protected]>,\n>> [email protected]\n>> Date: 04/18/2018 02:35 PM\n>> Subject: Re: pg_upgrade help\n>> \n------------------------------------------------------------------------\n>>\n>>\n>>\n>> Hi,\n>>\n>> i was too fast in reply (and perhaps i should drink my morning coffee\n>> before replying), I will try to be more detailed:\n>>\n>> both servers should be able to run at the moment you run pg_upgrade,\n>> that means the 2 servers should have been correctly stopped in advance,\n>> should have their configuration files, and new cluster initialized too.\n>>\n>> Then, as Sergei highlights here below, pg_upgrade will take care of the\n>> upgrade process, starting the servers.\n>>\n>>\n>> Here there is a step by step guide, i considered my best ally when it\n>> was time to upgrade:\n>>\n>> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\n>>\n>> note point 7:\n>>\n>> 'stop both servers'\n>>\n>>\n>> About the port the servers will run on, at point 9 there is some\n>> clarification:\n>>\n>> ' pg_upgrade defaults to running servers on port 50432 to avoid\n>> unintended client connections. You can use the same port number for \nboth\n>> clusters when doing an upgrade because the old and new clusters will \nnot\n>> be running at the same time. However, when checking an old running\n>> server, the old and new port numbers must be different.'\n>>\n>> Hope it helps,\n>>\n>> Fabio Pardi\n>>\n>>\n>> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\n>>> Thanks Fabio for instant reply.\n>>>\n>>> I now started 8.4 with 50432 and 9.4 with default port but still its\n>>> failing ...Can you please suggest what is wrong ?\n>>>\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>> --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>\n>>> *failure*\n>>> Consult the last few lines of \"pg_upgrade_server.log\" for\n>>> the probable cause of the failure.\n>>>\n>>> There seems to be a postmaster servicing the old cluster.\n>>> Please shutdown that postmaster and try again.\n>>> Failure, exiting\n>>> -bash-4.2$ ps -eaf | grep postgres\n>>> root 8646 9365 0 08:07 pts/1 00:00:00 su - postgres\n>>> postgres 8647 8646 0 08:07 pts/1 00:00:00 -bash\n>>> postgres 9778 1 0 09:17 ? 00:00:00 /usr/bin/postgres -p\n>>> 50432 -D /var/ericsson/esm-data/postgresql-data/\n>>> postgres 9779 9778 0 09:17 ? 00:00:00 postgres: logger \nprocess\n>>> postgres 9781 9778 0 09:17 ? 00:00:00 postgres: writer \nprocess\n>>> postgres 9782 9778 0 09:17 ? 00:00:00 postgres: wal writer\n>>> process\n>>> postgres 9783 9778 0 09:17 ? 00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres 9784 9778 0 09:17 ? 00:00:00 postgres: stats\n>>> collector process\n>>> postgres 9900 1 0 09:20 ? 00:00:00\n>>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\n>>> /var/ericsson/esm-data/postgresql-data-9.4/\n>>> postgres 9901 9900 0 09:20 ? 00:00:00 postgres: logger \nprocess\n>>> postgres 9903 9900 0 09:20 ? 00:00:00 postgres: checkpointer\n>>> process\n>>> postgres 9904 9900 0 09:20 ? 00:00:00 postgres: writer \nprocess\n>>> postgres 9905 9900 0 09:20 ? 00:00:00 postgres: wal writer\n>>> process\n>>> postgres 9906 9900 0 09:20 ? 00:00:00 postgres: autovacuum\n>>> launcher process\n>>> postgres 9907 9900 0 09:20 ? 
00:00:00 postgres: stats\n>>> collector process\n>>> postgres 9926 8647 0 09:21 pts/1 00:00:00 ps -eaf\n>>> postgres 9927 8647 0 09:21 pts/1 00:00:00 grep --color=auto \npostgres\n>>>\n>>>\n>>> -bash-4.2$ netstat -antp | grep 50432\n>>> (Not all processes could be identified, non-owned process info\n>>> will not be shown, you would have to be root to see it all.)\n>>> tcp 0 0 127.0.0.1:50432 0.0.0.0:* \n>>> LISTEN 9778/postgres\n>>> tcp6 0 0 ::1:50432 :::* \n>>> LISTEN 9778/postgres\n>>> -bash-4.2$ netstat -antp | grep 5432\n>>> (Not all processes could be identified, non-owned process info\n>>> will not be shown, you would have to be root to see it all.)\n>>> tcp 0 0 127.0.0.1:5432 0.0.0.0:* \n>>> LISTEN 9900/postgres\n>>> tcp6 0 0 ::1:5432 :::* \n>>> LISTEN 9900/postgres\n>>>\n>>> -----------------------------------------------------------------\n>>> pg_upgrade run on Wed Apr 18 09:24:47 2018\n>>> -----------------------------------------------------------------\n>>>\n>>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c \nautovacuum=off\n>>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\n>>> pg_ctl: another server might be running; trying to start server anyway\n>>> FATAL: lock file \"postmaster.pid\" already exists\n>>> HINT: Is another postmaster (PID 9778) running in data directory\n>>> \"/var/ericsson/esm-data/postgresql-data\"?\n>>> pg_ctl: could not start server\n>>> Examine the log output.\n>>>\n>>>\n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\n>>> 9900\n>>> /var/ericsson/esm-data/postgresql-data-9.4\n>>> 1524039630\n>>> 5432\n>>> /var/run/postgresql\n>>> localhost\n>>> 5432001 2031616\n>>> \n>>> \n>>> [root@ms-esmon /]# cat\n>>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\n>>> 9778\n>>> /var/ericsson/esm-data/postgresql-data\n>>> 50432001 1998850\n>>>\n>>>\n>>>\n>>>\n>>> With Best Regards\n>>> Akshay\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> From: Fabio Pardi <[email protected]>\n>>> To: Akshay Ballarpure <[email protected]>,\n>>> [email protected]\n>>> Date: 04/18/2018 01:06 PM\n>>> Subject: Re: pg_upgrade help\n>>> \n------------------------------------------------------------------------\n>>>\n>>>\n>>>\n>>> Hi,\n>>>\n>>> please avoid crossposting to multiple mailing lists.\n>>>\n>>>\n>>> You need to run both versions of the database, the old and the new.\n>>>\n>>> They need to run on different ports (note that it is impossible to run \n2\n>>> different processes on the same port, that's not a postgresql thing)\n>>>\n>>>\n>>>\n>>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\n>>>> Hi all,\n>>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\n>>>> response.\n>>>> Installed both version and stopped it. Do i need to run both version \nor\n>>>> only one 8.4 or 9.4 . 
Both should run on 50432 ?\n>>>>\n>>>>\n>>>> -bash-4.2$ id\n>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>>\n>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data \n \n>>>> -- 8.4 data\n>>>> -bash-4.2$ export \nNEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>>> -- 9.4 data\n>>>>\n>>>>\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>>> --old-bindir=/usr/bin \n--new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>>\n>>>> *connection to database failed: could not connect to server: No such\n>>>> file or directory*\n>>>> Is the server running locally and accepting\n>>>> connections on Unix domain socket\n>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>>\n>>>>\n>>>> could not connect to old postmaster started with the command:\n>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c \nautovacuum=off\n>>>> -c autovacuum_freeze_max_age=2000000000 -c listen_addresses='' -c\n>>>> unix_socket_permissions=0700\" start\n>>>> Failure, exiting\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> With Best Regards\n>>>> Akshay\n>>>>\n>>>> =====-----=====-----=====\n>>>> Notice: The information contained in this e-mail\n>>>> message and/or attachments to it may contain\n>>>> confidential or privileged information. If you are\n>>>> not the intended recipient, any dissemination, use,\n>>>> review, distribution, printing or copying of the\n>>>> information contained in this e-mail message\n>>>> and/or attachments to it are strictly prohibited. If\n>>>> you have received this communication in error,\n>>>> please notify us by reply e-mail or telephone and\n>>>> immediately and permanently delete the message\n>>>> and any attachments. Thank you\n>>>>\n>>>\n>>\n>
Both should run on 50432 ?\n>>>>\n>>>>\n>>>> -bash-4.2$ id\n>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\n>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\n>>>>\n>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data\n   \n>>>>                \n       -- 8.4 data\n>>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\n>>>>                \n  -- 9.4 data\n>>>>\n>>>>\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\n>>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\n>>>>\n>>>> *connection to database failed: could not connect to server:\nNo such\n>>>> file or directory*\n>>>>         Is the server running locally\nand accepting\n>>>>         connections on Unix domain\nsocket\n>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\n>>>>\n>>>>\n>>>> could not connect to old postmaster started with the command:\n>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\"\n-D\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o\n\"-p 50432 -c autovacuum=off\n>>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses=''\n-c\n>>>> unix_socket_permissions=0700\" start\n>>>> Failure, exiting\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> With Best Regards\n>>>> Akshay\n>>>>\n>>>> =====-----=====-----=====\n>>>> Notice: The information contained in this e-mail\n>>>> message and/or attachments to it may contain\n>>>> confidential or privileged information. If you are\n>>>> not the intended recipient, any dissemination, use,\n>>>> review, distribution, printing or copying of the\n>>>> information contained in this e-mail message\n>>>> and/or attachments to it are strictly prohibited. If\n>>>> you have received this communication in error,\n>>>> please notify us by reply e-mail or telephone and\n>>>> immediately and permanently delete the message\n>>>> and any attachments. Thank you\n>>>>\n>>>\n>>\n>", "msg_date": "Fri, 20 Apr 2018 14:54:30 +0530", "msg_from": "Akshay Ballarpure <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade help" }, { "msg_contents": "Hi Akshay,\r\n\r\nI m glad it worked. \r\n\r\n* Your new data folder will be on /var/ericsson/esm-data/postgresql-data-9.4/ therefore you should set PGDATA accordingly\r\n\r\n* analyze_new_cluster.sh runs on the new cluster, 9.4. Indeed you should start the db first, as mentioned in the upgrade message.\r\n\r\n\r\nIf you are happy with your upgrade, you can cleanup the leftovers running:\r\n\r\n delete_old_cluster.sh\r\n\r\n\r\n\r\nregards,\r\n\r\nfabio pardi\r\n\r\nOn 04/20/2018 11:24 AM, Akshay Ballarpure wrote:\r\n> Hi Fabio,\r\n> *Thanks so much for figuring out an issue..!!! much appreciated.*\r\n> i have stopped both postgres version (8.4 and 9.4)\r\n> \r\n> -bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data   - postgresql 8.4\r\n> -bash-4.2$ pg_ctl stop -mfast\r\n> waiting for server to shut down.... done\r\n> server stopped\r\n> \r\n> \r\n> -bash-4.2$ export PGDATA=/var/ericsson/esm-data/postgresql-data-9.4/   - postgresql 9.4\r\n> -bash-4.2$ ps -eaf | grep postgre^C\r\n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl stop -mfast\r\n> waiting for server to shut down.... done\r\n> server stopped\r\n> \r\n> \r\n> And set below environment variables on terminal where i ran pg_upgrade. and*its working fine. thanks so much for figuring out an issue..!!! 
much appreciated.*\r\n> \r\n> -bash-4.2$ echo $PGDATA\r\n> /var/ericsson/esm-data/postgresql-data  - postgresql 8.4\r\n> -bash-4.2$ echo $PGHOST\r\n> /var/run/postgresql\r\n> \r\n> \r\n> -bash-4.2$ env | grep PG\r\n> PGHOST=/var/run/postgresql\r\n> PGDATA=/var/ericsson/esm-data/postgresql-data\r\n> \r\n> \r\n> /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin --old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\r\n> \r\n> \r\n> \r\n> Performing Consistency Checks\r\n> -----------------------------\r\n> Checking cluster versions                                   ok\r\n> Checking database user is a superuser                       ok\r\n> Checking database connection settings                       ok\r\n> Checking for prepared transactions                          ok\r\n> Checking for reg* system OID user data types                ok\r\n> Checking for contrib/isn with bigint-passing mismatch       ok\r\n> Checking for invalid \"line\" user columns                    ok\r\n> Checking for large objects                                  ok\r\n> Creating dump of global objects                             ok\r\n> Creating dump of database schemas\r\n>                                                             ok\r\n> Checking for presence of required libraries                 ok\r\n> Checking database user is a superuser                       ok\r\n> Checking for prepared transactions                          ok\r\n> \r\n> If pg_upgrade fails after this point, you must re-initdb the\r\n> new cluster before continuing.\r\n> \r\n> Performing Upgrade\r\n> ------------------\r\n> Analyzing all rows in the new cluster                       ok\r\n> Freezing all rows on the new cluster                        ok\r\n> Deleting files from new pg_clog                             ok\r\n> Copying old pg_clog to new server                           ok\r\n> Setting next transaction ID and epoch for new cluster       ok\r\n> Deleting files from new pg_multixact/offsets                ok\r\n> Setting oldest multixact ID on new cluster                  ok\r\n> Resetting WAL archives                                      ok\r\n> Setting frozenxid and minmxid counters in new cluster       ok\r\n> Restoring global objects in the new cluster                 ok\r\n> Adding support functions to new cluster                     ok\r\n> Restoring database schemas in the new cluster\r\n>                                                             ok\r\n> Setting minmxid counter in new cluster                      ok\r\n> Removing support functions from new cluster                 ok\r\n> Copying user relation files\r\n>                                                             ok\r\n> Setting next OID for new cluster                            ok\r\n> Sync data directory to disk                                 ok\r\n> Creating script to analyze new cluster                      ok\r\n> Creating script to delete old cluster                       ok\r\n> Checking for large objects                                  ok\r\n> \r\n> Upgrade Complete\r\n> ----------------\r\n> Optimizer statistics are not transferred by pg_upgrade so,\r\n> once you start the new server, consider running:\r\n>     analyze_new_cluster.sh\r\n> \r\n> Running this script will delete the old cluster's data files:\r\n>     delete_old_cluster.sh\r\n> \r\n> \r\n> \r\n> Now few more questions..\r\n> \r\n> I migrated  export 
PGDATA=/var/ericsson/esm-data/postgresql-data - postgresql 8.4\r\n> I can start 9.4 with above PGDATA right ?\r\n> analyze_new_cluster.sh  -- is this script will be from 9.4 ?\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> \r\n> With Best Regards\r\n> Akshay\r\n> Ericsson OSS MON\r\n> Tata Consultancy Services\r\n> Mailto: [email protected]\r\n> Website: http://www.tcs.com <http://www.tcs.com/>\r\n> ____________________________________________\r\n> Experience certainty.        IT Services\r\n>                        Business Solutions\r\n>                        Consulting\r\n> ____________________________________________\r\n> \r\n> \r\n> \r\n> \r\n> From:        Akshay Ballarpure/HYD/TCS\r\n> To:        Fabio Pardi <[email protected]>\r\n> Cc:        [email protected]\r\n> Date:        04/19/2018 06:24 PM\r\n> Subject:        Re: pg_upgrade help\r\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n> \r\n> \r\n> Hi Fabio,\r\n> I think you have found the problem. Please find o/p below.\r\n> \r\n> \r\n> -bash-4.2$ ps -aef | grep postgres\r\n> postgres   478     1  0 13:40 ?        00:00:00 /usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/\r\n> postgres   490   478  0 13:40 ?        00:00:00 postgres: logger process\r\n> postgres   492   478  0 13:40 ?        00:00:00 postgres: writer process\r\n> postgres   493   478  0 13:40 ?        00:00:00 postgres: wal writer process\r\n> postgres   494   478  0 13:40 ?        00:00:00 postgres: autovacuum launcher process\r\n> postgres   495   478  0 13:40 ?        00:00:00 postgres: stats collector process\r\n> postgres   528     1  0 13:40 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/\r\n> postgres   529   528  0 13:40 ?        00:00:00 postgres: logger process\r\n> postgres   531   528  0 13:40 ?        00:00:00 postgres: checkpointer process\r\n> postgres   532   528  0 13:40 ?        00:00:00 postgres: writer process\r\n> postgres   533   528  0 13:40 ?        00:00:00 postgres: wal writer process\r\n> postgres   534   528  0 13:40 ?        00:00:00 postgres: autovacuum launcher process\r\n> postgres   535   528  0 13:40 ?        
00:00:00 postgres: stats collector process\r\n> postgres   734  8647  0 13:50 pts/1    00:00:00 ps -aef\r\n> postgres   735  8647  0 13:50 pts/1    00:00:00 grep --color=auto postgres\r\n> root      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n> \r\n> 9.4\r\n> ===\r\n> \r\n> -bash-4.2$ psql\r\n> psql (8.4.20, server 9.4.9)\r\n> WARNING: psql version 8.4, server version 9.4.\r\n>          Some psql features might not work.\r\n> Type \"help\" for help.\r\n> \r\n> postgres=#\r\n> \r\n> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/psql\r\n> psql (9.4.9)\r\n> Type \"help\" for help.\r\n> \r\n> postgres=#\r\n> \r\n> 8.4\r\n> ====\r\n> \r\n> -bash-4.2$  psql -p 50432\r\n> *psql: could not connect to server: No such file or directory*\r\n> *        Is the server running locally and accepting*\r\n> *        connections on Unix domain socket \"/tmp/.s.PGSQL.50432\"?*\r\n> \r\n> \r\n> \r\n> ==========================================================================================================\r\n> \r\n> After setting PGHOST, i can connect to PSQL\r\n>                \r\n> -bash-4.2$ echo $PGHOST\r\n> /var/run/postgresql\r\n> -bash-4.2$ psql -p 50432\r\n> psql (8.4.20)\r\n> Type \"help\" for help.\r\n> \r\n> postgres=#\r\n> \r\n>                \r\n>                \r\n> \r\n> \r\n>                \r\n> \r\n> \r\n> \r\n> \r\n> With Best Regards\r\n> Akshay\r\n> Ericsson OSS MON\r\n> Tata Consultancy Services\r\n> Mailto: [email protected]\r\n> Website: http://www.tcs.com <http://www.tcs.com/>\r\n> ____________________________________________\r\n> Experience certainty.        IT Services\r\n>                        Business Solutions\r\n>                        Consulting\r\n> ____________________________________________\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> From:        Fabio Pardi <[email protected]>\r\n> To:        Akshay Ballarpure <[email protected]>, [email protected]\r\n> Date:        04/19/2018 03:45 PM\r\n> Subject:        Re: pg_upgrade help\r\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n> \r\n> \r\n> \r\n> Hi,\r\n> \r\n> while trying to reproduce your problem, i noticed that on my Centos 6 installations Postgres 8.4 and Postgres 9.6 (I do not have 9.4 readily available) store the socket in different places:\r\n> \r\n> Postgres 9.6.6 uses /var/run/postgresql/\r\n> \r\n> Postgres 8.4 uses /tmp/\r\n> \r\n> therefore using default settings, i can connect to 9.6 but not 8.4 without specifying where the socket is\r\n> \r\n> Connect to 9.6\r\n> \r\n> 12:01 postgres@machine:~# psql\r\n> psql 
(8.4.20, server 9.6.6)\r\n> WARNING: psql version 8.4, server version 9.6.\r\n>         Some psql features might not work.\r\n> Type \"help\" for help.\r\n> \r\n> ---------\r\n> \r\n> Connect to 8.4\r\n> \r\n> 12:01 postgres@machine:~# psql\r\n> psql: could not connect to server: No such file or directory\r\n>        Is the server running locally and accepting\r\n>        connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\r\n> \r\n> 12:04 postgres@machine:~# psql -h /tmp\r\n> psql (8.4.20)\r\n> Type \"help\" for help.\r\n> \r\n> \r\n> \r\n> \r\n> I think you might be incurring in the same problem.\r\n> \r\n> Can you confirm it?\r\n> \r\n> \r\n> regards,\r\n> \r\n> fabio pardi\r\n> \r\n> \r\n> \r\n> \r\n> \r\n> On 04/19/2018 09:37 AM, Akshay Ballarpure wrote:\r\n>> Hi Fabio,\r\n>> Yes i ran initdb on new database and able to start as below.\r\n>>\r\n>> [root@ms-esmon root]# su - postgres -c \"/usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/ 2>&1 &\"\r\n>> [root@ms-esmon root]# su - postgres -c \"/opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/ 2>&1 &\"\r\n>> [root@ms-esmon root]# 2018-04-19 08:17:53.553 IST  LOG:  redirecting log output to logging collector process\r\n>> 2018-04-19 08:17:53.553 IST  HINT:  Future log output will appear in directory \"pg_log\".\r\n>>\r\n>> [root@ms-esmon root]#\r\n>> [root@ms-esmon root]# ps -eaf | grep postgre\r\n>> sroot      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n>> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n>> postgres 28009     1  2 08:17 ?        00:00:00 /usr/bin/postgres -p 50432 -D /var/ericsson/esm-data/postgresql-data/  *--8.4*\r\n>> postgres 28010 28009  0 08:17 ?        00:00:00 postgres: logger process\r\n>> postgres 28012 28009  0 08:17 ?        00:00:00 postgres: writer process\r\n>> postgres 28013 28009  0 08:17 ?        00:00:00 postgres: wal writer process\r\n>> postgres 28014 28009  0 08:17 ?        00:00:00 postgres: autovacuum launcher process\r\n>> postgres 28015 28009  0 08:17 ?        00:00:00 postgres: stats collector process\r\n>> postgres 28048     1  0 08:17 ?        00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4/\r\n>> postgres 28049 28048  0 08:17 ?        00:00:00 postgres: logger process\r\n>> postgres 28051 28048  0 08:17 ?        00:00:00 postgres: checkpointer process\r\n>> postgres 28052 28048  0 08:17 ?        00:00:00 postgres: writer process\r\n>> postgres 28053 28048  0 08:17 ?        00:00:00 postgres: wal writer process\r\n>> postgres 28054 28048  0 08:17 ?        00:00:00 postgres: autovacuum launcher process\r\n>> postgres 28055 28048  0 08:17 ?        
00:00:00 postgres: stats collector process\r\n>> root     28057  2884  0 08:17 pts/0    00:00:00 grep --color=auto postgre\r\n>>\r\n>>\r\n>> Also i am able to start db with the command provided by you and run psql.\r\n>>\r\n>> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p 50432 -c listen_addresses='' -c unix_socket_permissions=0700\"  -D /var/ericsson/esm-data/postgresql-data-9.4/\r\n>> pg_ctl: another server might be running; trying to start server anyway\r\n>> server starting\r\n>> -bash-4.2$ 2018-04-19 08:22:46.527 IST  LOG:  redirecting log output to logging collector process\r\n>> 2018-04-19 08:22:46.527 IST  HINT:  Future log output will appear in directory \"pg_log\".\r\n>>\r\n>> -bash-4.2$ ps -eaf | grep postg\r\n>> root      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n>> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n>> postgres 28174     1  0 08:22 pts/1    00:00:00 /opt/rh/rh-postgresql94/root/usr/bin/postgres -D /var/ericsson/esm-data/postgresql-data-9.4 -p 50432 -c listen_addresses= -c unix_socket_permissions=0700\r\n>> postgres 28175 28174  0 08:22 ?        00:00:00 postgres: logger process\r\n>> postgres 28177 28174  0 08:22 ?        00:00:00 postgres: checkpointer process\r\n>> postgres 28178 28174  0 08:22 ?        00:00:00 postgres: writer process\r\n>> postgres 28179 28174  0 08:22 ?        00:00:00 postgres: wal writer process\r\n>> postgres 28180 28174  0 08:22 ?        00:00:00 postgres: autovacuum launcher process\r\n>> postgres 28181 28174  0 08:22 ?        00:00:00 postgres: stats collector process\r\n>> postgres 28182  8647  0 08:22 pts/1    00:00:00 ps -eaf\r\n>> postgres 28183  8647  0 08:22 pts/1    00:00:00 grep --color=auto postg\r\n>>\r\n>> -bash-4.2$ psql -p 50432 -h /var/run/postgresql -U rhqadmin -d rhq\r\n>> psql (8.4.20, server 9.4.9)\r\n>> WARNING: psql version 8.4, server version 9.4.\r\n>>          Some psql features might not work.\r\n>> Type \"help\" for help.\r\n>>\r\n>> rhq=>\r\n>>\r\n>>\r\n>> Still its failing...\r\n>>\r\n>> -bash-4.2$ ps -efa | grep postgre\r\n>> root      8646  9365  0 Apr18 pts/1    00:00:00 su - postgres\r\n>> postgres  8647  8646  0 Apr18 pts/1    00:00:00 -bash\r\n>> postgres 28349  8647  0 08:34 pts/1    00:00:00 ps -efa\r\n>> postgres 28350  8647  0 08:34 pts/1    00:00:00 grep --color=auto postgre\r\n>>\r\n>> -bash-4.2$ echo $OLDCLUSTER\r\n>> /usr/bin/postgres\r\n>> -bash-4.2$ echo $NEWCLUSTER\r\n>> /opt/rh/rh-postgresql94/\r\n>>\r\n>> [root@ms-esmon rh-postgresql94]# /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin --old-datadir=/var/ericsson/esm-data/postgresql-data --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4\r\n>>\r\n>> Performing Consistency Checks\r\n>> -----------------------------\r\n>> Checking cluster versions                                   ok\r\n>>\r\n>> connection to database failed: could not connect to server: No such file or directory\r\n>>         Is the server running locally and accepting\r\n>>         connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n>>\r\n>>\r\n>> could not connect to old postmaster started with the command:\r\n>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700\" start\r\n>> Failure, exiting\r\n>>\r\n>> With Best Regards\r\n>> Akshay\r\n>> Ericsson OSS MON\r\n>> 
Tata Consultancy Services\r\n>> Mailto: [email protected]\r\n>> Website: http://www.tcs.com <http://www.tcs.com/><http://www.tcs.com/>\r\n>> ____________________________________________\r\n>> Experience certainty.        IT Services\r\n>>                        Business Solutions\r\n>>                        Consulting\r\n>> ____________________________________________\r\n>>\r\n>>\r\n>>\r\n>>\r\n>> From:        Fabio Pardi <[email protected]>\r\n>> To:        Akshay Ballarpure <[email protected]>\r\n>> Cc:        [email protected]\r\n>> Date:        04/18/2018 06:17 PM\r\n>> Subject:        Re: pg_upgrade help\r\n>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----\r\n>>\r\n>>\r\n>>\r\n>> did you run initdb on the new db?\r\n>>\r\n>> what happens if you manually start the new db?\r\n>>\r\n>> /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl  start -o \"-p 50432 -c\r\n>> listen_addresses='' -c unix_socket_permissions=0700\"  -D $NEWCLUSTER\r\n>>\r\n>> after starting it, can you connect to it using psql?\r\n>>\r\n>> psql -p 50432 -h /var/run/postgresql  -U your_user _db_\r\n>>\r\n>>\r\n>>\r\n>> regards,\r\n>>\r\n>> fabio pardi\r\n>>\r\n>>\r\n>> On 04/18/2018 02:02 PM, Akshay Ballarpure wrote:\r\n>>> Hi Fabio,\r\n>>> sorry to bother you again, its still failing with stopping both server\r\n>>> (8.4 and 9.4)\r\n>>>\r\n>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>>\r\n>>> connection to database failed: could not connect to server: No such file\r\n>>> or directory\r\n>>>         Is the server running locally and accepting\r\n>>>         connections on Unix domain socket\r\n>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n>>>\r\n>>>\r\n>>> could not connect to old postmaster started with the command:\r\n>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>>> unix_socket_permissions=0700\" start\r\n>>> Failure, exiting\r\n>>>\r\n>>>\r\n>>> With Best Regards\r\n>>> Akshay\r\n>>> Ericsson OSS MON\r\n>>> Tata Consultancy Services\r\n>>> Mailto: [email protected]\r\n>>> Website: http://www.tcs.com <http://www.tcs.com/><http://www.tcs.com/><http://www.tcs.com/>\r\n>>> ____________________________________________\r\n>>> Experience certainty.        
IT Services\r\n>>>                        Business Solutions\r\n>>>                        Consulting\r\n>>> ____________________________________________\r\n>>>\r\n>>>\r\n>>>\r\n>>>\r\n>>> From:        Fabio Pardi <[email protected]>\r\n>>> To:        Akshay Ballarpure <[email protected]>,\r\n>>> [email protected]\r\n>>> Date:        04/18/2018 02:35 PM\r\n>>> Subject:        Re: pg_upgrade help\r\n>>> ------------------------------------------------------------------------\r\n>>>\r\n>>>\r\n>>>\r\n>>> Hi,\r\n>>>\r\n>>> i was too fast in reply (and perhaps i should drink my morning coffee\r\n>>> before replying), I will try to be more detailed:\r\n>>>\r\n>>> both servers should be able to run at the moment you run pg_upgrade,\r\n>>> that means the 2 servers should have been correctly stopped in advance,\r\n>>> should have their configuration files, and new cluster initialized too.\r\n>>>\r\n>>> Then, as Sergei highlights here below, pg_upgrade will take care of the\r\n>>> upgrade process, starting the servers.\r\n>>>\r\n>>>\r\n>>> Here there is a step by step guide, i considered my best ally when it\r\n>>> was time to upgrade:\r\n>>>\r\n>>> https://www.postgresql.org/docs/9.4/static/pgupgrade.html\r\n>>>\r\n>>> note point 7:\r\n>>>\r\n>>> 'stop both servers'\r\n>>>\r\n>>>\r\n>>> About the port the servers will run on, at point 9 there is some\r\n>>> clarification:\r\n>>>\r\n>>> ' pg_upgrade defaults to running servers on port 50432 to avoid\r\n>>> unintended client connections. You can use the same port number for both\r\n>>> clusters when doing an upgrade because the old and new clusters will not\r\n>>> be running at the same time. However, when checking an old running\r\n>>> server, the old and new port numbers must be different.'\r\n>>>\r\n>>> Hope it helps,\r\n>>>\r\n>>> Fabio Pardi\r\n>>>\r\n>>>\r\n>>> On 04/18/2018 10:34 AM, Akshay Ballarpure wrote:\r\n>>>> Thanks Fabio for instant reply.\r\n>>>>\r\n>>>> I now started 8.4 with 50432 and 9.4 with default port but still its\r\n>>>> failing ...Can you please suggest what is wrong ?\r\n>>>>\r\n>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>>>\r\n>>>> *failure*\r\n>>>> Consult the last few lines of \"pg_upgrade_server.log\" for\r\n>>>> the probable cause of the failure.\r\n>>>>\r\n>>>> There seems to be a postmaster servicing the old cluster.\r\n>>>> Please shutdown that postmaster and try again.\r\n>>>> Failure, exiting\r\n>>>> -bash-4.2$ ps -eaf | grep postgres\r\n>>>> root      8646  9365  0 08:07 pts/1    00:00:00 su - postgres\r\n>>>> postgres  8647  8646  0 08:07 pts/1    00:00:00 -bash\r\n>>>> postgres  9778     1  0 09:17 ?        00:00:00 /usr/bin/postgres -p\r\n>>>> 50432 -D /var/ericsson/esm-data/postgresql-data/\r\n>>>> postgres  9779  9778  0 09:17 ?        00:00:00 postgres: logger process\r\n>>>> postgres  9781  9778  0 09:17 ?        00:00:00 postgres: writer process\r\n>>>> postgres  9782  9778  0 09:17 ?        00:00:00 postgres: wal writer\r\n>>>> process\r\n>>>> postgres  9783  9778  0 09:17 ?        00:00:00 postgres: autovacuum\r\n>>>> launcher process\r\n>>>> postgres  9784  9778  0 09:17 ?        00:00:00 postgres: stats\r\n>>>> collector process\r\n>>>> postgres  9900     1  0 09:20 ?        00:00:00\r\n>>>> /opt/rh/rh-postgresql94/root/usr/bin/postgres -D\r\n>>>> /var/ericsson/esm-data/postgresql-data-9.4/\r\n>>>> postgres  9901  9900  0 09:20 ?        
00:00:00 postgres: logger process\r\n>>>> postgres  9903  9900  0 09:20 ?        00:00:00 postgres: checkpointer\r\n>>>> process\r\n>>>> postgres  9904  9900  0 09:20 ?        00:00:00 postgres: writer process\r\n>>>> postgres  9905  9900  0 09:20 ?        00:00:00 postgres: wal writer\r\n>>>> process\r\n>>>> postgres  9906  9900  0 09:20 ?        00:00:00 postgres: autovacuum\r\n>>>> launcher process\r\n>>>> postgres  9907  9900  0 09:20 ?        00:00:00 postgres: stats\r\n>>>> collector process\r\n>>>> postgres  9926  8647  0 09:21 pts/1    00:00:00 ps -eaf\r\n>>>> postgres  9927  8647  0 09:21 pts/1    00:00:00 grep --color=auto postgres\r\n>>>>\r\n>>>>\r\n>>>> -bash-4.2$ netstat -antp | grep 50432\r\n>>>> (Not all processes could be identified, non-owned process info\r\n>>>>  will not be shown, you would have to be root to see it all.)\r\n>>>> tcp        0      0 127.0.0.1:50432         0.0.0.0:*              \r\n>>>> LISTEN      9778/postgres\r\n>>>> tcp6       0      0 ::1:50432               :::*                  \r\n>>>>  LISTEN      9778/postgres\r\n>>>> -bash-4.2$ netstat -antp | grep 5432\r\n>>>> (Not all processes could be identified, non-owned process info\r\n>>>>  will not be shown, you would have to be root to see it all.)\r\n>>>> tcp        0      0 127.0.0.1:5432          0.0.0.0:*              \r\n>>>> LISTEN      9900/postgres\r\n>>>> tcp6       0      0 ::1:5432                :::*                  \r\n>>>>  LISTEN      9900/postgres\r\n>>>>\r\n>>>> -----------------------------------------------------------------\r\n>>>>   pg_upgrade run on Wed Apr 18 09:24:47 2018\r\n>>>> -----------------------------------------------------------------\r\n>>>>\r\n>>>> command: \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>>>> unix_socket_permissions=0700\" start >> \"pg_upgrade_server.log\" 2>&1\r\n>>>> pg_ctl: another server might be running; trying to start server anyway\r\n>>>> FATAL:  lock file \"postmaster.pid\" already exists\r\n>>>> HINT:  Is another postmaster (PID 9778) running in data directory\r\n>>>> \"/var/ericsson/esm-data/postgresql-data\"?\r\n>>>> pg_ctl: could not start server\r\n>>>> Examine the log output.\r\n>>>>\r\n>>>>\r\n>>>> [root@ms-esmon /]# cat\r\n>>>> ./var/ericsson/esm-data/postgresql-data-9.4/postmaster.pid\r\n>>>> 9900\r\n>>>> /var/ericsson/esm-data/postgresql-data-9.4\r\n>>>> 1524039630\r\n>>>> 5432\r\n>>>> /var/run/postgresql\r\n>>>> localhost\r\n>>>>   5432001   2031616\r\n>>>>  \r\n>>>>  \r\n>>>> [root@ms-esmon /]# cat\r\n>>>> ./var/ericsson/esm-data/postgresql-data/postmaster.pid\r\n>>>> 9778\r\n>>>> /var/ericsson/esm-data/postgresql-data\r\n>>>>  50432001   1998850\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> With Best Regards\r\n>>>> Akshay\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> From:        Fabio Pardi <[email protected]>\r\n>>>> To:        Akshay Ballarpure <[email protected]>,\r\n>>>> [email protected]\r\n>>>> Date:        04/18/2018 01:06 PM\r\n>>>> Subject:        Re: pg_upgrade help\r\n>>>> ------------------------------------------------------------------------\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> Hi,\r\n>>>>\r\n>>>> please avoid crossposting to multiple mailing lists.\r\n>>>>\r\n>>>>\r\n>>>> You need to run both versions of the database, the old and the new.\r\n>>>>\r\n>>>> They need to run on different ports (note that it is impossible to run 2\r\n>>>> different 
processes on the same port, that's not a postgresql thing)\r\n>>>>\r\n>>>>\r\n>>>>\r\n>>>> On 04/18/2018 09:30 AM, Akshay Ballarpure wrote:\r\n>>>>> Hi all,\r\n>>>>> I need help on pg_upgrade from 8.4 to 9.4 version. Appreciate urgent\r\n>>>>> response.\r\n>>>>> Installed both version and stopped it. Do i need to run both version or\r\n>>>>> only one 8.4 or 9.4 . Both should run on 50432 ?\r\n>>>>>\r\n>>>>>\r\n>>>>> -bash-4.2$ id\r\n>>>>> uid=26(postgres) gid=26(postgres) groups=26(postgres)\r\n>>>>> context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023\r\n>>>>>\r\n>>>>> -bash-4.2$ export OLDCLUSTER=/var/ericsson/esm-data/postgresql-data    \r\n>>>>>                        -- 8.4 data\r\n>>>>> -bash-4.2$ export NEWCLUSTER=/var/ericsson/esm-data/postgresql-data-9.4\r\n>>>>>                   -- 9.4 data\r\n>>>>>\r\n>>>>>\r\n>>>>> -bash-4.2$ /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade\r\n>>>>> --old-bindir=/usr/bin --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin\r\n>>>>> --old-datadir=$OLDCLUSTER --new-datadir=$NEWCLUSTER\r\n>>>>>\r\n>>>>> *connection to database failed: could not connect to server: No such\r\n>>>>> file or directory*\r\n>>>>>         Is the server running locally and accepting\r\n>>>>>         connections on Unix domain socket\r\n>>>>> \"/var/run/postgresql/.s.PGSQL.50432\"?\r\n>>>>>\r\n>>>>>\r\n>>>>> could not connect to old postmaster started with the command:\r\n>>>>> \"/usr/bin/pg_ctl\" -w -l \"pg_upgrade_server.log\" -D\r\n>>>>> \"/var/ericsson/esm-data/postgresql-data\" -o \"-p 50432 -c autovacuum=off\r\n>>>>> -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c\r\n>>>>> unix_socket_permissions=0700\" start\r\n>>>>> Failure, exiting\r\n>>>>>\r\n>>>>>\r\n>>>>>\r\n>>>>>\r\n>>>>> With Best Regards\r\n>>>>> Akshay\r\n>>>>>\r\n>>>>> =====-----=====-----=====\r\n>>>>> Notice: The information contained in this e-mail\r\n>>>>> message and/or attachments to it may contain\r\n>>>>> confidential or privileged information. If you are\r\n>>>>> not the intended recipient, any dissemination, use,\r\n>>>>> review, distribution, printing or copying of the\r\n>>>>> information contained in this e-mail message\r\n>>>>> and/or attachments to it are strictly prohibited. If\r\n>>>>> you have received this communication in error,\r\n>>>>> please notify us by reply e-mail or telephone and\r\n>>>>> immediately and permanently delete the message\r\n>>>>> and any attachments. Thank you\r\n>>>>>\r\n>>>>\r\n>>>\r\n>>\r\n> \r\n", "msg_date": "Fri, 20 Apr 2018 11:48:38 +0200", "msg_from": "Fabio Pardi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade help" } ]
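A recap of the upgrade sequence that ultimately worked in the pg_upgrade thread above. This is a minimal sketch using the data directories and binary paths from Akshay's environment (adjust them to your own layout); per the thread, the PGHOST export is what resolved the "could not connect ... Unix domain socket" failures caused by the differing default socket directories of the two builds.

    # stop both clusters first; pg_upgrade starts and stops them itself
    /usr/bin/pg_ctl -D /var/ericsson/esm-data/postgresql-data stop -m fast
    /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl -D /var/ericsson/esm-data/postgresql-data-9.4 stop -m fast

    # point the client tools at the directory where the server socket actually lives
    export PGHOST=/var/run/postgresql

    # run as the postgres user, from a writable working directory
    /opt/rh/rh-postgresql94/root/usr/bin/pg_upgrade \
        --old-bindir=/usr/bin \
        --new-bindir=/opt/rh/rh-postgresql94/root/usr/bin \
        --old-datadir=/var/ericsson/esm-data/postgresql-data \
        --new-datadir=/var/ericsson/esm-data/postgresql-data-9.4

    # afterwards: start the 9.4 cluster, rebuild optimizer statistics,
    # and delete the old cluster only once you are happy with the result
    /opt/rh/rh-postgresql94/root/usr/bin/pg_ctl -D /var/ericsson/esm-data/postgresql-data-9.4 start
    ./analyze_new_cluster.sh
    ./delete_old_cluster.sh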
[ { "msg_contents": "Hi,\n\nI'm running the same query with \"set enable_seqscan = on;\" and \"set \nenable_seqscan = off;\":\n\n->  Nested Loop Left Join  (cost=0.00..89642.86 rows=1 width=30) (actual \ntime=1.612..6924.232 rows=3289 loops=1)\n       Join Filter: (sys_user.user_id = j_6634.id)\n       Rows Removed by Join Filter: 14330174\n       ->  Seq Scan on sys_user  (cost=0.00..89449.85 rows=1 width=16) \n(actual time=0.117..39.802 rows=3289 loops=1)\n             Filter: ...\n       ->  Seq Scan on cmn_user j_6634  (cost=0.00..138.56 rows=4356 \nwidth=22) (actual time=0.001..0.973 rows=4358 loops=3289)\n\n(Full plan: https://explain.depesz.com/s/plAO)\n\n->  Nested Loop Left Join  (cost=0.56..89643.52 rows=1 width=30) (actual \ntime=0.589..39.674 rows=3288 loops=1)\n       ->  Index Scan using sys_user_pkey on sys_user \n(cost=0.28..89635.21 rows=1 width=16) (actual time=0.542..29.435 \nrows=3288 loops=1)\n             Filter: ...\n       ->  Index Scan using cmn_user_pkey on cmn_user j_6634 \n(cost=0.28..8.30 rows=1 width=22) (actual time=0.002..0.002 rows=1 \nloops=3288)\n             Index Cond: (sys_user.user_id = id)\n\n(Full plan: https://explain.depesz.com/s/4QXy)\n\nWhy optimizer is choosing SeqScan (on cmn_user) in the first query, \ninstead of an IndexScan, despite of SeqScan being more costly?\n\nRegards,\nVitaliy\n\n", "msg_date": "Thu, 19 Apr 2018 01:14:48 +0300", "msg_from": "Vitaliy Garnashevich <[email protected]>", "msg_from_op": true, "msg_subject": "SeqScan vs. IndexScan" }, { "msg_contents": "Vitaliy Garnashevich <[email protected]> writes:\n> I'm running the same query with \"set enable_seqscan = on;\" and \"set \n> enable_seqscan = off;\":\n> ...\n> Why optimizer is choosing SeqScan (on cmn_user) in the first query, \n> instead of an IndexScan, despite of SeqScan being more costly?\n\nBecause it cares about the total plan cost, not the cost of any one\nsub-node. In this case, the total costs at the join level are fuzzily\nthe same, but the indexscan-based join has worse estimated startup cost,\nso it prefers the first choice.\n\nThe real problem here is the discrepancy between estimate and reality\nfor the number of rows out of the sys_user scan; because of that, you're\ngoing to get garbage choices at the join level no matter what :-(.\nYou should look into what's causing that misestimate and whether you\ncan reduce the error, perhaps by providing better stats or reformulating\nthe filter conditions in a way the optimizer understands better.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 18 Apr 2018 18:33:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SeqScan vs. IndexScan" } ]
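Following up on Tom's point about the row misestimate on sys_user in the thread above: one common way to "provide better stats" is to raise the per-column statistics target for the column(s) used in the problematic filter and re-analyze the table. This is only a sketch, since the actual filter is elided above ("Filter: ..."); the database name and some_filter_column below are placeholders, and 1000 is just an example target (the default is 100). Reformulating the filter conditions into something the planner estimates better is the other route Tom mentions.

    # placeholders: substitute your database name and the real filter column
    psql -d mydb -c "ALTER TABLE sys_user ALTER COLUMN some_filter_column SET STATISTICS 1000;"
    psql -d mydb -c "ANALYZE sys_user;"
    # then re-run EXPLAIN (ANALYZE, BUFFERS) on the query and check whether the
    # estimated row count for the sys_user scan is now close to the actual one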
[ { "msg_contents": "Hi,\n\nI am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type\nwith 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\nperformance issues. The sql query response takes around *127713.413 ms *time\n*.* Is there a way to find out the bottleneck?\n\nThe select sql query are as below :-\n\n# SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day',\nclient_received_start_timestamp at time zone '+5:30:0')::timestamp without\ntime zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE\nclient_received_start_timestamp >= '2018-3-28 18:30:0' AND\nclient_received_start_timestamp < '2018-4-11 18:30:0' AND ((apiproxy in\n('test-service' ) ) and (exchangeinstance != '(not set)' ) and (devemail\n!= '[email protected]' ) and (devemail != '[email protected]' ) and\n(devemail != '[email protected]' ) and (devemail != '[email protected]' ) and\n(apistatus = 'Success' ) and (apiaction not in\n('LRN','finder','ManuallySelect' ) ) and (appname not in ('Mobile Connect\nDeveloper Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM',\n'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor',\n'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth',\n'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not\nprovision' ) ) and (serorgid = 'aircel' )) GROUP BY\nserorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n\n\n*Time: 127713.413 ms*\n\nAny help will be highly appreciable. I look forward to hearing from you.\n\nBest Regards,\n\nKaushal\n\nHi,I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing performance issues. The sql query response takes around 127713.413 ms time. Is there a way to find out the bottleneck?The select sql query are as below :-# SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day', client_received_start_timestamp at time zone '+5:30:0')::timestamp without time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE client_received_start_timestamp >= '2018-3-28 18:30:0' AND client_received_start_timestamp < '2018-4-11 18:30:0' AND  ((apiproxy in ('test-service' )  ) and (exchangeinstance != '(not set)'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (apistatus = 'Success'  ) and (apiaction not in ('LRN','finder','ManuallySelect' )  ) and (appname not in ('Mobile Connect Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM', 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor', 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth', 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not provision' )  ) and (serorgid = 'aircel'  ))  GROUP BY serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;Time: 127713.413 msAny help will be highly appreciable. I look forward to hearing from you.Best Regards,Kaushal", "msg_date": "Sun, 29 Apr 2018 10:05:23 +0530", "msg_from": "Kaushal Shriyan <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues while running select sql query" }, { "msg_contents": "On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> Hi,\n> \n> I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type\n> with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\n> performance issues. 
The sql query response takes around *127713.413 ms *time\n> *.* Is there a way to find out the bottleneck?\n\nSend the output of \"explain(analyze,buffers)\" for the query?\n\nJustin\n\n", "msg_date": "Sat, 28 Apr 2018 23:40:19 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues while running select sql query" }, { "msg_contents": "On Sun, Apr 29, 2018 at 10:10 AM, Justin Pryzby <[email protected]>\nwrote:\n\n> On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> > Hi,\n> >\n> > I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance\n> type\n> > with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\n> > performance issues. The sql query response takes around *127713.413 ms\n> *time\n> > *.* Is there a way to find out the bottleneck?\n>\n> Send the output of \"explain(analyze,buffers)\" for the query?\n>\n> Justin\n>\n\nHi Justin,\n\nDo i need to run the below sql query? Please comment.\n\nexplain(analyze,buffers) SELECT serorgid,appname,sum(message_count) AS\n> mtrc0,date_trunc('day', client_received_start_timestamp at time zone\n> '+5:30:0')::timestamp without time zone AS time_unit FROM\n> analytics.\"test.prod.fact\" WHERE client_received_start_timestamp >=\n> '2018-3-28 18:30:0' AND client_received_start_timestamp < '2018-4-11\n> 18:30:0' AND ((apiproxy in ('test-service' ) ) and (exchangeinstance !=\n> '(not set)' ) and (devemail != '[email protected]' ) and (devemail != '\n> [email protected]' ) and (devemail != '[email protected]' ) and (devemail\n> != '[email protected]' ) and (apistatus = 'Success' ) and (apiaction not\n> in ('LRN','finder','ManuallySelect' ) ) and (appname not in ('Mobile\n> Connect Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM',\n> 'MumbaiHBM', 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat\n> Monitor', 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest',\n> 'APIHealth', 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John\n> do not provision' ) ) and (serorgid = 'aircel' )) GROUP BY\n> serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n\n\nI look forward to hearing from you.\n\nBest Regards,\n\nOn Sun, Apr 29, 2018 at 10:10 AM, Justin Pryzby <[email protected]> wrote:On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> Hi,\n> \n> I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type\n> with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\n> performance issues. The sql query response takes around *127713.413 ms *time\n> *.* Is there a way to find out the bottleneck?\n\nSend the output of \"explain(analyze,buffers)\" for the query?\n\nJustin\nHi Justin,Do i need to run the below sql query? 
Please comment.explain(analyze,buffers) SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day', client_received_start_timestamp at time zone '+5:30:0')::timestamp without time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE client_received_start_timestamp >= '2018-3-28 18:30:0' AND client_received_start_timestamp < '2018-4-11 18:30:0' AND  ((apiproxy in ('test-service' )  ) and (exchangeinstance != '(not set)'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (apistatus = 'Success'  ) and (apiaction not in ('LRN','finder','ManuallySelect' )  ) and (appname not in ('Mobile Connect Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM', 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor', 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth', 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not provision' )  ) and (serorgid = 'aircel'  ))  GROUP BY serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;I look forward to hearing from you.Best Regards,", "msg_date": "Sun, 29 Apr 2018 10:33:11 +0530", "msg_from": "Kaushal Shriyan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues while running select sql query" }, { "msg_contents": "On Sun, Apr 29, 2018 at 10:33 AM, Kaushal Shriyan <[email protected]>\nwrote:\n\n>\n>\n> On Sun, Apr 29, 2018 at 10:10 AM, Justin Pryzby <[email protected]>\n> wrote:\n>\n>> On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n>> > Hi,\n>> >\n>> > I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance\n>> type\n>> > with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\n>> > performance issues. The sql query response takes around *127713.413 ms\n>> *time\n>> > *.* Is there a way to find out the bottleneck?\n>>\n>> Send the output of \"explain(analyze,buffers)\" for the query?\n>>\n>> Justin\n>>\n>\n> Hi Justin,\n>\n> Do i need to run the below sql query? 
Please comment.\n>\n> explain(analyze,buffers) SELECT serorgid,appname,sum(message_count) AS\n>> mtrc0,date_trunc('day', client_received_start_timestamp at time zone\n>> '+5:30:0')::timestamp without time zone AS time_unit FROM\n>> analytics.\"test.prod.fact\" WHERE client_received_start_timestamp >=\n>> '2018-3-28 18:30:0' AND client_received_start_timestamp < '2018-4-11\n>> 18:30:0' AND ((apiproxy in ('test-service' ) ) and (exchangeinstance !=\n>> '(not set)' ) and (devemail != '[email protected]' ) and (devemail != '\n>> [email protected]' ) and (devemail != '[email protected]' ) and (devemail\n>> != '[email protected]' ) and (apistatus = 'Success' ) and (apiaction not\n>> in ('LRN','finder','ManuallySelect' ) ) and (appname not in ('Mobile\n>> Connect Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM',\n>> 'MumbaiHBM', 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat\n>> Monitor', 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest',\n>> 'APIHealth', 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John\n>> do not provision' ) ) and (serorgid = 'aircel' )) GROUP BY\n>> serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n>\n>\n> I look forward to hearing from you.\n>\n> Best Regards,\n>\n>\nHi Justin,\n\nPlease find the below details and let me know if you need any additional\ninformation.\n\n\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> -------------------------------------------------------------------------------------------------------\n> Limit (cost=2568002.26..2568038.26 rows=14400 width=35) (actual\n> time=127357.296..127357.543 rows=231 loops=1)\n> Buffers: shared hit=28019 read=1954681\n> -> Sort (cost=2568002.26..2568389.38 rows=154849 width=35) (actual\n> time=127357.294..127357.383 rows=231 loops=1)\n> Sort Key: ((date_trunc('day'::text, timezone('+5:30:0'::text,\n> \"test.prod.fact\".client_received_start_timestamp)))::timestamp without time\n> zone)\n> Sort Method: quicksort Memory: 45kB\n> Buffers: shared hit=28019 read=1954681\n> -> HashAggregate (cost=2553822.90..2556532.76 rows=154849\n> width=35) (actual time=127356.707..127357.103 rows=231 loops=1)\n> Group Key: (date_trunc('day'::text,\n> timezone('+5:30:0'::text,\n> \"test.prod.fact\".client_received_start_timestamp)))::timestamp without time\n> zone, \"test.prod.fact\".serorgid, \"excha\n> nge-p.prod.fact\".appname\n> Buffers: shared hit=28016 read=1954681\n> -> Result (cost=0.43..2551252.21 rows=257069 width=35)\n> (actual time=2.399..126960.471 rows=311015 loops=1)\n> Buffers: shared hit=28016 read=1954681\n> -> Append (cost=0.43..2549324.20 rows=257069\n> width=35) (actual time=2.294..126163.689 rows=311015 loops=1)\n> Buffers: shared hit=28016 
read=1954681\n> -> Index Scan using\n> \"exchange-pprodfactclrecsts\" on \"test.prod.fact\" (cost=0.43..6644.45\n> rows=64 width=33) (actual time=2.292..3.887 rows=2 loops=1)\n> Index Cond:\n> ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp\n> without time zone) AND (client_received_start_timestamp < '2018-04-11\n> 18:30:00'::timestam\n> p without time zone))\n> Filter: ((exchangeinstance <> '(not\n> set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '\n> [email protected]'::text) AND (devemail <> '[email protected]'::\n> text) AND (devemail <> '[email protected]'::text) AND (apiproxy =\n> 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid =\n> 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder\n> ,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect\n> Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,\n> PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test\n> tool\",\"Test from John do not provision\"}'::text[])))\n> Rows Removed by Filter: 61\n> Buffers: shared hit=25 read=6\n> -> Index Scan using\n> \"test.prod.fact_624_client_received_start_timestamp_idx\" on\n> \"test.prod.fact_624\" (cost=0.42..10948.27 rows=1002 width=34) (actual\n> time=3.034..2\n> 78.320 rows=1231 loops=1)\n> Index Cond:\n> ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp\n> without time zone) AND (client_received_start_timestamp < '2018-04-11\n> 18:30:00'::timestam\n> p without time zone))\n> Filter: ((exchangeinstance <> '(not\n> set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '\n> [email protected]'::text) AND (devemail <> '[email protected]'::\n> text) AND (devemail <> '[email protected]'::text) AND (apiproxy =\n> 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid =\n> 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder\n> ,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect\n> Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,\n> PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test\n> tool\",\"Test from John do not provision\"}'::text[])))\n> Rows Removed by Filter: 42629\n> Buffers: shared hit=27966 read=498\n> -> Seq Scan on \"test.prod.fact_631\"\n> (cost=0.00..171447.63 rows=16464 width=34) (actual time=0.070..7565.812\n> rows=20609 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 645406\n> Buffers: shared 
hit=2 read=132279\n> -> Seq Scan on \"test.prod.fact_640\"\n> (cost=0.00..147539.09 rows=16739 width=34) (actual time=2.976..7356.452\n> rows=20407 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 553930\n> Buffers: shared hit=2 read=113768\n> -> Seq Scan on \"test.prod.fact_647\"\n> (cost=0.00..148973.30 rows=16365 width=34) (actual time=2.274..7433.607\n> rows=19296 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 560618\n> Buffers: shared hit=2 read=114852\n> -> Seq Scan on \"test.prod.fact_652\"\n> (cost=0.00..148086.43 rows=14102 width=34) (actual time=2.165..7423.880\n> rows=16735 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 557229\n> Buffers: shared hit=1 read=114353\n> -> Seq Scan on \"test.prod.fact_661\"\n> (cost=0.00..172116.37 rows=15973 width=35) (actual 
time=0.091..8616.119\n> rows=17820 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 649730\n> Buffers: shared hit=2 read=132886\n> -> Seq Scan on \"test.prod.fact_668\"\n> (cost=0.00..174813.25 rows=15675 width=35) (actual time=1.537..8751.908\n> rows=16881 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 661068\n> Buffers: shared hit=2 read=134969\n> -> Seq Scan on \"test.prod.fact_674\"\n> (cost=0.00..199633.65 rows=22840 width=34) (actual time=0.017..9936.557\n> rows=30245 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 745118\n> Buffers: shared hit=2 read=154045\n> -> Seq Scan on \"test.prod.fact_682\"\n> (cost=0.00..253714.68 rows=26677 width=35) (actual time=0.693..12670.194\n> rows=33679 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 
18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 950037\n> Buffers: shared hit=2 read=195927\n> -> Seq Scan on \"test.prod.fact_688\"\n> (cost=0.00..239629.23 rows=26485 width=33) (actual time=0.627..11931.789\n> rows=36929 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 893363\n> Buffers: shared hit=2 read=184963\n> -> Seq Scan on \"test.prod.fact_696\"\n> (cost=0.00..233816.76 rows=25627 width=34) (actual time=0.809..11647.744\n> rows=36409 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 871346\n> Buffers: shared hit=2 read=180422\n> -> Seq Scan on \"test.prod.fact_701\"\n> (cost=0.00..177624.27 rows=15959 width=36) (actual time=1.174..8911.760\n> rows=16227 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> 
thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 671146\n> Buffers: shared hit=2 read=137231\n> -> Seq Scan on \"test.prod.fact_709\"\n> (cost=0.00..181100.86 rows=14987 width=36) (actual time=2.614..9080.548\n> rows=15270 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 686447\n> Buffers: shared hit=2 read=139861\n> -> Seq Scan on \"test.prod.fact_716\"\n> (cost=0.00..155888.30 rows=13752 width=36) (actual time=2.874..7810.737\n> rows=14500 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail <> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 589910\n> Buffers: shared hit=1 read=120362\n> -> Seq Scan on \"test.prod.fact_723\"\n> (cost=0.00..127347.65 rows=14358 width=36) (actual time=2.279..6364.821\n> rows=14775 loops=1)\n> Filter: ((client_received_start_timestamp\n> >= '2018-03-28 18:30:00'::timestamp without time zone) AND\n> (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp wi\n> thout time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail\n> <> '[email protected]'::text) AND (devemail 
<> '[email protected]'::text)\n> AND (devemail <> '[email protected]'::text) AND (devemail <\n> > '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND\n> (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND\n> (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::te\n> xt[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal\n> (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile\n> Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMA\n> SDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John\n> do not provision\"}'::text[])))\n> Rows Removed by Filter: 480327\n> Buffers: shared hit=1 read=98259\n> Planning time: 395.624 ms\n> Execution time: 127362.763 ms\n> (81 rows)\n\n\nThanks in Advance. I look forward to hearing from you.\n\nBest Regards,\n\nOn Sun, Apr 29, 2018 at 10:33 AM, Kaushal Shriyan <[email protected]> wrote:On Sun, Apr 29, 2018 at 10:10 AM, Justin Pryzby <[email protected]> wrote:On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> Hi,\n> \n> I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type\n> with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing\n> performance issues. The sql query response takes around *127713.413 ms *time\n> *.* Is there a way to find out the bottleneck?\n\nSend the output of \"explain(analyze,buffers)\" for the query?\n\nJustin\nHi Justin,Do i need to run the below sql query? Please comment.explain(analyze,buffers) SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day', client_received_start_timestamp at time zone '+5:30:0')::timestamp without time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE client_received_start_timestamp >= '2018-3-28 18:30:0' AND client_received_start_timestamp < '2018-4-11 18:30:0' AND  ((apiproxy in ('test-service' )  ) and (exchangeinstance != '(not set)'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and (apistatus = 'Success'  ) and (apiaction not in ('LRN','finder','ManuallySelect' )  ) and (appname not in ('Mobile Connect Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM', 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor', 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth', 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not provision' )  ) and (serorgid = 'aircel'  ))  GROUP BY serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;I look forward to hearing from you.Best Regards, Hi Justin,Please find the below details and let me know if you need any additional information.                                              
QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=2568002.26..2568038.26 rows=14400 width=35) (actual time=127357.296..127357.543 rows=231 loops=1)   Buffers: shared hit=28019 read=1954681   ->  Sort  (cost=2568002.26..2568389.38 rows=154849 width=35) (actual time=127357.294..127357.383 rows=231 loops=1)         Sort Key: ((date_trunc('day'::text, timezone('+5:30:0'::text, \"test.prod.fact\".client_received_start_timestamp)))::timestamp without time zone)         Sort Method: quicksort  Memory: 45kB         Buffers: shared hit=28019 read=1954681         ->  HashAggregate  (cost=2553822.90..2556532.76 rows=154849 width=35) (actual time=127356.707..127357.103 rows=231 loops=1)               Group Key: (date_trunc('day'::text, timezone('+5:30:0'::text, \"test.prod.fact\".client_received_start_timestamp)))::timestamp without time zone, \"test.prod.fact\".serorgid, \"exchange-p.prod.fact\".appname               Buffers: shared hit=28016 read=1954681               ->  Result  (cost=0.43..2551252.21 rows=257069 width=35) (actual time=2.399..126960.471 rows=311015 loops=1)                     Buffers: shared hit=28016 read=1954681                     ->  Append  (cost=0.43..2549324.20 rows=257069 width=35) (actual time=2.294..126163.689 rows=311015 loops=1)                           Buffers: shared hit=28016 read=1954681                           ->  Index Scan using \"exchange-pprodfactclrecsts\" on \"test.prod.fact\"  (cost=0.43..6644.45 rows=64 width=33) (actual time=2.292..3.887 rows=2 loops=1)                                 Index Cond: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone))                                 Filter: ((exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 61                                 Buffers: shared hit=25 read=6                           ->  Index Scan using 
\"test.prod.fact_624_client_received_start_timestamp_idx\" on \"test.prod.fact_624\"  (cost=0.42..10948.27 rows=1002 width=34) (actual time=3.034..278.320 rows=1231 loops=1)                                 Index Cond: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone))                                 Filter: ((exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 42629                                 Buffers: shared hit=27966 read=498                           ->  Seq Scan on \"test.prod.fact_631\"  (cost=0.00..171447.63 rows=16464 width=34) (actual time=0.070..7565.812 rows=20609 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 645406                                 Buffers: shared hit=2 read=132279                           ->  Seq Scan on \"test.prod.fact_640\"  (cost=0.00..147539.09 rows=16739 width=34) (actual time=2.976..7356.452 rows=20407 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))      
                           Rows Removed by Filter: 553930                                 Buffers: shared hit=2 read=113768                           ->  Seq Scan on \"test.prod.fact_647\"  (cost=0.00..148973.30 rows=16365 width=34) (actual time=2.274..7433.607 rows=19296 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 560618                                 Buffers: shared hit=2 read=114852                           ->  Seq Scan on \"test.prod.fact_652\"  (cost=0.00..148086.43 rows=14102 width=34) (actual time=2.165..7423.880 rows=16735 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 557229                                 Buffers: shared hit=1 read=114353                           ->  Seq Scan on \"test.prod.fact_661\"  (cost=0.00..172116.37 rows=15973 width=35) (actual time=0.091..8616.119 rows=17820 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e 
test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 649730                                 Buffers: shared hit=2 read=132886                           ->  Seq Scan on \"test.prod.fact_668\"  (cost=0.00..174813.25 rows=15675 width=35) (actual time=1.537..8751.908 rows=16881 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 661068                                 Buffers: shared hit=2 read=134969                           ->  Seq Scan on \"test.prod.fact_674\"  (cost=0.00..199633.65 rows=22840 width=34) (actual time=0.017..9936.557 rows=30245 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 745118                                 Buffers: shared hit=2 read=154045                           ->  Seq Scan on \"test.prod.fact_682\"  (cost=0.00..253714.68 rows=26677 width=35) (actual time=0.693..12670.194 rows=33679 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat 
Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 950037                                 Buffers: shared hit=2 read=195927                           ->  Seq Scan on \"test.prod.fact_688\"  (cost=0.00..239629.23 rows=26485 width=33) (actual time=0.627..11931.789 rows=36929 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 893363                                 Buffers: shared hit=2 read=184963                           ->  Seq Scan on \"test.prod.fact_696\"  (cost=0.00..233816.76 rows=25627 width=34) (actual time=0.809..11647.744 rows=36409 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 871346                                 Buffers: shared hit=2 read=180422                           ->  Seq Scan on \"test.prod.fact_701\"  (cost=0.00..177624.27 rows=15959 width=36) (actual time=1.174..8911.760 rows=16227 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal 
(Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 671146                                 Buffers: shared hit=2 read=137231                           ->  Seq Scan on \"test.prod.fact_709\"  (cost=0.00..181100.86 rows=14987 width=36) (actual time=2.614..9080.548 rows=15270 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 686447                                 Buffers: shared hit=2 read=139861                           ->  Seq Scan on \"test.prod.fact_716\"  (cost=0.00..155888.30 rows=13752 width=36) (actual time=2.874..7810.737 rows=14500 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL ('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 589910                                 Buffers: shared hit=1 read=120362                           ->  Seq Scan on \"test.prod.fact_723\"  (cost=0.00..127347.65 rows=14358 width=36) (actual time=2.279..6364.821 rows=14775 loops=1)                                 Filter: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone) AND (exchangeinstance <> '(not set)'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (devemail <> '[email protected]'::text) AND (apiproxy = 'test-service'::text) AND (apistatus = 'Success'::text) AND (serorgid = 'aircel'::text) AND (apiaction <> ALL 
('{LRN,Pathfinder,ManuallySelect}'::text[])) AND (appname <> ALL ('{\"Mobile Connect Developer Portal (Int(\",MinskHBM,LondonHBM,SeoulHBM,MumbaiHBM,NVirginiaHBM,SPauloHBM,\"Mobile Connect HeartBeat Monitor\",PDMAOpenSDKTest1,PDMAOpenSDKTest2,PDMASDKTest,APIHealth,A1qaDemoApp,test,\"india e2e test tool\",\"Test from John do not provision\"}'::text[])))                                 Rows Removed by Filter: 480327                                 Buffers: shared hit=1 read=98259 Planning time: 395.624 ms Execution time: 127362.763 ms(81 rows) Thanks in Advance. I look forward to hearing from you.Best Regards,", "msg_date": "Sun, 29 Apr 2018 10:48:48 +0530", "msg_from": "Kaushal Shriyan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues while running select sql query" }, { "msg_contents": "On Saturday, April 28, 2018, Kaushal Shriyan <[email protected]>\nwrote:\n\n> Hi,\n>\n> I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance\n> type with 500 GB volume of volume type io1 with 25000 IOPS and I am\n> seeing performance issues. The sql query response takes around *127713.413\n> ms *time*.* Is there a way to find out the bottleneck?\n>\n\nI would suggest reading the following and providing some additional\ndetails, in particular your table and/or view definitions. Specifically\nI'd be looking for indexes on \"serorgid\", your apparent partitioning setup,\nand your use of indexes in general.\n\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nYou may also wish to attach the explain output as a text file.\n\nDavid J.\n\nOn Saturday, April 28, 2018, Kaushal Shriyan <[email protected]> wrote:Hi,I am running postgresql db server 9.4.14 on AWS of C4.2xlarge instance type with 500 GB volume of volume type io1 with 25000 IOPS and I am seeing performance issues. The sql query response takes around 127713.413 ms time. Is there a way to find out the bottleneck?I would suggest reading the following and providing some additional details, in particular your table and/or view definitions.  Specifically I'd be looking for indexes on \"serorgid\", your apparent partitioning setup, and your use of indexes in general.https://wiki.postgresql.org/wiki/Slow_Query_QuestionsYou may also wish to attach the explain output as a text file.David J.", "msg_date": "Sat, 28 Apr 2018 23:10:37 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Performance issues while running select sql query" }, { "msg_contents": "On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> # SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day',\n> client_received_start_timestamp at time zone '+5:30:0')::timestamp without\n> time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE\n> client_received_start_timestamp >= '2018-3-28 18:30:0' AND\n> client_received_start_timestamp < '2018-4-11 18:30:0' AND ((apiproxy in\n> ('test-service' ) ) and (exchangeinstance != '(not set)' ) and (devemail\n> != '[email protected]' ) and (devemail != '[email protected]' ) and\n> (devemail != '[email protected]' ) and (devemail != '[email protected]' ) and\n> (apistatus = 'Success' ) and (apiaction not in\n> ('LRN','finder','ManuallySelect' ) ) and (appname not in ('Mobile Connect\n> Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM',\n> 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor',\n> 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth',\n> 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not\n> provision' ) ) and (serorgid = 'aircel' )) GROUP BY\n> serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n\nThis table has inheritence children. Do they have constraints? On what\ncolumn? Is constraint_exclusion enabled and working for that?\n\nIt looks like test.prod.fact_624 is being read using index in under 1sec, and\nthe rest using seq scan, taking 5-10sec.\n\nSo what are the table+index definitions of the parent and childs (say fact_624\nand 631).\n\nHave the child tables been recently ANALYZE ?\nAlso, have you manually ANALYZE the parent table?\n\nOn Sun, Apr 29, 2018 at 10:48:48AM +0530, Kaushal Shriyan wrote:\n> > QUERY PLAN\n> > Limit (cost=2568002.26..2568038.26 rows=14400 width=35) (actual time=127357.296..127357.543 rows=231 loops=1)\n> > Buffers: shared hit=28019 read=1954681\n...\n\n> > -> Index Scan using \"test.prod.fact_624_client_received_start_timestamp_idx\" on \"test.prod.fact_624\" (cost=0.42..10948.27 rows=1002 width=34) (actual time=3.034..278.320 rows=1231 loops=1)\n> > Index Cond: ((client_received_start_timestamp >= '2018-03-28 18:30:00'::timestamp without time zone) AND (client_received_start_timestamp < '2018-04-11 18:30:00'::timestamp without time zone))\n> > Rows Removed by Filter: 42629\n> > Buffers: shared hit=27966 read=498\n> > -> Seq Scan on \"test.prod.fact_631\" (cost=0.00..171447.63 rows=16464 width=34) (actual time=0.070..7565.812 rows=20609 loops=1)\n> > Rows Removed by Filter: 645406\n> > Buffers: shared hit=2 read=132279\n...\n\n\n", "msg_date": "Sun, 29 Apr 2018 09:18:11 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues while running select sql query" }, { "msg_contents": "On Sun, Apr 29, 2018 at 7:48 PM, Justin Pryzby <[email protected]> wrote:\n\n> On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> > # SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day',\n> > client_received_start_timestamp at time zone '+5:30:0')::timestamp\n> without\n> > time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE\n> > client_received_start_timestamp >= '2018-3-28 18:30:0' AND\n> > client_received_start_timestamp < '2018-4-11 18:30:0' AND ((apiproxy in\n> > ('test-service' ) ) and (exchangeinstance != '(not set)' ) and\n> (devemail\n> > != 
'[email protected]' ) and (devemail != '[email protected]' ) and\n> > (devemail != '[email protected]' ) and (devemail != '[email protected]' )\n> and\n> > (apistatus = 'Success' ) and (apiaction not in\n> > ('LRN','finder','ManuallySelect' ) ) and (appname not in ('Mobile\n> Connect\n> > Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM',\n> 'MumbaiHBM',\n> > 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor',\n> > 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth',\n> > 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not\n> > provision' ) ) and (serorgid = 'aircel' )) GROUP BY\n> > serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n>\n> This table has inheritence children. Do they have constraints? On what\n> column? Is constraint_exclusion enabled and working for that?\n>\n> It looks like test.prod.fact_624 is being read using index in under 1sec,\n> and\n> the rest using seq scan, taking 5-10sec.\n>\n> So what are the table+index definitions of the parent and childs (say\n> fact_624\n> and 631).\n>\n> Have the child tables been recently ANALYZE ?\n> Also, have you manually ANALYZE the parent table?\n>\n\nHi Justin,\n\nThis table has inheritence children. Do they have constraints? On what\ncolumn? Is constraint_exclusion enabled and working for that?\n\nAnswer :- Is there a way to find out?\n\nSo what are the table+index definitions of the parent and childs (say\nfact_624\nand 631).\n\nAnswer :- Is there a way to find out?\n\nHave the child tables been recently ANALYZE ?\nAnswer :- I have not done anything and is there a way to find out.\n\nAlso, have you manually ANALYZE the parent table?\nAnswer :- Nope\n\nAny help will be highly appreciable. I look forward to hearing from you.\n\nBest Regards,\n\nKaushal\n\nOn Sun, Apr 29, 2018 at 7:48 PM, Justin Pryzby <[email protected]> wrote:On Sun, Apr 29, 2018 at 10:05:23AM +0530, Kaushal Shriyan wrote:\n> # SELECT serorgid,appname,sum(message_count) AS mtrc0,date_trunc('day',\n> client_received_start_timestamp at time zone '+5:30:0')::timestamp without\n> time zone AS time_unit FROM analytics.\"test.prod.fact\" WHERE\n> client_received_start_timestamp >= '2018-3-28 18:30:0' AND\n> client_received_start_timestamp < '2018-4-11 18:30:0' AND  ((apiproxy in\n> ('test-service' )  ) and (exchangeinstance != '(not set)'  ) and (devemail\n> != '[email protected]'  ) and (devemail != '[email protected]'  ) and\n> (devemail != '[email protected]'  ) and (devemail != '[email protected]'  ) and\n> (apistatus = 'Success'  ) and (apiaction not in\n> ('LRN','finder','ManuallySelect' )  ) and (appname not in ('Mobile Connect\n> Developer Portal (Int(', 'MinskHBM', 'LondonHBM', 'SeoulHBM', 'MumbaiHBM',\n> 'NVirginiaHBM','SPauloHBM', 'Mobile Connect HeartBeat Monitor',\n> 'PDMAOpenSDKTest1', 'PDMAOpenSDKTest2', 'PDMASDKTest', 'APIHealth',\n> 'A1qaDemoApp','test', 'dublin o2o test tool', 'Test from John do not\n> provision' )  ) and (serorgid = 'aircel'  ))  GROUP BY\n> serorgid,appname,time_unit ORDER BY time_unit DESC LIMIT 14400 OFFSET 0;\n\nThis table has inheritence children.  Do they have constraints?  On what\ncolumn?  
Is constraint_exclusion enabled and working for that?\n\nIt looks like test.prod.fact_624 is being read using index in under 1sec, and\nthe rest using seq scan, taking 5-10sec.\n\nSo what are the table+index definitions of the parent and childs (say fact_624\nand 631).\n\nHave the child tables been recently ANALYZE ?\nAlso, have you manually ANALYZE the parent table?Hi Justin,This table has inheritence children.  Do they have constraints?  On whatcolumn?  Is constraint_exclusion enabled and working for that?Answer :- Is there a way to find out?So what are the table+index definitions of the parent and childs (say fact_624and 631).Answer :- Is there a way to find out?Have the child tables been recently ANALYZE ?Answer :- I have not done anything and is there a way to find out.Also, have you manually ANALYZE the parent table?Answer :- NopeAny help will be highly appreciable. I look forward to hearing from you.Best Regards,Kaushal", "msg_date": "Sun, 29 Apr 2018 23:40:58 +0530", "msg_from": "Kaushal Shriyan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues while running select sql query" } ]
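For the "is there a way to find out" questions above, a sketch of the standard psql/catalog checks (the table names are taken from the plan in the thread; everything else is stock PostgreSQL, and the exact relation names may need adjusting):

-- Table and index definitions of the parent and of one child, including any CHECK constraints:
\d+ analytics."test.prod.fact"
\d+ analytics."test.prod.fact_624"

-- Is constraint_exclusion enabled? ('partition', the default, or 'on' is needed for children to be skipped):
SHOW constraint_exclusion;

-- List the inheritance children of the parent table:
SELECT c.relname
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'analytics."test.prod.fact"'::regclass;

-- When were the child tables last analyzed, manually or by autovacuum?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname LIKE 'test.prod.fact%';

-- A manual ANALYZE of the parent gathers the inheritance-tree statistics the planner needs here:
ANALYZE analytics."test.prod.fact";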
[ { "msg_contents": "I’m trying to get a query to use the index for sorting. As far as I can understand it should be possible. Since you’re reading this you’ve probably guessed that I’m stuck.\n\nI’ve boiled down my issue to the script below. Note that my real query needs about 80MB for the quick sort. The version using the index for sorting runs in about 300ms while the version that sorts uses about 700ms.\n\nDoes anyone have a good explanation for why the two queries behave differently and if there is something I can do to get rid of the memory sort?\n\nI’m running this on PostgreSQL 10.3 on x86_64-apple-darwin16.7.0, compiled by Apple LLVM version 9.0.0 (clang-900.0.39.2), 64-bit. Let me know if you need to know any configuration options.\n\n— \nThank you,\nAlf Lervåg\n\n\nBEGIN;\nCREATE TABLE reading (\n reading_id integer NOT NULL,\n datetime timestamp with time zone NOT NULL,\n value double precision NOT NULL);\n\nINSERT INTO reading (reading_id, datetime, value)\n SELECT reading_id, datetime, (random() - 0.9) * 100\n FROM generate_series('2016-01-01 00:00Z'::timestamptz, CURRENT_TIMESTAMP, '5 min') a(datetime)\n CROSS JOIN generate_series(1, 100, 1) b(reading_id);\n\nALTER TABLE reading ADD PRIMARY KEY (reading_id, datetime);\nANALYZE reading;\n\nEXPLAIN ANALYZE\nSELECT reading_id, datetime, value\nFROM reading WHERE reading_id IN (176, 155, 156)\nORDER BY reading_id, datetime;\n\n QUERY PLAN\nIndex Scan using reading_pkey on reading (cost=0.56..5.72 rows=1 width=20) (actual time=0.044..0.044 rows=0 loops=1)\n Index Cond: (reading_id = ANY ('{176,155,156}'::integer[]))\nPlanning time: 0.195 ms\nExecution time: 0.058 ms\n(4 rows)\n\nEXPLAIN ANALYZE\nSELECT reading_id, datetime, value\nFROM reading WHERE reading_id IN (VALUES (176), (155), (156))\nORDER BY reading_id, datetime;\n\n QUERY PLAN\nSort (cost=250704.99..252542.72 rows=735093 width=20) (actual time=0.030..0.030 rows=0 loops=1)\n Sort Key: reading.reading_id, reading.datetime\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.61..179079.12 rows=735093 width=20) (actual time=0.026..0.026 rows=0 loops=1)\n -> HashAggregate (cost=0.05..0.08 rows=3 width=4) (actual time=0.006..0.007 rows=3 loops=1)\n Group Key: \"*VALUES*\".column1\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.04 rows=3 width=4) (actual time=0.001..0.002 rows=3 loops=1)\n -> Index Scan using reading_pkey on reading (cost=0.56..57242.70 rows=245031 width=20) (actual time=0.005..0.005 rows=0 loops=3)\n Index Cond: (reading_id = \"*VALUES*\".column1)\nPlanning time: 0.162 ms\nExecution time: 0.062 ms\n(11 rows)\n\nROLLBACK;\n\n", "msg_date": "Mon, 30 Apr 2018 21:43:19 +0200", "msg_from": "=?utf-8?Q?Alf_Lerv=C3=A5g?= <[email protected]>", "msg_from_op": true, "msg_subject": "Why doesn't the second query use the index for sorting?" } ]
[ { "msg_contents": "Dear all\n\nCould you help me understand these two execution plans for the same\nquery (query 3 benchmark TPCH www.tpc.org/tpch), executed in two\ndifferent environments of Postgresql, as described below. These plans\nwere generated by the EXPLAIN ANALYZE command, and the time of plan 1\nwas 4.7 minutes and plan 2 was 2.95 minutes.\n\nExecution Plan 1 (query execution time 4.7 minutes):\n- https://explain.depesz.com/s/Ughh\n- Postgresql version 10.1 (default) with index on l_shipdate (table lineitem)\n\nExecution Plan 2 (query execution time 2.95 minutes):\n- https://explain.depesz.com/s/7Zb7\n- Postgresql version 9.5 (version with source code changed by me) with\nindex on l_orderkey (table lineitem).\n\nSome doubts\n- Difference between GroupAggregate and Finalize GroupAggregate\n- because some algorithms show measurements on \"Disk\" and others on\n\"Memory\" example:\n - External sort Disk: 52784kB\n - quicksort Memory: 47770kB\n\nBecause one execution plan was much smaller than the other,\nconsidering that the query is the same and the data are the same.\n--------------------------------------------------\nselect\n l_orderkey,\n sum(l_extendedprice * (1 - l_discount)) as revenue,\n o_orderdate,\n o_shippriority\nfrom\n customer,\n orders,\n lineitem\nwhere\n c_mktsegment = 'HOUSEHOLD'\n and c_custkey = o_custkey\n and l_orderkey = o_orderkey\n and o_orderdate < date '1995-03-21'\n and l_shipdate > date '1995-03-21'\ngroup by\n l_orderkey,\n o_orderdate,\n o_shippriority\norder by\n revenue desc,\n o_orderdate\n--------------------------------------------------\n\nbest regards\n\n", "msg_date": "Sat, 5 May 2018 08:16:42 -0700", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "help in analysis of execution plans" }, { "msg_contents": "Further information is that th Postgresql with modified source code, is\nthat I modified some internal functions of cost (source code) and\nparameters in Postgresql.conf so that it is possible for the DBMS to\ndifferentiate cost of read (random and sequence) and write (random and\nsequence), this is because reading in SSDs' and more than 400 times faster\nthan HDD. This is due to academic research that I am doing.\n\nSee schema of the tables used below:\nhttps://docs.snowflake.net/manuals/_images/sample-data-tpch-schema.png\n\nI am using 40g scale, in this way the lineitem table has (40 * 6 million)\n240 million of the rows.\n\nRegards\nNeto\n\n2018-05-05 8:16 GMT-07:00 Neto pr <[email protected]>:\n\n> Dear all\n>\n> Could you help me understand these two execution plans for the same\n> query (query 3 benchmark TPCH www.tpc.org/tpch), executed in two\n> different environments of Postgresql, as described below. 
These plans\n> were generated by the EXPLAIN ANALYZE command, and the time of plan 1\n> was 4.7 minutes and plan 2 was 2.95 minutes.\n>\n> Execution Plan 1 (query execution time 4.7 minutes):\n> - https://explain.depesz.com/s/Ughh\n> - Postgresql version 10.1 (default) with index on l_shipdate (table\n> lineitem)\n>\n> Execution Plan 2 (query execution time 2.95 minutes):\n> - https://explain.depesz.com/s/7Zb7\n> - Postgresql version 9.5 (version with source code changed by me) with\n> index on l_orderkey (table lineitem).\n>\n> Some doubts\n> - Difference between GroupAggregate and Finalize GroupAggregate\n> - because some algorithms show measurements on \"Disk\" and others on\n> \"Memory\" example:\n> - External sort Disk: 52784kB\n> - quicksort Memory: 47770kB\n>\n> Because one execution plan was much smaller than the other,\n> considering that the query is the same and the data are the same.\n> --------------------------------------------------\n> select\n> l_orderkey,\n> sum(l_extendedprice * (1 - l_discount)) as revenue,\n> o_orderdate,\n> o_shippriority\n> from\n> customer,\n> orders,\n> lineitem\n> where\n> c_mktsegment = 'HOUSEHOLD'\n> and c_custkey = o_custkey\n> and l_orderkey = o_orderkey\n> and o_orderdate < date '1995-03-21'\n> and l_shipdate > date '1995-03-21'\n> group by\n> l_orderkey,\n> o_orderdate,\n> o_shippriority\n> order by\n> revenue desc,\n> o_orderdate\n> --------------------------------------------------\n>\n> best regards\n>\n\nFurther information is that th Postgresql with modified source code, is that I modified some internal functions of cost (source code) and parameters in Postgresql.conf so that it is possible for the DBMS to differentiate cost of read (random and sequence) and write (random and sequence), this is because reading in SSDs' and more than 400 times faster than HDD. This is due to academic research that I am doing.See schema of the tables used below:https://docs.snowflake.net/manuals/_images/sample-data-tpch-schema.pngI am using 40g scale, in this way the lineitem table has (40 * 6 million) 240 million of the rows.RegardsNeto2018-05-05 8:16 GMT-07:00 Neto pr <[email protected]>:Dear all\n\nCould you help me understand these two execution plans for the same\nquery (query 3 benchmark TPCH www.tpc.org/tpch), executed in two\ndifferent environments of Postgresql, as described below. 
These plans\nwere generated by the EXPLAIN ANALYZE command, and the time of plan 1\nwas 4.7 minutes and plan 2 was 2.95 minutes.\n\nExecution Plan 1 (query execution time 4.7 minutes):\n- https://explain.depesz.com/s/Ughh\n- Postgresql version 10.1 (default) with index on l_shipdate (table lineitem)\n\nExecution Plan 2 (query execution time 2.95 minutes):\n- https://explain.depesz.com/s/7Zb7\n- Postgresql version 9.5 (version with source code changed by me) with\nindex on l_orderkey (table lineitem).\n\nSome doubts\n- Difference between GroupAggregate and Finalize GroupAggregate\n- because some algorithms show measurements on \"Disk\" and others on\n\"Memory\" example:\n     - External sort Disk: 52784kB\n     - quicksort Memory: 47770kB\n\nBecause one execution plan was much smaller than the other,\nconsidering that the query is the same and the data are the same.\n--------------------------------------------------\nselect\n    l_orderkey,\n    sum(l_extendedprice * (1 - l_discount)) as revenue,\n    o_orderdate,\n    o_shippriority\nfrom\n    customer,\n    orders,\n    lineitem\nwhere\n    c_mktsegment = 'HOUSEHOLD'\n    and c_custkey = o_custkey\n    and l_orderkey = o_orderkey\n    and o_orderdate < date '1995-03-21'\n    and l_shipdate > date '1995-03-21'\ngroup by\n    l_orderkey,\n    o_orderdate,\n    o_shippriority\norder by\n    revenue desc,\n    o_orderdate\n--------------------------------------------------\n\nbest regards", "msg_date": "Sat, 5 May 2018 11:07:26 -0700", "msg_from": "Neto pr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help in analysis of execution plans" }, { "msg_contents": "On 6 May 2018 at 03:16, Neto pr <[email protected]> wrote:\n> Execution Plan 1 (query execution time 4.7 minutes):\n> - https://explain.depesz.com/s/Ughh\n> - Postgresql version 10.1 (default) with index on l_shipdate (table lineitem)\n>\n> Execution Plan 2 (query execution time 2.95 minutes):\n> - https://explain.depesz.com/s/7Zb7\n> - Postgresql version 9.5 (version with source code changed by me) with\n> index on l_orderkey (table lineitem).\n>\n> Some doubts\n> - Difference between GroupAggregate and Finalize GroupAggregate\n\nA Finalize Aggregate node is required to combine the Partially\nAggregated records. A Partially Aggregated result differs from normal\naggregation as the final function of the aggregate is not called\nduring partial aggregation. This allows the finalize aggregate to\ncombine the partially aggregated results then call the final function.\nImagine an aggregate like AVG() where it goes and internally\ncalculates the sum and the count of non-null records. A partial\naggregate node would return {sum, count}, but a normal aggregate would\nreturn {sum / count}. Having {sum, count} allows each partial\naggregated result to be combined allowing the average to be calculated\nwith the total_sum / total_count.\n\n> - because some algorithms show measurements on \"Disk\" and others on\n> \"Memory\" example:\n> - External sort Disk: 52784kB\n> - quicksort Memory: 47770kB\n\nPlease read about work_mem in\nhttps://www.postgresql.org/docs/current/static/runtime-config-resource.html\n\nThe reason 10.1 is slower with the parallel query is down to the\nbitmap heap scan going lossy and scanning many more heap pages than it\nexpected. You could solve this in various ways:\n\n1. Increase work_mem enough to prevent the scan from going lossy (see\nlossy=1605531 in your plan)\n2. turn off enable_bitmapscans (set enable_Bitmapscan = off);\n3. 
Cluster the table on l_shipdate\n\nUnfortunately, parallel query often will choose to use a parallel plan\nutilising multiple workers on a less efficient plan when it estimates\nthe cost / n_workers is lower than the cheapest serial plan's cost.\nThis appears to also be a contributor to your problem. You may get the\n9.5 performance if you disabled parallel query, or did one of the 3\noptions above.\n\nYou may also want to consider using a BRIN index on the l_shipdate\ninstead of a BTREE index. The cost estimation for BRIN may realise\nthat the bitmap heap scan is not a good option, although I'm not sure\nit'll be better than what the current v10 plan is using.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n", "msg_date": "Sun, 6 May 2018 12:33:36 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help in analysis of execution plans" } ]
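The concrete commands behind the three options and the BRIN suggestion above, as a sketch for an interactive session (the work_mem figure and the index names are assumptions rather than values from the thread, and the options are alternatives, not a recipe to apply all at once):

-- Option 1: enough work_mem that the bitmap stays exact instead of going lossy (lossy=1605531 in the v10 plan):
SET work_mem = '512MB';

-- Option 2: forbid bitmap scans for the session:
SET enable_bitmapscan = off;

-- Option 3: physically order lineitem by the filter column (assumes a btree index on l_shipdate with this name):
CLUSTER lineitem USING lineitem_l_shipdate_idx;

-- To compare against the 9.5 serial plan, parallel query can be switched off for the session:
SET max_parallel_workers_per_gather = 0;

-- BRIN alternative to the btree index on l_shipdate:
CREATE INDEX lineitem_l_shipdate_brin ON lineitem USING brin (l_shipdate);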
[ { "msg_contents": "Hi,\n\nI'm trying to debug improve the performance of my time bucketing query.\nWhat I'm trying to do is essentially bucket by an arbitrary interval and\nthen do some aggregations within that interval (min,max,sum, etc). I am\nusing a `max` in the query I posted. For context in the data, it is 1\nminute candles of cryptocurrency data (open price, high price, low price,\nclose price, volume, for an interval). I want to transform this to a 10\nminute interval, on demand, and that is what this query is meant to do.\n\nI understand the slow part of my query is in the LEFT JOIN, but I just\ncan't quite figure out how to do it without the LEFT JOIN.\n\nHere is my pastebin with all the details so I don't clutter the message. I\ntried to follow everything in the 'Slow Query Questions' WIKI page. There\nis also a depesz link there.\n\nhttps://ybin.me/p/9d3f52d88b4b2a46#kYLotYpNuIjjbp2P4l3la8fGSJIV0p+opH4sPq1m2/Y=\n\nThank for your help,\n\nJulian\n\nHi,I'm trying to debug improve the performance of my time bucketing query. What I'm trying to do is essentially bucket by an arbitrary interval and then do some aggregations within that interval (min,max,sum, etc). I am using a `max` in the query I posted. For context in the data, it is 1 minute candles of cryptocurrency data (open price, high price, low price, close price, volume, for an interval). I want to transform this to a 10 minute interval, on demand, and that is what this query is meant to do. I understand the slow part of my query is in the LEFT JOIN, but I just can't quite figure out how to do it without the LEFT JOIN.Here is my pastebin with all the details so I don't clutter the message. I tried to follow everything in the 'Slow Query Questions' WIKI page. There is also a depesz link there. https://ybin.me/p/9d3f52d88b4b2a46#kYLotYpNuIjjbp2P4l3la8fGSJIV0p+opH4sPq1m2/Y=Thank for your help,Julian", "msg_date": "Mon, 7 May 2018 19:33:17 -0400", "msg_from": "Julian Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Time bucketing query performance" }, { "msg_contents": "On Mon, May 07, 2018 at 07:33:17PM -0400, Julian Wilson wrote:\n> Hi,\n> \n> I'm trying to debug improve the performance of my time bucketing query.\n> What I'm trying to do is essentially bucket by an arbitrary interval and\n> then do some aggregations within that interval (min,max,sum, etc). I am\n> using a `max` in the query I posted. For context in the data, it is 1\n> minute candles of cryptocurrency data (open price, high price, low price,\n> close price, volume, for an interval). I want to transform this to a 10\n> minute interval, on demand, and that is what this query is meant to do.\n> \n> I understand the slow part of my query is in the LEFT JOIN, but I just\n> can't quite figure out how to do it without the LEFT JOIN.\n> \n> Here is my pastebin with all the details so I don't clutter the message. I\n> tried to follow everything in the 'Slow Query Questions' WIKI page. There\n> is also a depesz link there.\n> \n> https://ybin.me/p/9d3f52d88b4b2a46#kYLotYpNuIjjbp2P4l3la8fGSJIV0p+opH4sPq1m2/Y=\n\nThsse may not be a substantial part of the issue, but I have some suggestions:\n\n0) You're using CTE, which cannot have stats (unlike temporary table). Can you\nrewrite without, perhaps with GROUP BY date_trunc('hour', time_open) ?\n\n1) you're querying on start_time AND end_time, and I believe the planner thinks\nthose conditions are independent, but they're not. 
Try getting rid of the\nframe_end and move the \"5 months\" into the main query using BETWEEN or two\nANDed conditions on the same variable. See if the rowcount estimate is more\naccurate:\n\n -> Index Scan using historical_ohlcv_pkey on historical_ohlcv ohlcv (cost=0.56..2488.58 ROWS=12110 width=22) (actual time=3.709..4.403 ROWS=9 loops=3625)\n Index Cond: ((exchange_symbol = 'BINANCE'::text) AND (symbol_id = 'ETHBTC'::text) AND (time_open >= g.start_time))\n Filter: (time_close < g.end_time)\n\nAlternately, you could try:\nCREATE STATISTICS (dependencies) ON (time_open,time_close) FROM historical_ohlcv ;\nANALYZE historical_ohlcv;\n\n2) Is your work_mem really default? 64kb? Recommend changing it to see if the\nplan changes (although it looks like that's not the issue).\n\n3) If you have SSD, you should probably CREATE TABLESPACE tmp LOCATION /srv/pgsql_tmp and\nALTER SYSTEM SET temp_tablespaces='tmp' and SELECT pg_reload_conf();\n\n4) If those don't help, as a test, try running with SET enable_nestloop=off.\nI'm guessing that fixing rowcount estimate in (1) will be sufficient.\n\n5) May not be important, but rerun with explain (ANALYZE,BUFFERS) and show the\nresults.\n\nJustin\n\n", "msg_date": "Mon, 7 May 2018 20:08:57 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time bucketing query performance" } ]
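Two of the suggestions above, written out as a hedged sketch. CREATE STATISTICS in PostgreSQL 10 expects a statistics name and an unparenthesized column list, so the form below differs slightly from the command quoted above; the statistics name is arbitrary. The 10-minute bucket rewrite only illustrates grouping without the generate_series LEFT JOIN and uses just the columns quoted in the plan above.

-- Extended statistics recording that time_open and time_close move together
-- (syntax per PostgreSQL 10; the name is arbitrary).
CREATE STATISTICS ohlcv_open_close_dep (dependencies)
    ON time_open, time_close FROM historical_ohlcv;
ANALYZE historical_ohlcv;

-- Bucketing without generating a series of frames: date_trunc() only goes
-- down to whole units, so 10-minute buckets are derived from the epoch.
-- count(*) stands in for the real aggregates (max/min/sum of the candle
-- columns), which are not named in this excerpt.
SELECT to_timestamp(floor(extract(epoch FROM time_open) / 600) * 600) AS bucket,
       count(*) AS candles
FROM   historical_ohlcv
WHERE  exchange_symbol = 'BINANCE'
  AND  symbol_id = 'ETHBTC'
  AND  time_open >= now() - interval '5 months'
GROUP  BY bucket
ORDER  BY bucket;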
[ { "msg_contents": "Hi Team,\r\n\r\nWe are facing issues with one of our query, when we use order by count it is taking lot of time to execute the query. To be precise it is taking 9 min to execute the query from table which has ~220 million records. Is there a way to make this query run faster and efficiently using order by count. Below is the query which I’m trying to run\r\n\r\nSelect account_number, sum(count_of_event) as \"error_count\"\r\nFROM event_daily_summary\r\ngroup by account_number,event_date,process_name\r\nhaving event_date >= '2018-05-07'\r\nand process_name='exp90d_xreerror'\r\norder by sum(count_of_event) desc\r\nlimit 5000\r\n\r\n\r\nThanks,\r\nAnil\r\n\n\n\n\n\n\n\n\n\nHi Team,\n \nWe are facing issues with one of our query, when we use order by count it is taking lot of time to execute the query. To be precise it is taking 9 min to execute the query from table which has ~220 million\r\n records. Is there a way to make this query run faster and efficiently using order by count. Below is the query which I’m trying to run\n \nSelect account_number, sum(count_of_event) as \"error_count\"\nFROM event_daily_summary\ngroup by account_number,event_date,process_name\nhaving event_date >= '2018-05-07'\nand process_name='exp90d_xreerror'\norder by sum(count_of_event) desc\nlimit 5000\n \n \nThanks,\nAnil", "msg_date": "Fri, 18 May 2018 20:32:55 +0000", "msg_from": "\"Kotapati, Anil\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with tuning slow query " }, { "msg_contents": "On Fri, May 18, 2018 at 08:32:55PM +0000, Kotapati, Anil wrote:\n> We are facing issues with one of our query, when we use order by count it is taking lot of time to execute the query. To be precise it is taking 9 min to execute the query from table which has ~220 million records. Is there a way to make this query run faster and efficiently using order by count. Below is the query which I’m trying to run\n> \n> Select account_number, sum(count_of_event) as \"error_count\"\n> FROM event_daily_summary\n> group by account_number,event_date,process_name\n> having event_date >= '2018-05-07'\n> and process_name='exp90d_xreerror'\n> order by sum(count_of_event) desc\n> limit 5000\n\nWould you provide the information listed here ? Table definition, query plan, etc\nhttps://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nAlso, why \"HAVING\" ? Shouldn't you use WHERE ?\n\nDoes the real query have conditions on event_date and process name or is that\njust for testing purposes?\n\nJustin\n\n", "msg_date": "Sat, 19 May 2018 10:57:49 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with tuning slow query" } ]
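A hedged sketch of the rewrite Justin is hinting at: filtering in WHERE rather than HAVING, plus a supporting index. The index name and column order are assumptions, since the table definition was not posted in this thread.

-- Filter in WHERE: for conditions on plain columns (not aggregates) this is
-- the natural place, and it is equivalent to the original HAVING form here
-- because the filtered columns are GROUP BY keys.
SELECT account_number, sum(count_of_event) AS error_count
FROM   event_daily_summary
WHERE  process_name = 'exp90d_xreerror'
  AND  event_date >= '2018-05-07'
GROUP  BY account_number, event_date, process_name
ORDER  BY sum(count_of_event) DESC
LIMIT  5000;

-- A composite index matching the filter can avoid scanning most of the
-- ~220 million rows; the index name and column order are assumptions.
CREATE INDEX event_daily_summary_proc_date_idx
    ON event_daily_summary (process_name, event_date);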
[ { "msg_contents": "Hi all,\n\nHope my mail finds you in good time. I had a problem with a query which is\nhitting the production seriously.\nThe below is the sub part of the query for which I cannot reduce the CPU\ncost. \n\nPlease check and verify whether I'm doing wrong or whether that type index\ntype suits it or not. \n\nKindly help me resolve this issue.\n\n*Query*:\n\nexplain select sum(CASE\n WHEN MOD(cast(effort_hours as decimal),1) =\n0.45 THEN\n cast(effort_hours as int)+0.75\n ELSE\n CASE\n WHEN MOD(cast(effort_hours as decimal),1) =\n0.15 THEN\n cast(effort_hours as int) + 0.25\n \n ELSE\n CASE\n WHEN MOD(cast(effort_hours as decimal),1) =\n0.30 THEN\n cast(effort_hours as int) + 0.5\n \n ELSE\n CASE\n WHEN MOD(cast(effort_hours as decimal),1) =\n0 THEN\n cast(effort_hours as int) \n end\n END\n END\n END) from tms_timesheet_details, tms_wsr_header\nheader where wsr_header_id=header.id and work_order_no != 'CORPORATE';\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Aggregate (cost=9868.91..9868.92 rows=1 width=8)\n -> Hash Join (cost=608.27..5647.67 rows=70354 width=8)\n Hash Cond: (tms_timesheet_details.wsr_header_id = header.id)\n -> Seq Scan on tms_timesheet_details (cost=0.00..3431.14\nrows=72378 width=12)\n Filter: ((work_order_no)::text <> 'CORPORATE'::text)\n -> Hash (cost=399.23..399.23 rows=16723 width=4)\n -> Seq Scan on tms_wsr_header header (cost=0.00..399.23\nrows=16723 width=4)\n(7 rows)\n\n\nThe count of number of rows in the tables used are:\n\n1) tms_timesheet_details:\n\namp_test=# select count(*) from tms_timesheet_details;\n count\n--------\n 110411\n(1 row)\n\n2) tms_wsr_header:\n\namp_test=# select count(*) from tms_wsr_header;\n count\n-------\n 16723\n(1 row)\n\n\nThe details of the tables and the columns used are as below:\n\n1) tms_timesheet_details:\n\namp_test=# \\d tms_timesheet_details\n Table\n\"public.tms_timesheet_details\"\n Column | Type | \nModifiers\n---------------------+-----------------------------+--------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_timesheet_details_id_seq'::regclass)\n status | character varying |\n create_uid | integer |\n effort_hours | double precision |\n work_order_no | character varying |\n res_employee_id | character varying |\n wsr_header_id | integer |\n remarks | character varying |\n write_date | timestamp without time zone |\n timesheet_header_id | integer |\n date | date |\n create_date | timestamp without time zone |\n write_uid | integer |\n release_no | character varying |\n project_id | character varying |\n loc_name | character varying |\n user_id | integer |\n ao_emp_id | character varying |\nIndexes:\n \"tms_timesheet_details_pkey\" PRIMARY KEY, btree (id)\n \"tms_timesheet_details_uniq_res_employee_id_efforts\" UNIQUE, btree\n(res_employee_id, work_order_no, release_no, date, project_id)\n \"timesheet_detail_inx\" btree (wsr_header_id, timesheet_header_id)\n \"ts_detail_date_idx\" btree (date)\n \"ts_detail_hdr_id_idx\" btree (timesheet_header_id)\n \"ts_detail_release_no_idx\" btree (release_no)\n \"work_order_no_idx\" btree (work_order_no)\nForeign-key constraints:\n \"tms_timesheet_details_create_uid_fkey\" FOREIGN KEY (create_uid)\nREFERENCES res_users(id) ON DELETE SET NULL\n \"tms_timesheet_details_timesheet_header_id_fkey\" FOREIGN KEY\n(timesheet_header_id) REFERENCES tms_timesheet_header(id) ON DELETE SET NULL\n \"tms_timesheet_details_user_id_fkey\" FOREIGN 
KEY (user_id) REFERENCES\nres_users(id) ON DELETE SET NULL\n \"tms_timesheet_details_write_uid_fkey\" FOREIGN KEY (write_uid)\nREFERENCES res_users(id) ON DELETE SET NULL\n \"tms_timesheet_details_wsr_header_id_fkey\" FOREIGN KEY (wsr_header_id)\nREFERENCES tms_wsr_header(id) ON DELETE SET NULL\n\n\n2) tms_wsr_header:\n\namp_test=# \\d tms_wsr_header\n Table \"public.tms_wsr_header\"\n Column | Type | \nModifiers\n---------------------+-----------------------------+-------------------------------------------------------------\n id | integer | not null default\nnextval('tms_wsr_header_id_seq'::regclass)\n create_uid | integer |\n status_id | integer |\n ao_emp_name | character varying |\n ao_emp_id | character varying |\n res_employee_id | character varying |\n comments | text |\n write_uid | integer |\n write_date | timestamp without time zone |\n create_date | timestamp without time zone |\n timesheet_period_id | integer |\n user_id | integer |\nIndexes:\n \"tms_wsr_header_pkey\" PRIMARY KEY, btree (id)\n \"res_employee_idx\" btree (res_employee_id)\n \"tmesheet_perd_idx\" btree (timesheet_period_id)\nForeign-key constraints:\n \"tms_wsr_header_create_uid_fkey\" FOREIGN KEY (create_uid) REFERENCES\nres_users(id) ON DELETE SET NULL\n \"tms_wsr_header_status_id_fkey\" FOREIGN KEY (status_id) REFERENCES\ntms_timesheet_status(id) ON DELETE SET NULL\n \"tms_wsr_header_timesheet_period_id_fkey\" FOREIGN KEY\n(timesheet_period_id) REFERENCES tms_timesheet_period(id) ON DELETE SET NULL\n \"tms_wsr_header_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES\nres_users(id) ON DELETE SET NULL\n \"tms_wsr_header_write_uid_fkey\" FOREIGN KEY (write_uid) REFERENCES\nres_users(id) ON DELETE SET NULL\nReferenced by:\n TABLE \"tms_release_allocation_comments\" CONSTRAINT\n\"tms_release_allocation_comments_wsr_header_id_fkey\" FOREIGN KEY\n(wsr_header_id) REFERENCES tms_wsr_header(id) ON DELETE SET NULL\n TABLE \"tms_timesheet_details\" CONSTRAINT\n\"tms_timesheet_details_wsr_header_id_fkey\" FOREIGN KEY (wsr_header_id)\nREFERENCES tms_wsr_header(id) ON DELETE SET NULL\n TABLE \"tms_workflow_history\" CONSTRAINT\n\"tms_workflow_history_wsr_id_fkey\" FOREIGN KEY (wsr_id) REFERENCES\ntms_wsr_header(id) ON DELETE SET NULL\n\n\nHope the above information is sufficient. Kindly show me a way to reduce the\ncost of this query ASAP.\n\nThanks in advance.\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 20 May 2018 23:15:56 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Help me in reducing the CPU cost for the high cost query below, as\n it is hitting production seriously!!" }, { "msg_contents": "pavan95 wrote\n> Hi all,\n> \n> Hope my mail finds you in good time. I had a problem with a query which is\n> hitting the production seriously.\n> The below is the sub part of the query for which I cannot reduce the CPU\n> cost. \n> \n> Please check and verify whether I'm doing wrong or whether that type index\n> type suits it or not. 
\n> \n> Kindly help me resolve this issue.\n> \n> *Query*:\n> \n> explain select sum(CASE\n> WHEN MOD(cast(effort_hours as decimal),1) =\n> 0.45 THEN\n> cast(effort_hours as int)+0.75\n> ELSE\n> CASE\n> WHEN MOD(cast(effort_hours as decimal),1)\n> =\n> 0.15 THEN\n> cast(effort_hours as int) + 0.25\n> \n> ELSE\n> CASE\n> WHEN MOD(cast(effort_hours as decimal),1)\n> =\n> 0.30 THEN\n> cast(effort_hours as int) + 0.5\n> \n> ELSE\n> CASE\n> WHEN MOD(cast(effort_hours as decimal),1)\n> =\n> 0 THEN\n> cast(effort_hours as int) \n> end\n> END\n> END\n> END) from tms_timesheet_details,\n> tms_wsr_header\n> header where wsr_header_id=header.id and work_order_no != 'CORPORATE';\n> \n> \n> \n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\nTo start with you can try re-writing this so that it only does the mod cast\nonce. e.g:\nsum ( \nCASE MOD(cast(effort_hours as decimal),1)\n\tWHEN 0.45 THEN cast(effort_hours as int)+0.75\n\tWHEN 0.15 THEN cast(effort_hours as int)+0.25\n\tWHEN 0.30 THEN cast(effort_hours as int)+0.5\n\tWHEN 0 THEN cast(effort_hours as int)\nEND\n)\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 01:03:50 -0700 (MST)", "msg_from": "mlunnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi mlunon,\n\nA great thanks for your timely response. And yes it worked when I rewritten\nthe query.\n\nThe query got enhanced with approximate of 1000 planner seeks. You can find\nit from the explain plan below:\n\namp_test=# explain select\nsum (\nCASE MOD(cast(effort_hours as decimal),1)\n WHEN 0.45 THEN cast(effort_hours as int)+0.75\n WHEN 0.15 THEN cast(effort_hours as int)+0.25\n WHEN 0.30 THEN cast(effort_hours as int)+0.5\n WHEN 0 THEN cast(effort_hours as int)\nEND\n)\nfrom tms_timesheet_details detail , tms_wsr_header header where\nwsr_header_id=header.id and work_order_no != 'CORPORATE';\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Aggregate (cost=8813.60..8813.61 rows=1 width=8)\n -> Hash Join (cost=608.27..5647.67 rows=70354 width=8)\n Hash Cond: (detail.wsr_header_id = header.id)\n -> Seq Scan on tms_timesheet_details detail (cost=0.00..3431.14\nrows=72378 width=12)\n Filter: ((work_order_no)::text <> 'CORPORATE'::text)\n -> Hash (cost=399.23..399.23 rows=16723 width=4)\n -> Seq Scan on tms_wsr_header header (cost=0.00..399.23\nrows=16723 width=4)\n(7 rows)\n\n\nBut is this the optimum, can we reduce the cost more at least to around 5000\nplanner seeks. As it is only a subpart of the query which is called multiple\nnumber of times in the main query.\n\nAnd to send the main query along with tables description and explain plan it\nwill be a vast message so send you a sub-part.\n\nPlease help me to tune it more. Thanks in Advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 04:13:20 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi. Basically you want to convert a base 60 number to a decimal. So you \ndon't need conditionals. 
See if this works for you:\n\nSELECT floor(effort_hours) + ( (effort_hours - floor(effort_hours)) / \n0.6 )\nfrom tms_timesheet_details detail , tms_wsr_header header where\nwsr_header_id=header.id and work_order_no != 'CORPORATE';\n\nRegards,\nAbbas\n\nOn Mon, May 21, 2018 at 3:43 PM, pavan95 <[email protected]> \nwrote:\n> Hi mlunon,\n> \n> A great thanks for your timely response. And yes it worked when I \n> rewritten\n> the query.\n> \n> The query got enhanced with approximate of 1000 planner seeks. You \n> can find\n> it from the explain plan below:\n> \n> amp_test=# explain select\n> sum (\n> CASE MOD(cast(effort_hours as decimal),1)\n> WHEN 0.45 THEN cast(effort_hours as int)+0.75\n> WHEN 0.15 THEN cast(effort_hours as int)+0.25\n> WHEN 0.30 THEN cast(effort_hours as int)+0.5\n> WHEN 0 THEN cast(effort_hours as int)\n> END\n> )\n> from tms_timesheet_details detail , tms_wsr_header header where\n> wsr_header_id=header.id and work_order_no != 'CORPORATE';\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------\n> Aggregate (cost=8813.60..8813.61 rows=1 width=8)\n> -> Hash Join (cost=608.27..5647.67 rows=70354 width=8)\n> Hash Cond: (detail.wsr_header_id = header.id)\n> -> Seq Scan on tms_timesheet_details detail \n> (cost=0.00..3431.14\n> rows=72378 width=12)\n> Filter: ((work_order_no)::text <> 'CORPORATE'::text)\n> -> Hash (cost=399.23..399.23 rows=16723 width=4)\n> -> Seq Scan on tms_wsr_header header \n> (cost=0.00..399.23\n> rows=16723 width=4)\n> (7 rows)\n> \n> \n> But is this the optimum, can we reduce the cost more at least to \n> around 5000\n> planner seeks. As it is only a subpart of the query which is called \n> multiple\n> number of times in the main query.\n> \n> And to send the main query along with tables description and explain \n> plan it\n> will be a vast message so send you a sub-part.\n> \n> Please help me to tune it more. Thanks in Advance.\n> \n> Regards,\n> Pavan\n> \n> \n> \n> --\n> Sent from: \n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n> \n\n\n\n\nHi. Basically you want to convert a base 60 number to a decimal. So you don't need conditionals. See if this works for you:SELECT floor(effort_hours) + ( (effort_hours - floor(effort_hours)) / 0.6 )from tms_timesheet_details detail , tms_wsr_header header  wherewsr_header_id=header.id and work_order_no != 'CORPORATE';Regards,Abbas\n\nOn Mon, May 21, 2018 at 3:43 PM, pavan95 <[email protected]> wrote:\nHi mlunon,\n\nA great thanks for your timely response. And yes it worked when I rewritten\nthe query.\n\nThe query got enhanced with approximate of 1000 planner seeks. 
You can find\nit from the explain plan below:\n\namp_test=# explain select\nsum (\nCASE MOD(cast(effort_hours as decimal),1)\n WHEN 0.45 THEN cast(effort_hours as int)+0.75\n WHEN 0.15 THEN cast(effort_hours as int)+0.25\n WHEN 0.30 THEN cast(effort_hours as int)+0.5\n WHEN 0 THEN cast(effort_hours as int)\nEND\n)\nfrom tms_timesheet_details detail , tms_wsr_header header where\nwsr_header_id=header.id and work_order_no != 'CORPORATE';\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Aggregate (cost=8813.60..8813.61 rows=1 width=8)\n -> Hash Join (cost=608.27..5647.67 rows=70354 width=8)\n Hash Cond: (detail.wsr_header_id = header.id)\n -> Seq Scan on tms_timesheet_details detail (cost=0.00..3431.14\nrows=72378 width=12)\n Filter: ((work_order_no)::text <> 'CORPORATE'::text)\n -> Hash (cost=399.23..399.23 rows=16723 width=4)\n -> Seq Scan on tms_wsr_header header (cost=0.00..399.23\nrows=16723 width=4)\n(7 rows)\n\n\nBut is this the optimum, can we reduce the cost more at least to around 5000\nplanner seeks. As it is only a subpart of the query which is called multiple\nnumber of times in the main query.\n\nAnd to send the main query along with tables description and explain plan it\nwill be a vast message so send you a sub-part.\n\nPlease help me to tune it more. Thanks in Advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Mon, 21 May 2018 15:28:07 +0400", "msg_from": "Abbas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Abbas,\n\nThanks for your valuable suggestions. To my surprise I got the same output\nas what I have executed before. \n\nBut unfortunately I'm unable to understand the logic of the code, in\nspecific what is base 60 number? The used data type for \"effort_hours\"\ncolumn is 'double precision'. \n\nKindly help me in understanding the logic. Thanks in advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 06:39:55 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "On Mon, May 21, 2018 at 6:39 AM, pavan95 <[email protected]>\nwrote:\n\n> Hi Abbas,\n>\n> Thanks for your valuable suggestions. To my surprise I got the same output\n> as what I have executed before.\n>\n> But unfortunately I'm unable to understand the logic of the code, in\n> specific what is base 60 number? The used data type for \"effort_hours\"\n> column is 'double precision'.\n>\n> Kindly help me in understanding the logic. Thanks in advance.\n\n\nThis is not converting a \"base 60 number to base 10\" - this is computing a\npercentage, which is indeed what you want to do.\n\nSince 0.60 is the maximum value of the fraction in this encoding scheme\ndividing the actual value by 0.60 tells you what percentage (between 0 and\n1) your value is of the maximum. But you have to get rid of the hours\ncomponent first, and floor truncates the minutes leaving just the hours\nwhich you can subtract out from the original leaving only the minutes.\n​\nDavid J.​\n\nP.S. 
​You could consider adding a new column to the table, along with a\ntrigger, and compute and store the derived value upon insert.\n​\n\nOn Mon, May 21, 2018 at 6:39 AM, pavan95 <[email protected]> wrote:Hi Abbas,\n\nThanks for your valuable suggestions. To my surprise I got the same output\nas what I have executed before. \n\nBut unfortunately I'm unable to understand the logic of the code, in\nspecific what is base 60 number? The used data type for \"effort_hours\"\ncolumn is 'double precision'. \n\nKindly help me in understanding the logic. Thanks in advance.This is not converting a \"base 60 number to base 10\" - this is computing a percentage, which is indeed what you want to do.Since 0.60 is the maximum value of the fraction in this encoding scheme dividing the actual value by 0.60 tells you what percentage (between 0 and 1) your value is of the maximum.  But you have to get rid of the hours component first, and floor truncates the minutes leaving just the hours which you can subtract out from the original leaving only the minutes.​David J.​\nP.S. ​You could consider adding a new column to the table, along with a trigger, and compute and store the derived value upon insert.​", "msg_date": "Mon, 21 May 2018 07:03:19 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Sure thing. Base 60 or Sexagesimal is the numerical system used for \nmeasuring time (1 hour equals to 60 minutes and so on). But this case \nis even simpler, so without going into much detail about bases, you're \nmapping between two sets of numbers:\n\n0 -> 0\n.15 -> .25\n.30 -> .50\n.45 -> .75\n\n From working with clocks, we know that 15 minutes is .25 hours, 30 \nminutes is .5 hours and so on. So you only need to divide the \nfractional part ( effort_hours - floor(effort_hours) ) by .6 to get \nwhat you want.\n\nFor example, let's say effort_hours = 1.15; then floor(1.15) is 1; so:\n\nfloor(1.15) + ( (1.15 - floor(1.15)) / 0.6 ) = 1 + ( (1.15 - 1) / 0.6 ) \n= 1 + ( 0.15 / 0.60 ) = 1.25\n\nHope it helps. Feel free to ask a question if it's still unclear. :)\n\n\nOn Mon, May 21, 2018 at 6:09 PM, pavan95 <[email protected]> \nwrote:\n> Hi Abbas,\n> \n> Thanks for your valuable suggestions. To my surprise I got the same \n> output\n> as what I have executed before.\n> \n> But unfortunately I'm unable to understand the logic of the code, in\n> specific what is base 60 number? The used data type for \"effort_hours\"\n> column is 'double precision'.\n> \n> Kindly help me in understanding the logic. Thanks in advance.\n> \n> Regards,\n> Pavan\n> \n> \n> \n> --\n> Sent from: \n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n> \n\n\n\n\nSure thing. Base 60 or Sexagesimal is the numerical system used for measuring time (1 hour equals to 60 minutes and so on). But this case is even simpler, so without going into much detail about bases, you're mapping between two sets of numbers:0 -> 0.15 -> .25.30 -> .50.45 -> .75From working with clocks, we know that 15 minutes is .25 hours, 30 minutes is .5 hours and so on. So you only need to divide the fractional part ( effort_hours - floor(effort_hours) ) by .6 to get what you want.For example, let's say effort_hours = 1.15; then floor(1.15) is 1; so:floor(1.15) + ( (1.15 - floor(1.15)) / 0.6 ) = 1 + ( (1.15 - 1) / 0.6 ) = 1 + ( 0.15 / 0.60 ) = 1.25Hope it helps. 
Feel free to ask a question if it's still unclear. :)\n\nOn Mon, May 21, 2018 at 6:09 PM, pavan95 <[email protected]> wrote:\nHi Abbas,\n\nThanks for your valuable suggestions. To my surprise I got the same output\nas what I have executed before. \n\nBut unfortunately I'm unable to understand the logic of the code, in\nspecific what is base 60 number? The used data type for \"effort_hours\"\ncolumn is 'double precision'. \n\nKindly help me in understanding the logic. Thanks in advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html", "msg_date": "Mon, 21 May 2018 18:12:06 +0400", "msg_from": "Abbas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi abbas,\n\nThank you so much. I've got this query from my development team asking to\nimprove its performance. \n\nNow I got pretty much clear idea of it. And it will be the final extent to\nwhich we can tune the performance right?\n\nIf there is still a way give me some tips to enhance the query performance. \n\nBut kudos for your \"floor\" function. After a long struggle with the indexes,\njoins and the hints I came to know that there is also a way to tune the\nquery performance by rewriting the query.\n\nThanks in advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 07:34:35 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi David,\n\nThank you so much for your valuable inputs. Is there anything that I need\nto look from Indexes perspective or Join order ??\n\nKindly let me know if it can be tuned further.\n\nThank you very much. \n\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 07:43:57 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "On Mon, May 21, 2018 at 7:43 AM, pavan95 <[email protected]>\nwrote:\n\n> Hi David,\n>\n> Thank you so much for your valuable inputs. Is there anything that I need\n> to look from Indexes perspective or Join order ??\n>\n> Kindly let me know if it can be tuned further.\n>\n\nWhat I've got to give here is what you've received.\n\nDavid J.\n​\n\nOn Mon, May 21, 2018 at 7:43 AM, pavan95 <[email protected]> wrote:Hi David,\n\nThank you so much for your valuable inputs.  Is there anything that I need\nto look from Indexes perspective or Join order ??\n\nKindly let me know if  it can be tuned further.What I've got to give here is what you've received.David J.​", "msg_date": "Mon, 21 May 2018 08:06:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "pavan95 wrote\n> *Query*:\n> \n> explain select ... 
from tms_timesheet_details, tms_wsr_header header \n> where wsr_header_id=header.id and work_order_no != 'CORPORATE';\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------\n> Aggregate (cost=9868.91..9868.92 rows=1 width=8)\n> -> Hash Join (cost=608.27..5647.67 rows=70354 width=8)\n> Hash Cond: (tms_timesheet_details.wsr_header_id = header.id)\n> -> Seq Scan on tms_timesheet_details (cost=0.00..3431.14\n> rows=72378 width=12)\n> Filter: ((work_order_no)::text <> 'CORPORATE'::text)\n> -> Hash (cost=399.23..399.23 rows=16723 width=4)\n> -> Seq Scan on tms_wsr_header header (cost=0.00..399.23\n> rows=16723 width=4)\n> (7 rows)\n> \n> \n> --\n> Sent from:\n> http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\nWhy is the table tms_wsr_header in the from clause as it is not used in the\nselect columns? A simple \"wsr_header_id is not null\" would do the same as\nthis is a foreign key into the tms_wsr_header table. An index with on\ntms_timesheet_details.id \"where wsr_header_id is not null\" might then speed\nthe query up if there were significant numbers of rows with a null\nwsr_header_id.\nCheers\nMatthew\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 21 May 2018 08:48:13 -0700 (MST)", "msg_from": "mlunnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi all, \nThank you so much for your valuable responses.Tried every aspect which you\nhave said for my sub-query. \nI hoped a better decrease in cost for my main query. But yes it decreased\nbut not to a great extent.\nWhat I felt is to provide the main query and the associated table\ndefinitions in the query. Please help me to tune the following big query. 
\nselect res.id id,\n row_number() OVER () as sno,\n res.header_id,\n res.emp_id,\n res.alias alias,\n res.name as name,\n res.billed_hrs billed_hrs,\n res.unbilled_hrs unbilled_hrs,\n res.paid_time_off paid_time_off,\n res.unpaid_leave unpaid_leave,\n res.breavement_time breavement_time,\n res.leave leave,\n res.state,\n count(*) OVER() AS full_count,\n res.header_emp_id,\n res.header_status\n from (\n select \n history.id as id,\n 0 as header_id,\n '0' as emp_id,\n row_number() OVER () as sno,\n user1.alias_id as alias,\n partner.name as name,\n ( select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nwork_order_no != 'CORPORATE') billed_hrs,\n\t\t\t\t\t\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'unbillable_time') as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'paid_time_off') as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'unpaid_leave') as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'bereavement_time') as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and date\n>='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n (case when tl_status.state = '' then 'Waiting for approval'\nelse tl_status.state end) as state,\n header.res_employee_id as header_emp_id,\n status.name as header_status \n from tms_workflow_history history, \n res_users users,\n res_users user1,\n res_partner partner,\n tms_timesheet_status status,\n tms_timesheet_header header\n left join tms_workflow_history tl_status on\ntl_status.timesheet_id=header.id\n and\ntl_status.active=True\n and\ntl_status.group_id=13\n \n where \n history.timesheet_id=header.id\n and header.res_employee_id=user1.res_employee_id\n and status.id=header.status_id\n and history.user_id=users.id\n and user1.partner_id=partner.id\n and header.timesheet_period_id = 127\n and (history.state = 'Approved' )\n and history.current_activity='N'\n and history.is_final_approver=True \n and history.active = True\n union \n select \n 0 as id,\n header.id as header_id,\n '0' as emp_id,\n 0 as sno,\n users.alias_id as alias,\n partner.name as name,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where work_order_no != 'CORPORATE' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) billed_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unbillable_time' and\nres_employee_id=users.res_employee_id and date in (select 
date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'paid_time_off' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unpaid_leave' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'bereavement_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where res_employee_id=users.res_employee_id\nand date >='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n 'Not Submitted' state,\n header.res_employee_id as header_emp_id,\n 'Not Submitted' as header_status \n from res_users users,\n res_partner partner,\n tms_timesheet_status status,\n tms_timesheet_header header \n where \n header.res_employee_id=users.res_employee_id\n and status.id=header.status_id\n and users.partner_id=partner.id\n and status.name='Draft'\n and header.timesheet_period_id=127\n and header.res_employee_id in (some ids) \n union \n select\n 0 as id,\n 0 as header_id,\n users.res_employee_id as emp_id,\n 0 as sno,\n users.alias_id as alias,\n partner.name as name,\n 0 as billed_hrs,\n 0 as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'paid_time_off' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unpaid_leave' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'bereavement_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select 
end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where res_employee_id=users.res_employee_id\nand date >='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n 'Not Submitted' state,\n users.res_employee_id as header_emp_id,\n 'Not Submitted' as header_status\n \n from res_users users,\n res_partner partner\n \n where users.res_employee_id not in (select res_employee_id\n from \n tms_timesheet_header\n where \n timesheet_period_id=127\n and res_employee_id in\n('A1','B1','C2323',--some 2000 id's))\n and users.partner_id=partner.id\n and users.res_employee_id is not null\n and users.res_employee_id in ('A1','B1','C2323',--some 2000\nid's)\n order by name ) res order by name limit 10 offset 0\n\nNote: As it is a big query posted only a meaningful part. There 5 unions of\nsimilar type and same are the tables involved in the entire query.\n\nSample query plan: \nLimit (cost=92129.35..92129.63 rows=10 width=248)\n -> WindowAgg (cost=92129.35..92138.46 rows=331 width=248)\n -> Subquery Scan on res (cost=92129.35..92133.49 rows=331\nwidth=248)\n -> Sort (cost=92129.35..92130.18 rows=331 width=33)\n Sort Key: partner.name\n -> HashAggregate (cost=92112.19..92115.50 rows=331\nwidth=33)\n ->* Append (cost=340.02..92099.78 rows=331\nwidth=33)*\n -> WindowAgg (cost=340.02..1591.76 rows=1\nwidth=54)\n \n\n(396 rows)\nProblem started with append in the plan.\n\nPlease help me tune this query!!!!\n\nThanks in Advance.\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 22 May 2018 03:32:48 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi all, \nThank you so much for your valuable responses.Tried every aspect which you\nhave said for my sub-query. \nI hoped a better decrease in cost for my main query. But yes it decreased\nbut not to a great extent.\nWhat I felt is to provide the main query and the associated table\ndefinitions in the query. Please help me to tune the following big query. 
\nselect res.id id,\n row_number() OVER () as sno,\n res.header_id,\n res.emp_id,\n res.alias alias,\n res.name as name,\n res.billed_hrs billed_hrs,\n res.unbilled_hrs unbilled_hrs,\n res.paid_time_off paid_time_off,\n res.unpaid_leave unpaid_leave,\n res.breavement_time breavement_time,\n res.leave leave,\n res.state,\n count(*) OVER() AS full_count,\n res.header_emp_id,\n res.header_status\n from (\n select \n history.id as id,\n 0 as header_id,\n '0' as emp_id,\n row_number() OVER () as sno,\n user1.alias_id as alias,\n partner.name as name,\n ( select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nwork_order_no != 'CORPORATE') billed_hrs,\n\t\t\t\t\t\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'unbillable_time') as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'paid_time_off') as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'unpaid_leave') as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and\nrelease_no = 'bereavement_time') as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where timesheet_header_id=header.id and date\n>='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n (case when tl_status.state = '' then 'Waiting for approval'\nelse tl_status.state end) as state,\n header.res_employee_id as header_emp_id,\n status.name as header_status \n from tms_workflow_history history, \n res_users users,\n res_users user1,\n res_partner partner,\n tms_timesheet_status status,\n tms_timesheet_header header\n left join tms_workflow_history tl_status on\ntl_status.timesheet_id=header.id\n and\ntl_status.active=True\n and\ntl_status.group_id=13\n \n where \n history.timesheet_id=header.id\n and header.res_employee_id=user1.res_employee_id\n and status.id=header.status_id\n and history.user_id=users.id\n and user1.partner_id=partner.id\n and header.timesheet_period_id = 127\n and (history.state = 'Approved' )\n and history.current_activity='N'\n and history.is_final_approver=True \n and history.active = True\n union \n select \n 0 as id,\n header.id as header_id,\n '0' as emp_id,\n 0 as sno,\n users.alias_id as alias,\n partner.name as name,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where work_order_no != 'CORPORATE' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) billed_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unbillable_time' and\nres_employee_id=users.res_employee_id and date in (select 
date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'paid_time_off' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unpaid_leave' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'bereavement_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where res_employee_id=users.res_employee_id\nand date >='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n 'Not Submitted' state,\n header.res_employee_id as header_emp_id,\n 'Not Submitted' as header_status \n from res_users users,\n res_partner partner,\n tms_timesheet_status status,\n tms_timesheet_header header \n where \n header.res_employee_id=users.res_employee_id\n and status.id=header.status_id\n and users.partner_id=partner.id\n and status.name='Draft'\n and header.timesheet_period_id=127\n and header.res_employee_id in (some ids) \n union \n select\n 0 as id,\n 0 as header_id,\n users.res_employee_id as emp_id,\n 0 as sno,\n users.alias_id as alias,\n partner.name as name,\n 0 as billed_hrs,\n 0 as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'paid_time_off' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unpaid_leave' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'bereavement_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select 
end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where res_employee_id=users.res_employee_id\nand date >='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n 'Not Submitted' state,\n users.res_employee_id as header_emp_id,\n 'Not Submitted' as header_status\n \n from res_users users,\n res_partner partner\n \n where users.res_employee_id not in (select res_employee_id\n from \n tms_timesheet_header\n where \n timesheet_period_id=127\n and res_employee_id in\n('A1','B1','C2323',--some 2000 id's))\n and users.partner_id=partner.id\n and users.res_employee_id is not null\n and users.res_employee_id in ('A1','B1','C2323',--some 2000\nid's)\n order by name ) res order by name limit 10 offset 0\n\nNote: As it is a big query posted only a meaningful part. There 5 unions of\nsimilar type and same are the tables involved in the entire query.\n\nSample query plan: \nLimit (cost=92129.35..92129.63 rows=10 width=248)\n -> WindowAgg (cost=92129.35..92138.46 rows=331 width=248)\n -> Subquery Scan on res (cost=92129.35..92133.49 rows=331\nwidth=248)\n -> Sort (cost=92129.35..92130.18 rows=331 width=33)\n Sort Key: partner.name\n -> HashAggregate (cost=92112.19..92115.50 rows=331\nwidth=33)\n ->* Append (cost=340.02..92099.78 rows=331\nwidth=33)*\n -> WindowAgg (cost=340.02..1591.76 rows=1\nwidth=54)\n \n\n(396 rows)\nProblem started with append in the plan.\n\nPlease help me tune this query!!!!\n\nThanks in Advance.\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 22 May 2018 03:32:59 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "On Tue, May 22, 2018 at 03:32:59AM -0700, pavan95 wrote:\n> Sample query plan: \n> Limit (cost=92129.35..92129.63 rows=10 width=248)\n\nWould you send the output of explain(analyze,buffers) for the whole query ?\nAnd/or paste it into explain.depesz site and send a link.\n\nJustin\n\n", "msg_date": "Tue, 22 May 2018 05:39:06 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Justin,\n\nPlease find the output of explain(analyze,buffers) for the whole query in\nthe below link.\n\nLink: https://explain.depesz.com/s/dNkb <https://explain.depesz.com/s/dNkb> \n\nThanks in Advance!\n\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 22 May 2018 03:51:44 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" 
}, { "msg_contents": "On Tue, May 22, 2018 at 03:51:44AM -0700, pavan95 wrote:\n> Please find the output of explain(analyze,buffers) for the whole query in\n> the below link.\n\n> Seq Scan on res_users users (cost=750.92..1,836.69 rows=249 width=15) (actual time=3.962..17.544 rows=67 loops=1) \n\nNot sure but would you try creating an index on:\nres_users.res_employee_id\n\n> Seq Scan on res_users user1 (cost=0.00..58.03 rows=1,303 width=15) (actual time=0.002..0.002 rows=1 loops=1)\n\nAlso the planner's estimate for table:res_users is off by 1300x..so you should\nprobably vacuum analyze it then recheck. I don't think we know what version\npostgres you have, but last week's patch releases include a fix which may be\nrelevant (reltuples including dead tuples).\n\nAlso I don't know the definition of this table or its indices:\ntms_workflow_history\n\n..but it looks like an additional or modified index or maybe clustering the\ntable on existing index might help (active? is_final_approver?)\nOr maybe this should be 3 separate indices rather than composite index?\nPerhaps some of those could be BRIN indices, depending on postgres version\n\nJustin\n\n", "msg_date": "Tue, 22 May 2018 13:23:07 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Thanks a lot! I will have a look\n\nOn Tue, May 22, 2018, 11:53 PM Justin Pryzby <[email protected]> wrote:\n\n> On Tue, May 22, 2018 at 03:51:44AM -0700, pavan95 wrote:\n> > Please find the output of explain(analyze,buffers) for the whole query in\n> > the below link.\n>\n> > Seq Scan on res_users users (cost=750.92..1,836.69 rows=249 width=15)\n> (actual time=3.962..17.544 rows=67 loops=1)\n>\n> Not sure but would you try creating an index on:\n> res_users.res_employee_id\n>\n> > Seq Scan on res_users user1 (cost=0.00..58.03 rows=1,303 width=15)\n> (actual time=0.002..0.002 rows=1 loops=1)\n>\n> Also the planner's estimate for table:res_users is off by 1300x..so you\n> should\n> probably vacuum analyze it then recheck. I don't think we know what\n> version\n> postgres you have, but last week's patch releases include a fix which may\n> be\n> relevant (reltuples including dead tuples).\n>\n> Also I don't know the definition of this table or its indices:\n> tms_workflow_history\n>\n> ..but it looks like an additional or modified index or maybe clustering the\n> table on existing index might help (active? is_final_approver?)\n> Or maybe this should be 3 separate indices rather than composite index?\n> Perhaps some of those could be BRIN indices, depending on postgres version\n>\n> Justin\n>\n\nThanks a lot!  I will have a lookOn Tue, May 22, 2018, 11:53 PM Justin Pryzby <[email protected]> wrote:On Tue, May 22, 2018 at 03:51:44AM -0700, pavan95 wrote:\n> Please find the output of explain(analyze,buffers) for the whole query in\n> the below link.\n\n> Seq Scan on res_users users (cost=750.92..1,836.69 rows=249 width=15) (actual time=3.962..17.544 rows=67 loops=1) \n\nNot sure but would you try creating an index on:\nres_users.res_employee_id\n\n> Seq Scan on res_users user1 (cost=0.00..58.03 rows=1,303 width=15) (actual time=0.002..0.002 rows=1 loops=1)\n\nAlso the planner's estimate for table:res_users is off by 1300x..so you should\nprobably vacuum analyze it then recheck.  
I don't think we know what version\npostgres you have, but last week's patch releases include a fix which may be\nrelevant (reltuples including dead tuples).\n\nAlso I don't know the definition of this table or its indices:\ntms_workflow_history\n\n..but it looks like an additional or modified index or maybe clustering the\ntable on existing index might help (active? is_final_approver?)\nOr maybe this should be 3 separate indices rather than composite index?\nPerhaps some of those could be BRIN indices, depending on postgres version\n\nJustin", "msg_date": "Tue, 22 May 2018 23:55:39 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi all/Justin,\n\nAs said, created index on the res_users.res_employee_id and the below link\nis the explain plan result.\n\nLink: https://explain.depesz.com/s/hoct <http://> .\n\nAnd the cost of Previous query is 92,129 and the cost of current modified\nquery after creating the above said index is 91,462. But good thing is we\ncan see a very small improvement..!. \n\nPlease find the table definitions which are used in the query(which you\nasked for tms_worflow_history).\n\n1. tms_timesheet_details:\n\namp_test=# \\d tms_timesheet_details\n Table\n\"public.tms_timesheet_details\"\n Column | Type | \nModifiers\n---------------------+-----------------------------+--------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_timesheet_details_id_seq'::regclass)\n status | character varying |\n create_uid | integer |\n effort_hours | double precision |\n work_order_no | character varying |\n res_employee_id | character varying |\n wsr_header_id | integer |\n remarks | character varying |\n write_date | timestamp without time zone |\n timesheet_header_id | integer |\n date | date |\n create_date | timestamp without time zone |\n write_uid | integer |\n release_no | character varying |\n project_id | character varying |\n loc_name | character varying |\n user_id | integer |\n ao_emp_id | character varying |\nIndexes:\n \"tms_timesheet_details_pkey\" PRIMARY KEY, btree (id)\n \"tms_timesheet_details_uniq_res_employee_id_efforts\" UNIQUE, btree\n(res_employee_id, work_order_no, release_no, date, project_id)\n \"timesheet_detail_inx\" btree (wsr_header_id, timesheet_header_id)\n \"tms_timesheet_details_all_idx\" btree (wsr_header_id, work_order_no,\nrelease_no, date, effort_hours)\n \"tms_timesheet_details_id_idx\" btree (id) WHERE wsr_header_id IS NOT\nNULL\n \"ts_detail_date_idx\" btree (date)\n \"ts_detail_hdr_id_idx\" btree (timesheet_header_id)\n \"ts_detail_release_no_idx\" btree (release_no)\n \"work_order_no_idx\" btree (work_order_no)\n\n\n2. 
tms_workflow_history:\n\namp_test=# \\d tms_workflow_history\n Table \"public.tms_workflow_history\"\n Column | Type | \nModifiers\n-------------------+-----------------------------+-------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_workflow_history_id_seq'::regclass)\n create_uid | integer |\n current_activity | character varying |\n user_id | integer |\n sequence | integer |\n is_final_approver | boolean |\n wsr_id | integer |\n write_uid | integer |\n timesheet_id | integer |\n state | character varying |\n write_date | timestamp without time zone |\n remarks | character varying |\n create_date | timestamp without time zone |\n group_id | integer |\n active | boolean |\nIndexes:\n \"tms_workflow_history_pkey\" PRIMARY KEY, btree (id)\n \"curract_state_isfinal_app_idx\" btree (current_activity, state,\nis_final_approver)\n \"timesheet_id_group_id_active_idx\" btree (timesheet_id, group_id,\nactive)\n \"tms_wkf_his_active_is_final_approveridx\" btree (active,\nis_final_approver)\n \"tms_wkf_his_group_id_idx\" btree (group_id)\n \"tms_wkf_his_timesheet_id_idx\" btree (timesheet_id)\n \"tms_wkf_hist_current_activity_idx\" btree (current_activity)\n \"tms_wkf_hist_state_idx\" btree (state)\n \"wsr_id_idx\" btree (wsr_id)\n\n3. res_users:\n\n Table \"public.res_users\"\n Column | Type | \nModifiers\n-------------------+-----------------------------+--------------------------------------------------------\n id | integer | not null default\nnextval('res_users_id_seq'::regclass)\n active | boolean | default true\n login | character varying | not null\n password | character varying |\n company_id | integer | not null\n partner_id | integer | not null\n create_date | timestamp without time zone |\n share | boolean |\n write_uid | integer |\n create_uid | integer |\n action_id | integer |\n write_date | timestamp without time zone |\n signature | text |\n password_crypt | character varying |\n res_employee_name | character varying |\n res_employee_id | character varying |\n role | character varying |\n skills | character varying |\n holiday_header_id | integer |\n alias_id | character varying |\n loc_name | character varying |\nIndexes:\n \"res_users_pkey\" PRIMARY KEY, btree (id)\n \"res_users_login_key\" UNIQUE, btree (login)\n \"res_users_res_employee_id_idx\" btree (res_employee_id)\n\n4. 
res_partner:\n\namp_test=# \\d res_partner\n Table \"public.res_partner\"\n Column | Type | \nModifiers\n-------------------------+-----------------------------+----------------------------------------------------------\n id | integer | not null default\nnextval('res_partner_id_seq'::regclass)\n name | character varying |\n company_id | integer |\n comment | text |\n website | character varying |\n create_date | timestamp without time zone |\n color | integer |\n active | boolean |\n street | character varying |\n supplier | boolean |\n city | character varying |\n display_name | character varying |\n zip | character varying |\n title | integer |\n country_id | integer |\n commercial_company_name | character varying |\n parent_id | integer |\n company_name | character varying |\n employee | boolean |\n ref | character varying |\n email | character varying |\n is_company | boolean |\n function | character varying |\n lang | character varying |\n fax | character varying |\n street2 | character varying |\n barcode | character varying |\n phone | character varying |\n write_date | timestamp without time zone |\n date | date |\n tz | character varying |\n write_uid | integer |\n customer | boolean |\n create_uid | integer |\n credit_limit | double precision |\n user_id | integer |\n mobile | character varying |\n type | character varying |\n partner_share | boolean |\n vat | character varying |\n state_id | integer |\n commercial_partner_id | integer |\nIndexes:\n \"res_partner_pkey\" PRIMARY KEY, btree (id)\n \"res_partner_commercial_partner_id_index\" btree (commercial_partner_id)\n \"res_partner_company_id_index\" btree (company_id)\n \"res_partner_date_index\" btree (date)\n \"res_partner_display_name_index\" btree (display_name)\n \"res_partner_name_index\" btree (name)\n \"res_partner_parent_id_index\" btree (parent_id)\n \"res_partner_ref_index\" btree (ref)\nCheck constraints:\n \"res_partner_check_name\" CHECK (type::text = 'contact'::text AND name IS\nNOT NULL OR type::text <> 'contact'::text)\n\n5. tms_timesheet_status\n\namp_test=# \\d tms_timesheet_status\n Table \"public.tms_timesheet_status\"\n Column | Type | \nModifiers\n-------------+-----------------------------+-------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_timesheet_status_id_seq'::regclass)\n status | character varying |\n create_uid | integer |\n description | text |\n sequence | integer |\n write_uid | integer |\n write_date | timestamp without time zone |\n create_date | timestamp without time zone |\n name | character varying |\nIndexes:\n \"tms_timesheet_status_pkey\" PRIMARY KEY, btree (id)\n\n6. 
tms_timesheet_header:\n\n Table\n\"public.tms_timesheet_header\"\n Column | Type | \nModifiers\n---------------------+-----------------------------+-------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_timesheet_header_id_seq'::regclass)\n create_uid | integer |\n status_id | integer |\n ao_emp_name | character varying |\n ao_emp_id | character varying |\n over | double precision |\n res_employee_id | character varying |\n regular_pay_hours | double precision |\n write_uid | integer |\n comments | text |\n write_date | timestamp without time zone |\n under | double precision |\n create_date | timestamp without time zone |\n timesheet_period_id | integer |\n user_id | integer |\nIndexes:\n \"tms_timesheet_header_pkey\" PRIMARY KEY, btree (id)\n \"tms_timesheet_header_uniq_tms_emp_status\" UNIQUE, btree\n(res_employee_id, timesheet_period_id)\n\n\n7. tms_timesheet_period:\n\n Table \"public.tms_timesheet_period\"\n Column | Type | \nModifiers\n-------------------+-----------------------------+-------------------------------------------------------------------\n id | integer | not null default\nnextval('tms_timesheet_period_id_seq'::regclass)\n status | character varying |\n create_uid | integer |\n auto_approve_date | timestamp without time zone |\n name | character varying |\n end_date | date |\n auto_submit_date | timestamp without time zone |\n period_type | character varying |\n write_date | timestamp without time zone |\n payhours | integer |\n remarks | text |\n create_date | timestamp without time zone |\n write_uid | integer |\n start_date | date |\nIndexes:\n \"tms_timesheet_period_pkey\" PRIMARY KEY, btree (id)\n\nNote: Due to space constraint I'm unable to mention the foreign key\nconstraints and referenced by for the tables(thinking it is not required)\n\nI have also observed that based on the composite indexes on the columns of\ntms_workflow_history table the cost came to 91,462 orelse because of\nindividual indexes it remains unaltered from 92,129. \n\nI want to reduce the query cost. As observed in the plan a Subquery Scan is\ntaking around 45000 planner seeks at one place and 38000 planner seeks. Is\nthere any way to reduce this cost ? \n\nOr any other measures to be followed. My current postgresql version is 9.5.\nThanks in Advance!\n\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 23 May 2018 00:01:06 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Pavan,\nthat's quite a big query. I can see that the generate_series function is\ngetting repeatedly called and the planner estimates for this sub query are\nout by a factor of 66. You might try to re-write using a WITH query. I am\nassuming that you have already analyzed all the tables and also added\nappropriate indexes on join/query columns.\nRegards\nMatthew\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 23 May 2018 04:12:10 -0700 (MST)", "msg_from": "mlunnon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Matthew,\n\nYeah and you said right!. 
I have analyzed the entire database and also\ncreated appropriate indexes for the columns used in WHERE/JOIN clauses.\n\nOkay I will just provide the fourth union part of the query which you can\nanalyze easier(this not that big).\n\nPlease find the query part. And refer to the table definitions in my\nprevious posts.\nQuery:\n\nselect \n 0 as id,\n header.id as header_id,\n '0' as emp_id,\n 0 as sno,\n users.alias_id as alias,\n partner.name as name,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where work_order_no != 'CORPORATE' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) billed_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unbillable_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unbilled_hrs,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'paid_time_off' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as paid_time_off,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'unpaid_leave' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as unpaid_leave,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where release_no = 'bereavement_time' and\nres_employee_id=users.res_employee_id and date in (select date::date from\ngenerate_series((select start_date from tms_timesheet_period where\nid=127),(select end_date from tms_timesheet_period where id=127),'1\nday'::interval) date)) as breavement_time,\n (select SUM( floor(effort_hours) + ( (effort_hours -\nfloor(effort_hours)) / 0.6 ))\n\t\t\t\t\tfrom tms_timesheet_details where res_employee_id=users.res_employee_id\nand date >='2018-04-16' and date <='2018-04-30' and release_no in\n('sick_leave','casual_leave','privilege_leave','optional_holiday') ) as\nleave,\n 'Not Submitted' state,\n header.res_employee_id as header_emp_id,\n 'Not Submitted' as header_status\n \n from res_users users,\n res_partner partner,\n tms_timesheet_status status,\n tms_timesheet_header header\n \n where \n header.res_employee_id=users.res_employee_id\n and status.id=header.status_id\n and users.partner_id=partner.id\n and status.name='Draft'\n and header.timesheet_period_id=127\n and header.res_employee_id in ('14145', '14147',\n'ON-14148', '11331', '11332', '11333', 'ON-11334', '65432', '65416',\n'54643', '23266', '4681', '56464', '64649', '89564', '98798', '13333',\n'44466', '87852', '65464', '65464', '44655', '8201', '65465', 'ON-78785',\n'13233', 'ON-5544', 
'ON-54654', '23131', '98765', '25134', '13218', '84645',\n'4687', '6546', '4988', '89796', '79878', '7198', '15726', '2132', '5310',\n'13056', '4446', '16825', '16740', '3912', '19601', '13200', '12981',\n'ON-3332', '13166', 'ON-3144', 'ON-1251', 'ON-2799', 'ON-2338', '7286',\n'ON-2381', 'ON-3102', 'ON-2938', '64782', '5407', '54641', '46379',\n'G151151', '5007', '6011', '5050', '20869', '20204', '12410', '10488',\n'14582', '13574', '12982', '7884', '7788', '13417', '7922', '16744',\n'16746', '16756', '8292', '16745', '19989', '8297', '5020', '14184',\n'17161', '20767', '20753', '20289', '19979', '19975', '20272', '4292',\n'G9341010', '14791', '5121', 'ON-1767', 'ON-581', 'ON-700', 'ON-437',\n'ON-562', 'ON-1726', 'OFF-1060', 'ON-147', 'OFF-612', 'OFF-635', 'OFF-857',\n'ON-900280', 'ON-1934', 'ON-1922', 'ON-2258', 'OFF-2537', 'ON-2872',\n'ON-2450', 'ON-2265', 'OFF-2900', 'ON-2551', 'ON-1867', 'ON-2086',\n'ON-2348', 'OFF-2706', 'ON-2244', 'ON-2134', 'ON-2654', 'ON-2346',\n'ON-1984', 'ON-1243', 'OFF-1266', 'ON-1276', 'ON-2452', 'ON-2179',\n'ON-2931', 'ON-2164', 'ON-2468', 'ON-1473', 'ON-1481', 'ON-1521', 'ON-2455',\n'ON-2104', 'ON-2295', 'ON-1540', 'ON-900501', 'ON-1351', 'OFF-1364',\n'ON-2704', 'ON-1757', 'ON-1690', 'ON-1670', 'ON-1671', 'ON-1689', 'ON-1704',\n'ON-1714', 'ON-1655', 'ON-1709', 'ON-1737', 'ON-1725', 'ON-1750', 'ON-1731',\n'ON-1715', 'ON-1745', 'ON-1751', 'ON-2191', 'OFF-2686', 'ON-1815',\n'ON-2052', 'ON-2019', 'ON-1820', 'ON-1717', 'ON-1713', 'ON-1661',\n'OFF-1664', 'ON-1703', 'ON-1734', 'ON-1735', 'ON-1656', 'ON-1705',\n'ON-1733', 'ON-1708', 'ON-1666', 'ON-1667', 'ON-1658', 'ON-900487',\n'ON-900214', 'ON-1676', 'ON-2378', 'ON-1654', 'ON-2417', 'ON-1488',\n'ON-1500', 'ON-1506', 'ON-2875', 'ON-1531', 'ON-2099', 'ON-2195', 'ON-2038',\n'ON-1490', 'ON-1489', 'ON-1501', 'ON-1627', 'ON-1929', 'ON-900431',\n'ON-1462', 'ON-1466', 'OFF-1468', 'ON-1420', 'ON-1479', 'ON-900543',\n'ON-1485', 'ON-1493', 'ON-2347', 'ON-1499', 'ON-2324', 'ON-2733', 'ON-1736',\n'ON-1720', 'ON-1674', 'ON-1849', 'ON-1836', 'ON-1846', 'ON-2140',\n'OFF-2856', 'ON-2128', 'OFF-2524', 'ON-1845', 'ON-2336', 'ON-1945',\n'ON-2008', 'ON-1900', 'ON-2117', 'ON-1837', 'ON-2199', 'ON-2200', 'ON-1821',\n'ON-2060', 'ON-1804', 'ON-1803', 'ON-2364', 'ON-2068', 'ON-2474', 'ON-1895',\n'ON-1838', 'ON-2024', 'ON-2653', 'ON-1621', 'OFF-1145', 'OFF-994',\n'OFF-999', 'ON-1003', 'ON-812', 'OFF-1033', 'ON-1048', 'OFF-1058',\n'ON-1053', 'ON-1071', 'ON-1088', 'ON-256', 'ON-207', 'ON-206', 'ON-184',\n'OFF-268', 'ON-285', 'OFF-286', 'ON-649', 'ON-301', 'OFF-645', 'ON-338',\n'OFF-323', 'ON-347', 'ON-351', 'ON-350', 'ON-354', 'ON-719', 'ON-723',\n'ON-137', 'ON-112', 'ON-141', 'ON-752', 'ON-791', 'OFF-802', 'OFF-822',\n'ON-573', 'ON-616', 'OFF-587', 'ON-641', 'ON-664', 'ON-336', 'OFF-676',\n'ON-687', 'ON-695', 'ON-439', 'ON-406', 'ON-659', 'OFF-890', 'ON-900',\n'ON-935', 'ON-228', 'ON-942', 'ON-954', 'OFF-957', 'ON-961', 'ON-830',\n'OFF-966', 'OFF-969', 'OFF-951', 'ON-1043', 'OFF-1042', 'ON-1055',\n'ON-1109', 'ON-2212', 'ON-2036', 'OFF-1221', 'ON-1238', 'ON-1331',\n'OFF-1353', 'ON-1343', 'ON-2014', 'ON-1995', 'ON-2133', 'OFF-2189',\n'ON-1581', 'OFF-1595', 'ON-1556', 'ON-1580', 'OFF-1591', 'ON-2437',\n'ON-900466', 'ON-1611', 'OFF-1612', 'ON-1624', 'ON-2765', 'ON-1927',\n'ON-2361', 'ON-2054', 'ON-1633', 'ON-1503', 'OFF-2546', 'ON-1512',\n'ON-1536', 'ON-2543', 'ON-2558', 'ON-2237', 'ON-1535', 'ON-2436',\n'OFF-1547', 'ON-2380', 'ON-2116', 'ON-2820', 'ON-1563', 'ON-900512',\n'ON-1568', 'ON-1570', 'ON-900514', 'ON-1130', 'ON-1632', 
'ON-2359',\n'ON-3176', 'ON-2132', 'ON-2012', 'ON-1762', 'ON-900230', 'ON-2299',\n'ON-3552', 'ON-2557', 'ON-2129', 'ON-1918', 'OFF-2552', 'ON-2235',\n'OFF-2773', 'ON-2123', 'ON-2658', 'ON-1866', 'ON-2506', 'OFF-2703',\n'ON-2882', 'ON-2649', 'ON-2997', 'ON-1925', 'OFF-3096', 'ON-3297',\n'ON-3359', 'ON-3352', 'ON-3357', 'ON-3378', 'ON-3071', 'OFF-2702',\n'ON-2801', 'ON-2689', 'ON-2416', 'ON-3305', 'OFF-2695', 'ON-2069',\n'ON-3318', 'OFF-3681', 'ON-1541', 'ON-2248', 'ON-2249', 'ON-2250',\n'ON-2259', 'ON-2280', 'ON-3345', 'OFF-3545', 'ON-2286', 'ON-2293',\n'ON-2277', 'ON-1180', 'ON-2304', 'OFF-3575', 'OFF-2384', 'OFF-2513',\n'ON-2444', 'OFF-3218', 'ON-2497', 'ON-2708', 'ON-2774', 'ON-2667',\n'ON-2803', 'OFF-3044', 'ON-2290', 'ON-2791', 'ON-2810', 'ON-2767',\n'ON-2415', 'ON-2489', 'ON-2180', 'ON-2131', 'ON-2207', 'ON-2233', 'ON-3045',\n'ON-3675', 'ON-2260', 'ON-2700', 'ON-2418', 'ON-2924', 'OFF-2828',\n'ON-2536', 'ON-3127', 'ON-2472', 'ON-2482', 'ON-3098', 'ON-2473', 'ON-3073',\n'ON-2855', 'OFF-2709', 'ON-2789', 'ON-2589', 'ON-2409', 'ON-3455',\n'OFF-3556', 'ON-2510', 'ON-3120', 'ON-2457', 'ON-2303', 'ON-2044',\n'ON-2313', 'ON-2326', 'ON-2312', 'OFF-2391', 'ON-2438', 'OFF-3548',\n'ON-2581', 'ON-2525', 'ON-2538', 'ON-2433', 'ON-3300', 'ON-2487', 'ON-2754',\n'OFF-3049', 'ON-2370', 'ON-3151', 'ON-3100', 'ON-3101', 'ON-1044',\n'ON-2431', 'ON-2371', 'ON-2714', 'OFF-3544', 'OFF-2388', 'ON-2790',\n'OFF-2918', 'ON-2681', 'ON-2512', 'ON-2511', 'ON-2521', 'OFF-2539',\n'ON-3551', 'OFF-3549', 'OFF-3462', 'ON-2745', 'ON-2778', 'OFF-2821',\n'ON-900498', 'ON-2812', 'OFF-2955', 'ON-2840', 'ON-2847', 'ON-3309',\n'OFF-2917', 'OFF-2857', 'ON-2795', 'ON-2793', 'ON-2796', 'ON-2873',\n'ON-2874', 'OFF-2870', 'ON-2889', 'ON-2719', 'ON-2824', 'ON-2861',\n'ON-2865', 'ON-2866', 'OFF-2826', 'OFF-2898', 'ON-3301', 'OFF-2961',\n'ON-2878', 'OFF-2886', 'ON-2914', 'ON-2909', 'OFF-2906', 'ON-2922',\n'OFF-3682', 'ON-2937', 'ON-2913', 'OFF-2916', 'ON-2923', 'OFF-3006',\n'OFF-3046', 'OFF-3042', 'OFF-3050', 'OFF-2642', 'ON-3093', 'ON-2685',\n'OFF-3112', 'ON-3576', 'OFF-3094', 'OFF-3126', 'ON-3129', 'ON-3152',\n'ON-3153', 'ON-3171', 'ON-3177', 'ON-3217', 'ON-2617', 'ON-3654', 'ON-3677',\n'ON-1817', 'ON-3684', 'ON-3686', 'ON-3685', 'ON-3278', 'ON-3317', 'ON-3316',\n'ON-3325', 'ON-3349', 'ON-3351', 'ON-3391', 'ON-3398', 'ON-3451', 'ON-3414',\n'ON-3452', 'ON-3412', 'ON-3453', 'ON-3417', 'OFF-3473', 'ON-3457',\n'ON-3523', 'ON-3546', 'ON-3554', 'ON-3553', 'ON-900552', 'G12941370',\n'6479', '14192', '87546', '19755', '16751', '2095', '12244', '12363',\n'17510', '19935', '7973', '13189', '19733', '19928', '21124', '16725',\n'7244', '3027', '11426', '12732', '8530', '10301', '19555', '19706',\n'20097', '13156', '14690', '4183', '8340', '18026', '12297', '6577',\n'11301', '12980', '18138', '5603', '17587', '19118', '12210', '7292',\n'17577', '16578', '7895', '200186', '20100', '34541', '19370', '11111',\n'1492', '1111', '2556', '3445643643', '20379', 'ON-2338P', '20899')\n\n\nAnd the explain plan for the above query can be found in the below link.\nLink: https://explain.depesz.com/s/y3J8 <http://> \n\nPlease help me tune this query or logic to rewrite at the painful area in\nthe query.\n\nThanks in Advance!\n\nRegards,\nPavan\n\n\n\n\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 23 May 2018 06:39:21 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is 
hitting production seriously!!" }, { "msg_contents": "On Wed, May 23, 2018 at 12:01:06AM -0700, pavan95 wrote:\n> As said, created index on the res_users.res_employee_id and the below link\n> is the explain plan result.\n> \n> Link: https://explain.depesz.com/s/hoct\n> \n> And the cost of Previous query is 92,129 and the cost of current modified\n> query after creating the above said index is 91,462. But good thing is we\n\nForget the cost - that's postgres *model* of the combined IO+CPU.\nIf the model is off, that's may cause bad plans and could be looked into\nfurther.\n\nIn any case, that index cut your runtime from 75sec to 60sec (in spite of the\nmodelled cost).\n\nIt looks like you resolved the bad estimate on the users table?\n\n> 2. tms_workflow_history:\n> Indexes:\n> \"tms_workflow_history_pkey\" PRIMARY KEY, btree (id)\n> \"curract_state_isfinal_app_idx\" btree (current_activity, state, is_final_approver)\n> \"timesheet_id_group_id_active_idx\" btree (timesheet_id, group_id, active)\n> \"tms_wkf_his_active_is_final_approveridx\" btree (active, is_final_approver)\n> \"tms_wkf_his_group_id_idx\" btree (group_id)\n> \"tms_wkf_his_timesheet_id_idx\" btree (timesheet_id)\n> \"tms_wkf_hist_current_activity_idx\" btree (current_activity)\n> \"tms_wkf_hist_state_idx\" btree (state)\n> \"wsr_id_idx\" btree (wsr_id)\n\nHow big is the table ? And curract_state_isfinal_app_idx ?\nHave these been reindexed (or pg_repacked) recently?\n\nIt seems to me that the remaining query optimization is to improve this:\n> Bitmap Heap Scan on tms_workflow_history history (cost=193.19..1,090.50 rows=6,041 width=12) (actual time=3.692..15.714 rows=11,351 loops=1)\n\nI think you could consider clustering (or repacking) the table on\ncurract_state_isfinal_app_idx (but you'll have to judge if that's okay and\nwon't negatively affect other queries).\n\nBut, what's your target runtime ? Improvements here could cut at most 15sec\noff the total 60sec. If you're hoping to save more than that, you'll need to\n(also) look further than the query:\n\n - postgres parameters: what are shared_buffers, work_mem, effective_cache_size ?\n + https://wiki.postgresql.org/wiki/Server_Configuration\n - are there other DBs/applications running on the server/VM ?\n - kernel tuning (readahead, VM parameters, write cache, scheduler, THP, etc)\n - server hardware (what OS? storage? RAM? filesystem?)\n - how does the storage perform outside of postgres?\n + something like this: /usr/sbin/bonnie++ -f -n0 -x4 -d /var/lib/pgsql\n\nJustin\n\n", "msg_date": "Wed, 23 May 2018 08:43:22 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Justin,\n\n>How big is the table ? And curract_state_isfinal_app_idx ? \n>Have these been reindexed (or pg_repacked) recently? \n\nThe size of the table 'tms_workflow_history' is 7600Kb(which is pretty\nsmall). Yes those indexes were dropped and recreated. \n\n>It looks like you resolved the bad estimate on the users table? \nYeah, even I think the same.\n\nPlease find the explain plan which got increased again vastly. Is this\nbecause of the increase in rows?\n\nLink : https://explain.depesz.com/s/Ifr <http://> \n\nThe above is the explain plan taken from production server. And this is the\nmain plan to tune.\n\nPlease let me know the where I'm going wrong. 
Thank you in Advance.!!\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 23 May 2018 07:03:18 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "On Wed, May 23, 2018 at 07:03:18AM -0700, pavan95 wrote:\n> Please find the explain plan which got increased again vastly. Is this\n> because of the increase in rows?\n> \n> Link : https://explain.depesz.com/s/Ifr <http://> \n\nThat's explain without \"analyze\", so not very useful.\n\nThere's handful of questions:\n\nOn Wed, May 23, 2018 at 08:43:22AM -0500, Justin Pryzby wrote:\n> - postgres parameters: what are shared_buffers, work_mem, effective_cache_size ?\n> + https://wiki.postgresql.org/wiki/Server_Configuration\n> - are there other DBs/applications running on the server/VM ?\n> - kernel tuning (readahead, VM parameters, write cache, scheduler, THP, etc)\n> - server hardware (what OS? storage? RAM? filesystem?)\n> - how does the storage perform outside of postgres?\n> + something like this: /usr/sbin/bonnie++ -f -n0 -x4 -d /var/lib/pgsql\n\nJustin\n\n", "msg_date": "Wed, 23 May 2018 15:01:32 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "Hi Justin,\n\nPlease find the below explain plan link.\n\nLink: https://explain.depesz.com/s/owE <http://> \n\n\nAny help is appreciated. Thanks in Advance.\n\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 23 May 2018 22:20:42 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" }, { "msg_contents": "On Wed, May 23, 2018 at 10:20:42PM -0700, pavan95 wrote:\n> Hi Justin,\n> \n> Please find the below explain plan link.\n> \n> Link: https://explain.depesz.com/s/owE <http://> \n\nThat's explain analyze but explain(analyze,buffers) is better.\n\nIs this on a completely different server than the previous plans ?\n\nThis rowcount misestimate appears to be a significant part of the problem:\n Merge Join (cost=228.77..992.11 ROWS=20 width=22) (actual time=4.353..12.439 ROWS=343 loops=1)\n Merge Cond: (history_2.timesheet_id = header_2.id)\n\nYou could look at the available stats for that table's column in pg_stats.\nIs there an \"most common values\" list?\nMaybe you need to ALTER TABLE .. 
SET STATISTICS 999 (or some increased value)\nand re-analyze ?\n\nYou can see these are also taking large component of the query time:\n\n Bitmap Index Scan on ts_detail_release_no_idx (cost=0.00..33.86 rows=1,259 width=0) (actual time=0.304..0.304 rows=1,331 LOOPS=327)\n Index Cond: ((release_no)::text = 'paid_time_off'::text)\n...\n Bitmap Index Scan on ts_detail_release_no_idx (cost=0.00..33.86 rows=1,259 width=0) (actual time=0.304..0.304 rows=1,331 LOOPS=343)\n Index Cond: ((release_no)::text = 'paid_time_off'::text)\n\nI wonder whether it would help to\nCREATE INDEX ON tms_timesheet_details(timesheet_header_id) WHERE\n((release_no)::text = 'paid_time_off'::text);\n\nIn addition to the other settings I asked about, it might be interesting to\nSHOW effective_io_concurrency;\n\nYou're at the point where I can't reasonably contribute much more.\n\nJustin\n\n", "msg_date": "Thu, 24 May 2018 01:06:40 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me in reducing the CPU cost for the high cost query below,\n as it is hitting production seriously!!" } ]
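A minimal sketch of the two suggestions made in the thread above, written against the table names from the posted schema. The statistics target of 999 is the value mentioned in the thread; the choice of column (timesheet_id, inferred from the misestimated merge condition) and the index name are assumptions, so check pg_stats before applying either statement:

    -- raise the per-column statistics target so the planner's row estimate
    -- for the merge join key improves, then refresh the statistics
    ALTER TABLE tms_workflow_history ALTER COLUMN timesheet_id SET STATISTICS 999;
    ANALYZE tms_workflow_history;

    -- partial index covering only the rows the repeated sub-selects read
    CREATE INDEX tms_timesheet_details_pto_hdr_idx
        ON tms_timesheet_details (timesheet_header_id)
        WHERE release_no = 'paid_time_off';

The partial index only pays off if rows with release_no = 'paid_time_off' are a small fraction of tms_timesheet_details; the same pattern could be repeated for the other release_no values the query filters on.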
[ { "msg_contents": "What would the list think of a web form for submitting problems the performance\nlist, similar to the pgsql-bugs form?\n\nAlternately, or perhaps additionally, a script (hopefully bundled with\npostgres) which collects at least the non-query specific info and probably\ncreates .logfile file for attachment.\n\nI assume fields would be mostly the content/questions off the SlowQuery wiki\npage, plus everything else asked with any frequency.\n\nThere could also be \"required\" fields..\n\nJustin\n\n", "msg_date": "Thu, 24 May 2018 18:57:13 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "propose web form for submission of performance problems" }, { "msg_contents": "On Thu, May 24, 2018 at 4:57 PM, Justin Pryzby <[email protected]> wrote:\n\n> What would the list think of a web form for submitting problems the\n> performance\n> list, similar to the pgsql-bugs form?\n>\n> Alternately, or perhaps additionally, a script (hopefully bundled with\n> postgres) which collects at least the non-query specific info and probably\n> creates .logfile file for attachment.\n>\n> I assume fields would be mostly the content/questions off the SlowQuery\n> wiki\n> page, plus everything else asked with any frequency.\n>\n> There could also be \"required\" fields..\n>\n\nIs there something wrong with this email group? I actually to like it. Low\ntech, but it works. If you think such a web site would be good, you might\nexplain the benefits over a simple email list like this one (which is\narchived and easily searchable via Google).\n\nHaving watched and participated in this mailing list since around 2006, my\nguess is that well over 95% of the \"problems\" are people who are learning,\nand just need advice for tuning their database and/or rewriting their\nqueries. It is rare that someone reports a genuine performance problem in\nPostgres.\n\nI like watching this mailing list; it gives a sense of the ebb and flow of\nproblem reports, and when one comes along that interests me, I can easily\nfollow it. And occasionally, I even answer questions. I probably wouldn't\ndo that with a bug-reporting type of web site.\n\nIt would also be bad to have both an email list and a separate web site.\nThat would simply split the community.\n\nCraig\n\n\n>\n> Justin\n>\n>\n\nOn Thu, May 24, 2018 at 4:57 PM, Justin Pryzby <[email protected]> wrote:What would the list think of a web form for submitting problems the performance\nlist, similar to the pgsql-bugs form?\n\nAlternately, or perhaps additionally, a script (hopefully bundled with\npostgres) which collects at least the non-query specific info and probably\ncreates .logfile file for attachment.\n\nI assume fields would be mostly the content/questions off the SlowQuery wiki\npage, plus everything else asked with any frequency.\n\nThere could also be \"required\" fields..Is there something wrong with this email group? I actually to like it. Low tech, but it works. If you think such a web site would be good, you might explain the benefits over a simple email list like this one (which is archived and easily searchable via Google).Having watched and participated in this mailing list since around 2006, my guess is that well over 95% of the \"problems\" are people who are learning, and just need advice for tuning their database and/or rewriting their queries. 
It is rare that someone reports a genuine performance problem in Postgres.I like watching this mailing list; it gives a sense of the ebb and flow of problem reports, and when one comes along that interests me, I can easily follow it. And occasionally, I even answer questions. I probably wouldn't do that with a bug-reporting type of web site.It would also be bad to have both an email list and a separate web site. That would simply split the community.Craig \n\nJustin", "msg_date": "Thu, 24 May 2018 18:27:31 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: propose web form for submission of performance problems" }, { "msg_contents": "On Thu, May 24, 2018 at 06:27:31PM -0700, Craig James wrote:\n> On Thu, May 24, 2018 at 4:57 PM, Justin Pryzby <[email protected]> wrote:\n> \n> > What would the list think of a web form for submitting problems the\n> > performance\n> > list, similar to the pgsql-bugs form?\n> >\n> > Alternately, or perhaps additionally, a script (hopefully bundled with\n> > postgres) which collects at least the non-query specific info and probably\n> > creates .logfile file for attachment.\n> >\n> > I assume fields would be mostly the content/questions off the SlowQuery\n> > wiki\n> > page, plus everything else asked with any frequency.\n> >\n> > There could also be \"required\" fields..\n> >\n> \n> Is there something wrong with this email group? I actually to like it. Low\n\nI meant something exactly like the bug form:\n\thttps://www.postgresql.org/account/login/?next=/account/submitbug/\n\thttps://www.postgresql.org/list/pgsql-bugs/2018-05/\n\"Entering a bug report this way causes it to be mailed to the\n<[email protected]> mailing list.\"\n\nThe goal was to continue the list but encourage problem reports to include all\nof the most commonly-needed(missing) information, and in a standard format too.\n\nBut actually I see that nowadays it requires an account, which isn't great and\nmight discourage many people from using the web form. But maybe that's better\nfor some people than requiring subscription to the list.\n\nJustin\n\n", "msg_date": "Thu, 24 May 2018 20:52:00 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: propose web form for submission of performance problems" }, { "msg_contents": "Craig James schrieb am 25.05.2018 um 03:27:\n> What would the list think of a web form for submitting problems the performance\n> list, similar to the pgsql-bugs form?\n> \n> Alternately, or perhaps additionally, a script (hopefully bundled with\n> postgres) which collects at least the non-query specific info and probably\n> creates .logfile file for attachment.\n> \n> I assume fields would be mostly the content/questions off the SlowQuery wiki\n> page, plus everything else asked with any frequency.\n> \n> There could also be \"required\" fields..\n> \n> Is there something wrong with this email group? I actually to like\n> it. Low tech, but it works. If you think such a web site would be\n> good, you might explain the benefits over a simple email list like\n> this one (which is archived and easily searchable via Google).\n> \nI don't think anything is wrong with the list, and I assume neither does \nJustin. 
\n\nBut occasionally the bug submission form is misused for troubleshooting \nperformance problems (\"Bug #42: Query is too slow\").\n\nI could imagine that a separate form for reporting performance problems, \none that asks for different pieces of information and then sends the report \nto this list (instead of -bugs), wouldn't be a bad idea.\n\nHowever, I am not sure the bug form is misused often enough to warrant that.\n\nThomas\n\n\n\n", "msg_date": "Fri, 25 May 2018 07:45:46 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: propose web form for submission of performance problems" } ]
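As a sketch of the non-query-specific information such a form or bundled script could collect automatically, the Server_Configuration wiki page cited earlier in this archive suggests roughly the following; this is an illustration, not a description of any existing tool:

    -- server version
    SELECT version();

    -- all settings changed away from their defaults
    SELECT name, current_setting(name), source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override');

Output from these two queries, together with EXPLAIN (ANALYZE, BUFFERS) of the slow query, covers most of what responders on the list currently have to ask for.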
[ { "msg_contents": "Hi all,\n\nWe have a query which is rather slow (about 10 seconds), and it looks like this:\n\nselect inventory.date, asset.name, inventory.quantity\nfrom temp.inventory\nleft outer join temp.asset on asset.id = id_asset\norder by inventory.date, asset.name\nlimit 100\n\nThe inventory table has the quantity of each asset in the inventory on each date (complete SQL to create and populate the tables with dummy data is below). The query plan looks like this (the non-parallel version is similar):\n\n[cid:[email protected]]\n\nOr in text form:\n\nLimit (cost=217591.77..217603.60 rows=100 width=32) (actual time=9122.235..9122.535 rows=100 loops=1)\n Buffers: shared hit=6645, temp read=6363 written=6364\n -> Gather Merge (cost=217591.77..790859.62 rows=4844517 width=32) (actual time=9122.232..9122.518 rows=100 loops=1)\n Workers Planned: 3\n Workers Launched: 3\n Buffers: shared hit=6645, temp read=6363 written=6364\n -> Sort (cost=216591.73..220628.83 rows=1614839 width=32) (actual time=8879.909..8880.030 rows=727 loops=4)\n Sort Key: inventory.date, asset.name\n Sort Method: external merge Disk: 50904kB\n Buffers: shared hit=27365, temp read=25943 written=25947\n -> Hash Join (cost=26.52..50077.94 rows=1614839 width=32) (actual time=0.788..722.095 rows=1251500 loops=4)\n Hash Cond: (inventory.id_asset = asset.id)\n Buffers: shared hit=27236\n -> Parallel Seq Scan on inventory (cost=0.00..29678.39 rows=1614839 width=12) (actual time=0.025..237.977 rows=1251500 loops=4)\n Buffers: shared hit=27060\n -> Hash (cost=14.01..14.01 rows=1001 width=28) (actual time=0.600..0.600 rows=1001 loops=4)\n Buckets: 1024 Batches: 1 Memory Usage: 68kB\n Buffers: shared hit=32\n -> Seq Scan on asset (cost=0.00..14.01 rows=1001 width=28) (actual time=0.026..0.279 rows=1001 loops=4)\n Buffers: shared hit=32\nPlanning time: 0.276 ms\nExecution time: 9180.144 ms\n\nI can see why it does this, but I can also imagine a potential optimisation, which would enable it to select far fewer rows from the inventory table.\n\nAs we are joining to the primary key of the asset table, we know that this join will not add extra rows to the output. Every output row comes from a distinct inventory row. Therefore only 100 rows of the inventory table are required. But which ones?\n\nIf we selected exactly 100 rows from inventory, ordered by date, then all of the dates that were complete (every row for that date returned) would be in the output. However, if there is a date which is incomplete (we haven't emitted all the inventory records for that date), then it's possible that we would need some records that we haven't emitted yet. This can only be known after joining to the asset table and sorting this last group by both date and asset name.\n\nBut we do know that there can only be 0 or 1 incomplete groups: either the last group (by date) is incomplete, if the LIMIT cut it off in mid-group, or its end coincided with the LIMIT boundary and it is complete. 
As long as we continue selecting rows from this table until we exhaust the prefix of the overall SORT which applies to it (in this case, just the date) then it will be complete, and we will have all the inventory rows that can appear in the output (because no possible values of columns that appear later in the sort order can cause any rows with different dates to appear in the output).\n\nI'm imagining something like a sort-limit-finish node, which sorts its input and then returns at least the limit number of rows, but keeps returning rows until it exhausts the last sort prefix that it read.\n\nThis node could be created from an ordinary SORT and LIMIT pair:\n\nSORT + LIMIT -> SORT-LIMIT-FINISH + SORT + LIMIT\n\nAnd then pushed down through either a left join, or an inner join on a foreign key, when the right side is unique, and no columns from the right side appear in WHERE conditions, nor anywhere in the sort order except at the end. This sort column suffix would be removed by pushing the node down. The resulting query plan would then look something like:\n\nIndex Scan on inventory\nSORT-LIMIT-FINISH(sort=[inventory.date], limit=100) (pushed down through the join to asset)\nSeq Scan on asset\nHash Left Join (inventory.id_asset = asset.id)\nSort (inventory.date, asset.name)\nLimit (100)\n\nAnd would emit only about 100-1000 inventory rows from the index scan.\n\nDoes this sound correct, reasonable and potentially interesting to Postgres developers?\n\nSQL to reproduce:\n\ncreate schema temp;\ncreate table temp.asset (\n id serial primary key,\n name text\n);\ninsert into temp.asset (name) select 'Thing ' || random()::text as name from generate_series(0, 1000) as s;\ncreate table temp.inventory (\n date date,\n id_asset int,\n quantity int,\n primary key (date, id_asset),\n CONSTRAINT id_asset_fk FOREIGN KEY (id_asset) REFERENCES temp.asset (id) MATCH SIMPLE\n);\ninsert into temp.inventory (date, id_asset, quantity)\nselect current_date - days, asset.id, random() from temp.asset, generate_series(0, 5000) as days;\n\nThanks, Chris.", "msg_date": "Wed, 30 May 2018 15:46:40 +0000", "msg_from": "Christopher Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Possible optimisation: push down SORT and LIMIT nodes" }, { "msg_contents": "On Wed, May 30, 2018 at 03:46:40PM +0000, Christopher Wilson wrote:\n> We have a query which is rather slow (about 10 seconds), and it looks like this:\n> \n> The inventory table has the quantity of each asset in the inventory on each\n> date (complete SQL to create and populate the tables with dummy data is\n> below). The query plan looks like this (the non-parallel version is similar):\n\nHi,\n\nThanks for including the test case.\n\n> Limit (cost=217591.77..217603.60 rows=100 width=32) (actual time=9122.235..9122.535 rows=100 loops=1)\n...\n> -> Sort (cost=216591.73..220628.83 rows=1614839 width=32) (actual time=8879.909..8880.030 rows=727 loops=4)\n> Sort Key: inventory.date, asset.name\n> Sort Method: external merge Disk: 50904kB\n> Buffers: shared hit=27365, temp read=25943 written=25947\n\nYep, the sort is expensive and largely wasted..\n\n> I'm imagining something like a sort-limit-finish node, which sorts its input\n> and then returns at least the limit number of rows, but keeps returning rows\n> until it exhausts the last sort prefix that it read.\n[...]\n> Does this sound correct, reasonable and potentially interesting to Postgres\n> developers?\n\nI think your analysis may be (?) 
unecessarily specific to your specific problem\nquery.\n\nFor diagnostic purposes, I was able to to vastly improve the query runtime with\na CTE (WITH):\n\n|postgres=# explain(analyze,buffers) WITH x AS (SELECT inventory.date, asset.name, inventory.quantity FROM temp.inventory LEFT JOIN temp.asset ON asset.id=id_asset LIMIT 99) SELECT * FROM x ORDER BY date, name;\n| Sort (cost=1090.60..1090.85 rows=99 width=40) (actual time=3.764..3.988 rows=99 loops=1)\n| Sort Key: x.date, x.name\n| Sort Method: quicksort Memory: 32kB\n| Buffers: shared hit=298\n| CTE x\n| -> Limit (cost=0.28..889.32 rows=99 width=31) (actual time=0.063..2.385 rows=99 loops=1)\n| Buffers: shared hit=298\n| -> Nested Loop Left Join (cost=0.28..44955006.99 rows=5006001 width=31) (actual time=0.058..1.940 rows=99 loops=1)\n| Buffers: shared hit=298\n| -> Seq Scan on inventory (cost=0.00..5033061.00 rows=5006001 width=12) (actual time=0.020..0.275 rows=99 loops=1)\n| Buffers: shared hit=1\n| -> Index Scan using asset_pkey on asset (cost=0.28..7.98 rows=1 width=27) (actual time=0.008..0.008 rows=1 loops=99)\n| Index Cond: (id = inventory.id_asset)\n| Buffers: shared hit=297\n| -> CTE Scan on x (cost=0.00..198.00 rows=99 width=40) (actual time=0.073..2.989 rows=99 loops=1)\n| Buffers: shared hit=298\n| Planning time: 0.327 ms\n| Execution time: 4.260 ms\n\nIt's not clear to me if there's some reason why the planner couldn't know to\nuse a similar plan (sort-limit-... rather than limit-sort-...)\n\nJustin\n\n", "msg_date": "Wed, 30 May 2018 14:02:31 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible optimisation: push down SORT and LIMIT nodes" }, { "msg_contents": "On Wed, May 30, 2018 at 02:02:31PM -0500, Justin Pryzby wrote:\n> For diagnostic purposes, I was able to to vastly improve the query runtime with\n> a CTE (WITH):\n\nI realized this was broken as soon as I sent it (for the essential reason of\ndiscarding rows before having sorted them). Sorry for the noise.\n\nJustin\n\n", "msg_date": "Wed, 30 May 2018 14:09:44 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible optimisation: push down SORT and LIMIT nodes" } ]
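For this particular schema the bounded behaviour described above can also be reproduced by hand, because the part of the sort key that applies to inventory is just the date, and the primary key on (date, id_asset) orders by it. The sketch below is only an illustration of the idea against the posted test tables, not a general replacement for the proposed planner node:

    -- the scalar subquery finds the date of the 100th-smallest inventory row;
    -- every row of the final LIMIT 100 must have a date no later than that,
    -- and keeping *all* rows of the cutoff date finishes the last group
    SELECT i.date, a.name, i.quantity
    FROM temp.inventory i
    LEFT JOIN temp.asset a ON a.id = i.id_asset
    WHERE i.date <= (SELECT date
                     FROM temp.inventory
                     ORDER BY date
                     OFFSET 99 LIMIT 1)
    ORDER BY i.date, a.name
    LIMIT 100;

This keeps the sort input down to roughly 100-1000 rows instead of about 5 million. If the table held fewer than 100 rows the subquery would return NULL and the outer query nothing, which is one of the corner cases a real sort-limit-finish node would have to handle.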
[ { "msg_contents": "Hi,\n\nI have a query with a strange query plan.\n\nThis query is roughly searching for sales, and convert them with a currency\nrate. As currency rate changes from time to time, table contains the\ncurrency, the company, the rate, the start date of availability of this\nrate and the end date of availability.\n\nThe join is done using :\n left join currency_rate cr on (cr.currency_id = pp.currency_id and\n cr.company_id = s.company_id and\n cr.date_start <= coalesce(s.date_order, now()) and\n (cr.date_end is null or cr.date_end > coalesce(s.date_order,\nnow())))\n\nThe tricky part is the date range on the currency rate, which is not an\nequality.\n\nthe query plan shows:\n-> Sort (cost=120.13..124.22 rows=1637 width=56) (actual\ntime=14.300..72084.758 rows=308054684 loops=1)\n Sort Key: cr.currency_id, cr.company_id\n Sort Method: quicksort Memory: 172kB\n -> CTE Scan on currency_rate cr\n(cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576\nloops=1)\n\nThere's 2 challenging things :\n- planner estimates 1637 rows, and get 300 million lines\n- sorting is generating lines\n\nlater in the query plan, you find the join:\n-> Merge Left Join (cost=341056.75..351344.76 rows=1165112 width=224)\n(actual time=9792.635..269120.409 rows=1170055 loops=1)\n Merge Cond: ((pp.currency_id = cr.currency_id) AND\n(s.company_id = cr.company_id))\n Join Filter: ((cr.date_start <=\nCOALESCE((s.date_order)::timestamp with time zone, now())) AND\n((cr.date_end IS NULL) OR (cr.date_end > COALESCE((s.date_order)::timestamp\nwith time zone, now()))))\n Rows Removed by Join Filter: 307266434\n\nIt seems the join deletes all the generated million lines, which is correct.\n\nMy question is then , is there a better way to join a table to another\nusing a date range, knowing that there's no overlap between date ranges?\nShould we generate a virtual table with rates for all dates, and joining\nusing an equality?\n\nFor now, the more currency rates, the slowest the query. There's not that\nmuch currency rates (1k in this case), as you can only have one rate per\nday per currency.\n\nHave a nice day,\n\nNicolas.\n\nHi,I have a query with a strange query plan.This query is roughly searching for sales, and convert them with a currency rate. 
As currency rate changes from time to time, table contains the currency, the company, the rate, the start date of availability of this rate and the end date of availability.The join is done using :    left join currency_rate cr on (cr.currency_id = pp.currency_id and          cr.company_id = s.company_id and          cr.date_start <= coalesce(s.date_order, now()) and         (cr.date_end is null or cr.date_end > coalesce(s.date_order, now())))The tricky part is the date range on the currency rate, which is not an equality.the query plan shows:->  Sort  (cost=120.13..124.22 rows=1637 width=56) (actual time=14.300..72084.758 rows=308054684 loops=1)                          Sort Key: cr.currency_id, cr.company_id                          Sort Method: quicksort  Memory: 172kB                          ->  CTE Scan on currency_rate cr  (cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576 loops=1)There's 2 challenging things :- planner estimates 1637 rows, and get 300 million lines- sorting is generating lineslater in the query plan, you find the join:->  Merge Left Join  (cost=341056.75..351344.76 rows=1165112 width=224) (actual time=9792.635..269120.409 rows=1170055 loops=1)                    Merge Cond: ((pp.currency_id = cr.currency_id) AND (s.company_id = cr.company_id))                    Join Filter: ((cr.date_start <= COALESCE((s.date_order)::timestamp with time zone, now())) AND ((cr.date_end IS NULL) OR (cr.date_end > COALESCE((s.date_order)::timestamp with time zone, now()))))                    Rows Removed by Join Filter: 307266434It seems the join deletes all the generated million lines, which is correct.My question is then , is there a better way to join a table to another using a date range, knowing that there's no overlap between date ranges?Should we generate a virtual table with rates for all dates, and joining using an equality?For now, the more currency rates, the slowest the query. There's not that much currency rates (1k in this case), as you can only have one rate per day per currency.Have a nice day,Nicolas.", "msg_date": "Thu, 31 May 2018 13:22:57 +0200", "msg_from": "Nicolas Seinlet <[email protected]>", "msg_from_op": true, "msg_subject": "Sort is generating rows" }, { "msg_contents": "On Thu, May 31, 2018 at 7:22 AM, Nicolas Seinlet <[email protected]>\nwrote:\n\n> Hi,\n>\n> I have a query with a strange query plan.\n>\n> This query is roughly searching for sales, and convert them with a\n> currency rate. 
As currency rate changes from time to time, table contains\n> the currency, the company, the rate, the start date of availability of this\n> rate and the end date of availability.\n>\n> The join is done using :\n> left join currency_rate cr on (cr.currency_id = pp.currency_id and\n> cr.company_id = s.company_id and\n> cr.date_start <= coalesce(s.date_order, now()) and\n> (cr.date_end is null or cr.date_end > coalesce(s.date_order,\n> now())))\n>\n> The tricky part is the date range on the currency rate, which is not an\n> equality.\n>\n> the query plan shows:\n> -> Sort (cost=120.13..124.22 rows=1637 width=56) (actual\n> time=14.300..72084.758 rows=308054684 loops=1)\n> Sort Key: cr.currency_id, cr.company_id\n> Sort Method: quicksort Memory: 172kB\n> -> CTE Scan on currency_rate cr\n> (cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576\n> loops=1)\n>\n> There's 2 challenging things :\n> - planner estimates 1637 rows, and get 300 million lines\n> - sorting is generating lines\n>\n\nThese are both explained by the same thing. The sort is feeding into a\nmerge join. For every row in the other node which have the same value of\nthe scan keys, the entire section of this sort with those same keys gets\nscanned again. The repeated scanning gets counted in the actual row count,\nbut isn't counted in the expected row count, or the actual row count of the\nthing feeding into the sort (the CTE)\n\n\n>\n>\nFor now, the more currency rates, the slowest the query. There's not that\n> much currency rates (1k in this case), as you can only have one rate per\n> day per currency.\n>\n\nIf it is only per currency per day, then why is company_id present? In any\ncase, you might be better off listing the rates per day, rather than as a\nrange, and then doing an equality join.\n\nCheers,\n\nJeff\n\nOn Thu, May 31, 2018 at 7:22 AM, Nicolas Seinlet <[email protected]> wrote:Hi,I have a query with a strange query plan.This query is roughly searching for sales, and convert them with a currency rate. As currency rate changes from time to time, table contains the currency, the company, the rate, the start date of availability of this rate and the end date of availability.The join is done using :    left join currency_rate cr on (cr.currency_id = pp.currency_id and          cr.company_id = s.company_id and          cr.date_start <= coalesce(s.date_order, now()) and         (cr.date_end is null or cr.date_end > coalesce(s.date_order, now())))The tricky part is the date range on the currency rate, which is not an equality.the query plan shows:->  Sort  (cost=120.13..124.22 rows=1637 width=56) (actual time=14.300..72084.758 rows=308054684 loops=1)                          Sort Key: cr.currency_id, cr.company_id                          Sort Method: quicksort  Memory: 172kB                          ->  CTE Scan on currency_rate cr  (cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576 loops=1)There's 2 challenging things :- planner estimates 1637 rows, and get 300 million lines- sorting is generating linesThese are both explained by the same thing.  The sort is feeding into a merge join.  For every row in the other node which have the same value of the scan keys, the entire section of this sort with those same keys gets scanned again.  The repeated scanning gets counted in the actual row count, but isn't counted in the expected row count, or the actual row count of the thing feeding into the sort (the CTE)  For now, the more currency rates, the slowest the query. 
There's not that much currency rates (1k in this case), as you can only have one rate per day per currency.If it is only per currency per day, then why is company_id present? In any case, you might be better off listing the rates per day, rather than as a range, and then doing an equality join.Cheers,Jeff", "msg_date": "Thu, 31 May 2018 09:10:56 -0400", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort is generating rows" }, { "msg_contents": "2018-05-31 15:10 GMT+02:00 Jeff Janes <[email protected]>:\n\n> On Thu, May 31, 2018 at 7:22 AM, Nicolas Seinlet <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> I have a query with a strange query plan.\n>>\n>> This query is roughly searching for sales, and convert them with a\n>> currency rate. As currency rate changes from time to time, table contains\n>> the currency, the company, the rate, the start date of availability of this\n>> rate and the end date of availability.\n>>\n>> The join is done using :\n>> left join currency_rate cr on (cr.currency_id = pp.currency_id and\n>> cr.company_id = s.company_id and\n>> cr.date_start <= coalesce(s.date_order, now()) and\n>> (cr.date_end is null or cr.date_end > coalesce(s.date_order,\n>> now())))\n>>\n>> The tricky part is the date range on the currency rate, which is not an\n>> equality.\n>>\n>> the query plan shows:\n>> -> Sort (cost=120.13..124.22 rows=1637 width=56) (actual\n>> time=14.300..72084.758 rows=308054684 loops=1)\n>> Sort Key: cr.currency_id, cr.company_id\n>> Sort Method: quicksort Memory: 172kB\n>> -> CTE Scan on currency_rate cr\n>> (cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576\n>> loops=1)\n>>\n>> There's 2 challenging things :\n>> - planner estimates 1637 rows, and get 300 million lines\n>> - sorting is generating lines\n>>\n>\n> These are both explained by the same thing. The sort is feeding into a\n> merge join. For every row in the other node which have the same value of\n> the scan keys, the entire section of this sort with those same keys gets\n> scanned again. The repeated scanning gets counted in the actual row count,\n> but isn't counted in the expected row count, or the actual row count of the\n> thing feeding into the sort (the CTE)\n>\n>\n>>\n>>\n> For now, the more currency rates, the slowest the query. There's not that\n>> much currency rates (1k in this case), as you can only have one rate per\n>> day per currency.\n>>\n>\n> If it is only per currency per day, then why is company_id present? In any\n> case, you might be better off listing the rates per day, rather than as a\n> range, and then doing an equality join.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi,\n\nThanks for the answer.\n\nYou're right, company_id is present, so you can have one rate per day per\ncurrency per company. I've tried to simplify the question without modifying\nthe query plan, so I didn't talk about it even if it's present. I will now\ntry to generate a virtual table of rates per dates.\n\n2018-05-31 15:10 GMT+02:00 Jeff Janes <[email protected]>:On Thu, May 31, 2018 at 7:22 AM, Nicolas Seinlet <[email protected]> wrote:Hi,I have a query with a strange query plan.This query is roughly searching for sales, and convert them with a currency rate. 
As currency rate changes from time to time, table contains the currency, the company, the rate, the start date of availability of this rate and the end date of availability.The join is done using :    left join currency_rate cr on (cr.currency_id = pp.currency_id and          cr.company_id = s.company_id and          cr.date_start <= coalesce(s.date_order, now()) and         (cr.date_end is null or cr.date_end > coalesce(s.date_order, now())))The tricky part is the date range on the currency rate, which is not an equality.the query plan shows:->  Sort  (cost=120.13..124.22 rows=1637 width=56) (actual time=14.300..72084.758 rows=308054684 loops=1)                          Sort Key: cr.currency_id, cr.company_id                          Sort Method: quicksort  Memory: 172kB                          ->  CTE Scan on currency_rate cr  (cost=0.00..32.74 rows=1637 width=56) (actual time=1.403..13.610 rows=1576 loops=1)There's 2 challenging things :- planner estimates 1637 rows, and get 300 million lines- sorting is generating linesThese are both explained by the same thing.  The sort is feeding into a merge join.  For every row in the other node which have the same value of the scan keys, the entire section of this sort with those same keys gets scanned again.  The repeated scanning gets counted in the actual row count, but isn't counted in the expected row count, or the actual row count of the thing feeding into the sort (the CTE)  For now, the more currency rates, the slowest the query. There's not that much currency rates (1k in this case), as you can only have one rate per day per currency.If it is only per currency per day, then why is company_id present? In any case, you might be better off listing the rates per day, rather than as a range, and then doing an equality join.Cheers,Jeff\nHi,Thanks for the answer. You're right, company_id is present, so you can have one rate per day per currency per company. I've tried to simplify the question without modifying the query plan, so I didn't talk about it even if it's present. I will now try to generate a virtual table of rates per dates.", "msg_date": "Thu, 31 May 2018 15:16:17 +0200", "msg_from": "Nicolas Seinlet <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort is generating rows" } ]
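A sketch of the per-day listing suggested above, so that the range condition turns into an equality join. The source relation is whatever produces currency_rate in the original query (it appears as a CTE in the plan); the name of the rate column, the exclusive interpretation of date_end, and capping open-ended ranges at the current day are all assumptions here:

    CREATE TABLE currency_rate_daily AS
    SELECT cr.currency_id,
           cr.company_id,
           d::date AS rate_date,
           cr.rate
      FROM currency_rate cr
     CROSS JOIN LATERAL generate_series(
               cr.date_start::timestamp,
               COALESCE(cr.date_end::timestamp - interval '1 day', now()::timestamp),
               interval '1 day') AS d;

    -- the join in the original query then becomes an equality join:
    --   left join currency_rate_daily crd
    --     on crd.currency_id = pp.currency_id
    --    and crd.company_id  = s.company_id
    --    and crd.rate_date   = coalesce(s.date_order, now())::date

With roughly a thousand rates this stays a small table, but it has to be refreshed whenever a new rate arrives or a new day begins, which is the main trade-off of materialising the ranges.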
[ { "msg_contents": "Trying to optimize the Elapsed Time (ET) of this query. Currently, it is \nhovering around 3 hrs.\n\nRunning a 'vaccum analyse' had no effect on ET. Even forcing an \n'indexonly' scan by disabling 'enable_seqscan', still around the 3 hrs.\n\nThe table is around 4.6B rows,\n\n  explain select cit_id, cl_value from reflink.citation_locators where \ncl_value = '1507617681' and vclf_number = 1 ;\n                                        QUERY PLAN\n-----------------------------------------------------------------------------------------\n  Bitmap Heap Scan on citation_locators (cost=5066559.01..50999084.79 \nrows=133 width=23)\n    Recheck Cond: (vclf_number = 1)\n    Filter: (cl_value = '1507617681'::text)\n    ->  Bitmap Index Scan on cl_indx_fk02 (cost=0.00..5066558.97 \nrows=493984719 width=0)\n          Index Cond: (vclf_number = 1)\n(5 rows)\n\nreflink.citation_locators\n Table\"reflink.citation_locators\"\n Column | Type | Modifiers | Storage | Stats target | Description\n------------------+--------------------------+-----------+----------+--------------+-------------\n cl_id | bigint | notnull | plain | |\n cl_value | text | notnull | extended | |\n vclf_number | integer | notnull | plain | |\n cit_id | bigint | notnull | plain | |\n cl_date_created | timestamp with time zone | notnull | plain | |\n cl_date_modified | timestamp with time zone | | plain | |\nIndexes:\n \"cl_pk\" PRIMARY KEY, btree (cl_id)\n \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n \"cl_indx_fk01\" btree (cit_id)\n \"cl_indx_fk02\" btree (vclf_number)\nForeign-key constraints:\n \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID\"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)\n\n\n\n\n\n\n\nTrying to optimize the Elapsed Time (ET) of this query.\n Currently, it is hovering around 3 hrs.\nRunning a 'vaccum analyse' had no effect on ET. Even forcing an\n 'indexonly' scan by disabling 'enable_seqscan', still around the 3\n hrs. 
\n\nThe table is around 4.6B rows, \n\n explain select cit_id, cl_value from reflink.citation_locators\n where cl_value = '1507617681' and vclf_number = 1 ;\n                                        QUERY\n PLAN                                        \n-----------------------------------------------------------------------------------------\n  Bitmap Heap Scan on citation_locators \n (cost=5066559.01..50999084.79 rows=133 width=23)\n    Recheck Cond: (vclf_number = 1)\n    Filter: (cl_value = '1507617681'::text)\n    ->  Bitmap Index Scan on cl_indx_fk02 \n (cost=0.00..5066558.97 rows=493984719 width=0)\n          Index Cond: (vclf_number = 1)\n (5 rows)\n\n\nreflink.citation_locators \n Table \"reflink.citation_locators\"\n Column | Type | Modifiers | Storage | Stats target | Description \n------------------+--------------------------+-----------+----------+--------------+-------------\n cl_id | bigint | not null | plain | | \n cl_value | text | not null | extended | | \n vclf_number | integer | not null | plain | | \n cit_id | bigint | not null | plain | | \n cl_date_created | timestamp with time zone | not null | plain | | \n cl_date_modified | timestamp with time zone | | plain | | \nIndexes:\n \"cl_pk\" PRIMARY KEY, btree (cl_id)\n \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n \"cl_indx_fk01\" btree (cit_id)\n \"cl_indx_fk02\" btree (vclf_number)\nForeign-key constraints:\n \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID \"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)", "msg_date": "Tue, 5 Jun 2018 10:17:08 -0400", "msg_from": "Fred Habash <[email protected]>", "msg_from_op": true, "msg_subject": "Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan" }, { "msg_contents": "Hello\nTry using index btree(vclf_number, cl_value) instead of btree (vclf_number).\n\nregards, Sergei\n\n", "msg_date": "Tue, 05 Jun 2018 17:26:33 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan" }, { "msg_contents": "Probably the cardinality of \"vclf_number\" is really bad. So the scan on that index is returning many million or billion rows and then you get a recheck which takes semi-forever. So you need an index on cl_value or both vclf_number and cl_value. If you know some properties of the values actually stored inside of those that will help. \n\nMatthew Hall\n\n> On Jun 5, 2018, at 7:17 AM, Fred Habash <[email protected]> wrote:\n> \n> Trying to optimize the Elapsed Time (ET) of this query. Currently, it is hovering around 3 hrs.\n> \n> Running a 'vaccum analyse' had no effect on ET. Even forcing an 'indexonly' scan by disabling 'enable_seqscan', still around the 3 hrs. 
\n> The table is around 4.6B rows, \n> explain select cit_id, cl_value from reflink.citation_locators where cl_value = '1507617681' and vclf_number = 1 ;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------\n> Bitmap Heap Scan on citation_locators (cost=5066559.01..50999084.79 rows=133 width=23)\n> Recheck Cond: (vclf_number = 1)\n> Filter: (cl_value = '1507617681'::text)\n> -> Bitmap Index Scan on cl_indx_fk02 (cost=0.00..5066558.97 rows=493984719 width=0)\n> Index Cond: (vclf_number = 1)\n> (5 rows)\n> \n> reflink.citation_locators \n> Table \"reflink.citation_locators\"\n> Column | Type | Modifiers | Storage | Stats target | Description \n> ------------------+--------------------------+-----------+----------+--------------+-------------\n> cl_id | bigint | not null | plain | | \n> cl_value | text | not null | extended | | \n> vclf_number | integer | not null | plain | | \n> cit_id | bigint | not null | plain | | \n> cl_date_created | timestamp with time zone | not null | plain | | \n> cl_date_modified | timestamp with time zone | | plain | | \n> Indexes:\n> \"cl_pk\" PRIMARY KEY, btree (cl_id)\n> \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n> \"cl_indx_fk01\" btree (cit_id)\n> \"cl_indx_fk02\" btree (vclf_number)\n> Foreign-key constraints:\n> \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID \"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)\n\nProbably the cardinality of \"vclf_number\" is really bad. So the scan on that index is returning many million or billion rows and then you get a recheck which takes semi-forever. So you need an index on cl_value or both vclf_number and cl_value. If you know some properties of the values actually stored inside of those that will help. Matthew HallOn Jun 5, 2018, at 7:17 AM, Fred Habash <[email protected]> wrote:\n\nTrying to optimize the Elapsed Time (ET) of this query.\n Currently, it is hovering around 3 hrs.\nRunning a 'vaccum analyse' had no effect on ET. Even forcing an\n 'indexonly' scan by disabling 'enable_seqscan', still around the 3\n hrs. 
\n\nThe table is around 4.6B rows, \n\n explain select cit_id, cl_value from reflink.citation_locators\n where cl_value = '1507617681' and vclf_number = 1 ;\n                                        QUERY\n PLAN                                        \n-----------------------------------------------------------------------------------------\n  Bitmap Heap Scan on citation_locators \n (cost=5066559.01..50999084.79 rows=133 width=23)\n    Recheck Cond: (vclf_number = 1)\n    Filter: (cl_value = '1507617681'::text)\n    ->  Bitmap Index Scan on cl_indx_fk02 \n (cost=0.00..5066558.97 rows=493984719 width=0)\n          Index Cond: (vclf_number = 1)\n (5 rows)\n\n\nreflink.citation_locators \n Table \"reflink.citation_locators\"\n Column | Type | Modifiers | Storage | Stats target | Description \n------------------+--------------------------+-----------+----------+--------------+-------------\n cl_id | bigint | not null | plain | | \n cl_value | text | not null | extended | | \n vclf_number | integer | not null | plain | | \n cit_id | bigint | not null | plain | | \n cl_date_created | timestamp with time zone | not null | plain | | \n cl_date_modified | timestamp with time zone | | plain | | \nIndexes:\n \"cl_pk\" PRIMARY KEY, btree (cl_id)\n \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n \"cl_indx_fk01\" btree (cit_id)\n \"cl_indx_fk02\" btree (vclf_number)\nForeign-key constraints:\n \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID \"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)", "msg_date": "Tue, 5 Jun 2018 07:42:20 -0700", "msg_from": "Matthew Hall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan" }, { "msg_contents": "Fred Habash <[email protected]> writes:\n> Indexes:\n> \"cl_pk\" PRIMARY KEY, btree (cl_id)\n> \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n> \"cl_indx_fk01\" btree (cit_id)\n> \"cl_indx_fk02\" btree (vclf_number)\n\nThis is pretty inefficient index design. Your query is slow because the\nonly selective condition it has is on cl_value, but you have no index\nthat can be searched with cl_value as the leading condition. Moreover,\nyou have two indexes that can be searched with cit_id as the leading\ncondition, which is just wasteful. I'd try reorganizing the cl_cnst_uk01\nindex as (cl_value, vclf_number, cit_id) so that it can serve for\nsearches on cl_value, while still enforcing the same uniqueness condition.\nThis particular column ordering would also let your query use the\nvclf_number constraint as a secondary search condition, which would\nhelp even more.\n\nThere's relevant advice about index design in the manual,\n\nhttps://www.postgresql.org/docs/current/static/indexes.html\n\n(see 11.3 and 11.5 particularly)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 05 Jun 2018 10:42:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan" }, { "msg_contents": "Indexes are being redone as per these insights. Appreciate the great support.\n\n----------------\nThank you\n\nFrom: Matthew Hall\nSent: Tuesday, June 5, 2018 10:42 AM\nTo: Fred Habash\nCc: [email protected]\nSubject: Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan\n\nProbably the cardinality of \"vclf_number\" is really bad. 
So the scan on that index is returning many million or billion rows and then you get a recheck which takes semi-forever. So you need an index on cl_value or both vclf_number and cl_value. If you know some properties of the values actually stored inside of those that will help. \nMatthew Hall\n\nOn Jun 5, 2018, at 7:17 AM, Fred Habash <[email protected]> wrote:\nTrying to optimize the Elapsed Time (ET) of this query. Currently, it is hovering around 3 hrs.\nRunning a 'vaccum analyse' had no effect on ET. Even forcing an 'indexonly' scan by disabling 'enable_seqscan', still around the 3 hrs. \nThe table is around 4.6B rows, \n explain select cit_id, cl_value from reflink.citation_locators where cl_value = '1507617681' and vclf_number = 1 ;\n                                       QUERY PLAN                                        \n-----------------------------------------------------------------------------------------\n Bitmap Heap Scan on citation_locators  (cost=5066559.01..50999084.79 rows=133 width=23)\n   Recheck Cond: (vclf_number = 1)\n   Filter: (cl_value = '1507617681'::text)\n   ->  Bitmap Index Scan on cl_indx_fk02  (cost=0.00..5066558.97 rows=493984719 width=0)\n         Index Cond: (vclf_number = 1)\n(5 rows)\nreflink.citation_locators \n Table \"reflink.citation_locators\"\n Column | Type | Modifiers | Storage | Stats target | Description \n------------------+--------------------------+-----------+----------+--------------+-------------\n cl_id | bigint | not null | plain | | \n cl_value | text | not null | extended | | \n vclf_number | integer | not null | plain | | \n cit_id | bigint | not null | plain | | \n cl_date_created | timestamp with time zone | not null | plain | | \n cl_date_modified | timestamp with time zone | | plain | | \nIndexes:\n \"cl_pk\" PRIMARY KEY, btree (cl_id)\n \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)\n \"cl_indx_fk01\" btree (cit_id)\n \"cl_indx_fk02\" btree (vclf_number)\nForeign-key constraints:\n \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID \"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)\n\n\nIndexes are being redone as per these insights. Appreciate the great support. ----------------Thank you From: Matthew HallSent: Tuesday, June 5, 2018 10:42 AMTo: Fred HabashCc: [email protected]: Re: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan Probably the cardinality of \"vclf_number\" is really bad. So the scan on that index is returning many million or billion rows and then you get a recheck which takes semi-forever. So you need an index on cl_value or both vclf_number and cl_value. If you know some properties of the values actually stored inside of those that will help. Matthew HallOn Jun 5, 2018, at 7:17 AM, Fred Habash <[email protected]> wrote:Trying to optimize the Elapsed Time (ET) of this query. Currently, it is hovering around 3 hrs.Running a 'vaccum analyse' had no effect on ET. Even forcing an 'indexonly' scan by disabling 'enable_seqscan', still around the 3 hrs. 
The table is around 4.6B rows,  explain select cit_id, cl_value from reflink.citation_locators where cl_value = '1507617681' and vclf_number = 1 ;                                       QUERY PLAN                                        ----------------------------------------------------------------------------------------- Bitmap Heap Scan on citation_locators  (cost=5066559.01..50999084.79 rows=133 width=23)   Recheck Cond: (vclf_number = 1)   Filter: (cl_value = '1507617681'::text)   ->  Bitmap Index Scan on cl_indx_fk02  (cost=0.00..5066558.97 rows=493984719 width=0)         Index Cond: (vclf_number = 1)(5 rows)reflink.citation_locators                                 Table \"reflink.citation_locators\"      Column      |           Type           | Modifiers | Storage  | Stats target | Description ------------------+--------------------------+-----------+----------+--------------+------------- cl_id            | bigint                   | not null  | plain    |              |  cl_value         | text                     | not null  | extended |              |  vclf_number      | integer                  | not null  | plain    |              |  cit_id           | bigint                   | not null  | plain    |              |  cl_date_created  | timestamp with time zone | not null  | plain    |              |  cl_date_modified | timestamp with time zone |           | plain    |              | Indexes:    \"cl_pk\" PRIMARY KEY, btree (cl_id)    \"cl_cnst_uk01\" UNIQUE CONSTRAINT, btree (cit_id, vclf_number, cl_value)    \"cl_indx_fk01\" btree (cit_id)    \"cl_indx_fk02\" btree (vclf_number)Foreign-key constraints:    \"cl_cnst_fk01\" FOREIGN KEY (cit_id) REFERENCES citations(cit_id) NOT VALID    \"cl_cnst_fk02\" FOREIGN KEY (vclf_number) REFERENCES valid_cit_locator_fields(vclf_number)", "msg_date": "Tue, 5 Jun 2018 13:18:10 -0400", "msg_from": "Fd Habash <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Simple Query Elapsed Time ~ 3 hrs Using Bitmap Index/Heap Scan" } ]
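As a rough illustration of the index reorganization suggested in the thread above, the reordered unique index could be built along the following lines. This is only a sketch: the column order (cl_value, vclf_number, cit_id) comes from the thread itself, but the use of CONCURRENTLY, the temporary index name cl_cnst_uk01_new, and the constraint swap are assumptions about how one might apply the change to a live 4.6B-row table, not steps confirmed by the posters.

    -- Build the reordered unique index with the selective column (cl_value) first;
    -- CONCURRENTLY avoids holding an exclusive lock for the whole build.
    -- Assumption: table and column names are exactly as shown in the thread.
    CREATE UNIQUE INDEX CONCURRENTLY cl_cnst_uk01_new
        ON reflink.citation_locators (cl_value, vclf_number, cit_id);

    -- Swap the new index in as the unique constraint, replacing the old
    -- (cit_id, vclf_number, cl_value) ordering.
    ALTER TABLE reflink.citation_locators DROP CONSTRAINT cl_cnst_uk01;
    ALTER TABLE reflink.citation_locators
        ADD CONSTRAINT cl_cnst_uk01 UNIQUE USING INDEX cl_cnst_uk01_new;

    -- The single-column index on vclf_number is now redundant for this query
    -- and could be dropped once nothing else depends on it.
    DROP INDEX CONCURRENTLY reflink.cl_indx_fk02;

With cl_value leading, the equality condition on cl_value can drive the index scan and vclf_number = 1 can be applied as a second index column, instead of bitmap-scanning roughly 494 million entries for vclf_number = 1 and filtering the heap afterwards, which is what the posted plan shows.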
[ { "msg_contents": "In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to run pg_upgrade without stopping old cluster using the check flag.\n\npg_upgrade -b <old-bin> -B <new-bin> -d <old-data> -D <new-data> -p 5432 -P 5434 -r -v -k -c\n\nNote the \"c\" flag at the end\n\nHowever pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not stop the old cluster, the upgrade failed:\n\n***\nThere seems to be a postmaster servicing the old cluster.\nPlease shutdown that postmaster and try again.\nFailure, exiting\n\nIs this expected?\n\nAlso, when I stopped the old cluster and ran pg_upgrade with \"-c\" flag, the file global/pg_control got renamed to global/pg_control.old. The \"-c\" flag never renamed anything in the old cluster in older pg_upgrade\n\n\n\n\n\n\n\n\n\n\n\nIn older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to run pg_upgrade without stopping old cluster using the check flag.\n\n \npg_upgrade -b <old-bin> -B <new-bin> -d <old-data> -D <new-data> -p 5432 -P 5434 -r -v -k -c      \n\n \nNote the “c” flag at the end\n \nHowever pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not stop the old cluster, the upgrade failed:\n \n***\nThere seems to be a postmaster servicing the old cluster.\nPlease shutdown that postmaster and try again.\n\nFailure, exiting\n\n \nIs this expected?\n \nAlso, when I stopped the old cluster and ran pg_upgrade with “-c” flag, the file global/pg_control got renamed to global/pg_control.old. The “-c” flag never renamed anything in the old cluster\n in older pg_upgrade", "msg_date": "Tue, 12 Jun 2018 20:34:56 +0000", "msg_from": "Murthy Nunna <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrade 10.2" }, { "msg_contents": "On 06/12/2018 01:34 PM, Murthy Nunna wrote:\n> In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to run \n> pg_upgrade without stopping old cluster using the check flag.\n> \n> pg_upgrade -b <old-bin> -B <new-bin> -d <old-data> -D <new-data> -p 5432 \n> -P 5434 -r -v -k -c\n> \n> Note the �c� flag at the end\n\nI take the below to it mean it should work:\n\nhttps://www.postgresql.org/docs/10/static/pgupgrade.html\n> \"You can use pg_upgrade --check to perform only the checks, even if the \nold server is still running. pg_upgrade --check will also outline any \nmanual adjustments you will need to make after the upgrade. If you are \ngoing to be using link mode, you should use the --link option with \n--check to enable link-mode-specific checks.\"\n\nMight want to try without -k to see what happens.\n\nMore comments below.\n\n> However pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not stop \n> the old cluster, the upgrade failed:\n> \n> ***\n> \n> There seems to be a postmaster servicing the old cluster.\n> \n> Please shutdown that postmaster and try again.\n> \n> Failure, exiting\n> \n> Is this expected?\n> \n> Also, when I stopped the old cluster and ran pg_upgrade with �-c� flag, \n> the file global/pg_control got renamed to global/pg_control.old. The \n> �-c� flag never renamed anything in the old cluster in older pg_upgrade\n\nAgain seems related to -k:\n\n\"\nIf you ran pg_upgrade without --link or did not start the new server, \nthe old cluster was not modified except that, if linking started, a .old \nsuffix was appended to $PGDATA/global/pg_control. 
To reuse the old \ncluster, possibly remove the .old suffix from $PGDATA/global/pg_control; \nyou can then restart the old cluster.\n\"\n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n", "msg_date": "Tue, 12 Jun 2018 13:47:30 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "Thanks Adrian.\nI removed \"-k\" flag. But still got same error.\n\nThere seems to be a postmaster servicing the old cluster.\nPlease shutdown that postmaster and try again.\nFailure, exiting\n\n-----Original Message-----\nFrom: Adrian Klaver [mailto:[email protected]] \nSent: Tuesday, June 12, 2018 3:48 PM\nTo: Murthy Nunna <[email protected]>; [email protected]; [email protected]; [email protected]\nSubject: Re: pg_upgrade 10.2\n\nOn 06/12/2018 01:34 PM, Murthy Nunna wrote:\n> In older versions of pg_upgrade (e.g from 9.2 to 9.3), I was able to \n> run pg_upgrade without stopping old cluster using the check flag.\n> \n> pg_upgrade -b <old-bin> -B <new-bin> -d <old-data> -D <new-data> -p \n> 5432 -P 5434 -r -v -k -c\n> \n> Note the \"c\" flag at the end\n\nI take the below to it mean it should work:\n\nhttps://urldefense.proofpoint.com/v2/url?u=https-3A__www.postgresql.org_docs_10_static_pgupgrade.html&d=DwID-g&c=gRgGjJ3BkIsb5y6s49QqsA&r=0wrsmPzpZSao0v32yCcG2Q&m=g2e1NMngBLIcEgi5UjlCHkyJ5zK1Su-vsaRw0Y9N0Dc&s=PDVmjA_uW6cJvV4lWR8vgkiArplzgd5Rs4taLA6ZY6Q&e=\n> \"You can use pg_upgrade --check to perform only the checks, even if \n> the\nold server is still running. pg_upgrade --check will also outline any manual adjustments you will need to make after the upgrade. If you are going to be using link mode, you should use the --link option with --check to enable link-mode-specific checks.\"\n\nMight want to try without -k to see what happens.\n\nMore comments below.\n\n> However pg_upgrade in 10 (I tried from 9.3 to 10.4), when I did not \n> stop the old cluster, the upgrade failed:\n> \n> ***\n> \n> There seems to be a postmaster servicing the old cluster.\n> \n> Please shutdown that postmaster and try again.\n> \n> Failure, exiting\n> \n> Is this expected?\n> \n> Also, when I stopped the old cluster and ran pg_upgrade with \"-c\" \n> flag, the file global/pg_control got renamed to global/pg_control.old. \n> The \"-c\" flag never renamed anything in the old cluster in older \n> pg_upgrade\n\nAgain seems related to -k:\n\n\"\nIf you ran pg_upgrade without --link or did not start the new server, the old cluster was not modified except that, if linking started, a .old suffix was appended to $PGDATA/global/pg_control. To reuse the old cluster, possibly remove the .old suffix from $PGDATA/global/pg_control; you can then restart the old cluster.\n\"\n> \n\n\n--\nAdrian Klaver\[email protected]\n\n", "msg_date": "Tue, 12 Jun 2018 20:58:25 +0000", "msg_from": "Murthy Nunna <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_upgrade 10.2" }, { "msg_contents": "On 06/12/2018 01:58 PM, Murthy Nunna wrote:\n> Thanks Adrian.\n> I removed \"-k\" flag. 
But still got same error.\n> \n> There seems to be a postmaster servicing the old cluster.\n> Please shutdown that postmaster and try again.\n> Failure, exiting\n> \n\nWell according to the code in pg_upgrade.c that message should not be \nreached when the check option is specified:\n\nif (!user_opts.check)\n pg_fatal(\"There seems to be a postmaster servicing the old \ncluster.\\n\"\n \"Please shutdown that postmaster and try again.\\n\");\nelse\n *live_check = true;\n\nCan we see the actual command you ran?\n\n\n-- \nAdrian Klaver\[email protected]\n\n", "msg_date": "Tue, 12 Jun 2018 14:13:27 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "pg_upgrade -V\r\npg_upgrade (PostgreSQL) 10.4\r\n\r\npg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 -P 5434 -r -v –c\r\n\r\n\r\n-----Original Message-----\r\nFrom: Adrian Klaver [mailto:[email protected]] \r\nSent: Tuesday, June 12, 2018 4:13 PM\r\nTo: Murthy Nunna <[email protected]>; [email protected]; [email protected]; [email protected]\r\nSubject: Re: pg_upgrade 10.2\r\n\r\nOn 06/12/2018 01:58 PM, Murthy Nunna wrote:\r\n> Thanks Adrian.\r\n> I removed \"-k\" flag. But still got same error.\r\n> \r\n> There seems to be a postmaster servicing the old cluster.\r\n> Please shutdown that postmaster and try again.\r\n> Failure, exiting\r\n> \r\n\r\nWell according to the code in pg_upgrade.c that message should not be reached when the check option is specified:\r\n\r\nif (!user_opts.check)\r\n pg_fatal(\"There seems to be a postmaster servicing the old cluster.\\n\"\r\n \"Please shutdown that postmaster and try again.\\n\"); else\r\n *live_check = true;\r\n\r\nCan we see the actual command you ran?\r\n\r\n\r\n--\r\nAdrian Klaver\r\[email protected]\r\n", "msg_date": "Tue, 12 Jun 2018 21:18:07 +0000", "msg_from": "Murthy Nunna <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_upgrade 10.2" }, { "msg_contents": "On 06/12/2018 02:18 PM, Murthy Nunna wrote:\n> pg_upgrade -V\n> pg_upgrade (PostgreSQL) 10.4\n> \n> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 -P 5434 -r -v –c\n> \n>\n\nLooks good to me. The only thing that stands out is that in your \noriginal post you had:\n\n-p 5432\n\nand above you have:\n\n-p 5433\n\nNot sure if that makes a difference.\n\nThe only suggestion I have at the moment is to move -c from the end of \nthe line to somewhere earlier on the chance that there is a bug that is \nnot finding it when it's at the end.\n\n\n-- \nAdrian Klaver\[email protected]\n\n", "msg_date": "Tue, 12 Jun 2018 14:34:52 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "Hi Adrian,\r\n\r\nPort numbers are correct.\r\n\r\nI moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT complaining about old cluster running. However, I am running into a different problem.\r\n\r\nNew cluster database \"ifb_prd_last\" is not empty\r\nFailure, exiting\r\n\r\nNote: ifb_prd_last is not new cluster. It is actually old cluster.\r\n\r\nIs this possibly because in one of my earlier attempts where I shutdown old cluster and ran pg_upgrade with -c at the end of the command line. 
I think -c was ignored and my cluster has been upgraded in that attempt. Is that possible?\r\n\r\n\r\n-----Original Message-----\r\nFrom: Adrian Klaver [mailto:[email protected]] \r\nSent: Tuesday, June 12, 2018 4:35 PM\r\nTo: Murthy Nunna <[email protected]>; [email protected]; [email protected]; [email protected]\r\nSubject: Re: pg_upgrade 10.2\r\n\r\nOn 06/12/2018 02:18 PM, Murthy Nunna wrote:\r\n> pg_upgrade -V\r\n> pg_upgrade (PostgreSQL) 10.4\r\n> \r\n> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B \r\n> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d \r\n> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 \r\n> -P 5434 -r -v –c\r\n> \r\n>\r\n\r\nLooks good to me. The only thing that stands out is that in your original post you had:\r\n\r\n-p 5432\r\n\r\nand above you have:\r\n\r\n-p 5433\r\n\r\nNot sure if that makes a difference.\r\n\r\nThe only suggestion I have at the moment is to move -c from the end of the line to somewhere earlier on the chance that there is a bug that is not finding it when it's at the end.\r\n\r\n\r\n--\r\nAdrian Klaver\r\[email protected]\r\n", "msg_date": "Tue, 12 Jun 2018 21:49:32 +0000", "msg_from": "Murthy Nunna <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_upgrade 10.2" }, { "msg_contents": "Murthy Nunna <[email protected]> writes:\n\n> Hi Adrian,\n>\n> Port numbers are correct.\n>\n> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT complaining about old cluster running. However, I am running into a different problem.\n\nI noted in your earlier message the final -c... the dash was not a\nregular 7bit ascii char but some UTF or whatever dash char.\n\nI wonder if that's what you fed your shell and it caused a silent\nparsing issue, eg the -c dropped.\n\nBut of course email clients wrap and mangle text like that all sorts of\nfun ways so lordy knows just what you originally sent :-)\n\nFWIW\n\n\n>\n> New cluster database \"ifb_prd_last\" is not empty\n> Failure, exiting\n>\n> Note: ifb_prd_last is not new cluster. It is actually old cluster.\n>\n> Is this possibly because in one of my earlier attempts where I\n> shutdown old cluster and ran pg_upgrade with -c at the end of the\n> command line. I think -c was ignored and my cluster has been upgraded\n> in that attempt. Is that possible?\n>\n>\n> -----Original Message-----\n> From: Adrian Klaver [mailto:[email protected]] \n> Sent: Tuesday, June 12, 2018 4:35 PM\n> To: Murthy Nunna <[email protected]>; [email protected]; [email protected]; [email protected]\n> Subject: Re: pg_upgrade 10.2\n>\n> On 06/12/2018 02:18 PM, Murthy Nunna wrote:\n>> pg_upgrade -V\n>> pg_upgrade (PostgreSQL) 10.4\n>> \n>> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B \n>> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d \n>> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 \n>> -P 5434 -r -v –c\n>> \n>>\n>\n> Looks good to me. 
The only thing that stands out is that in your original post you had:\n>\n> -p 5432\n>\n> and above you have:\n>\n> -p 5433\n>\n> Not sure if that makes a difference.\n>\n> The only suggestion I have at the moment is to move -c from the end of the line to somewhere earlier on the chance that there is a bug that is not finding it when it's at the end.\n>\n>\n> --\n> Adrian Klaver\n> [email protected]\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n", "msg_date": "Tue, 12 Jun 2018 18:24:10 -0500", "msg_from": "Jerry Sievers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "Jerry,\r\n\r\nOMG, I think you nailed this... I know what I did. I cut/pasted the command from an e-mail... I have seen this issue before with stuff not related to postgres. But then those commands failed in syntax error and then you know what you did wrong.\r\n\r\nSimilarly, I expect pg_upgrade to throw an error if it finds something it doesn't understand instead of ignoring and causing damage. Don't you agree?\r\n\r\nThanks for pointing that out. I will redo my upgrade.\r\n\r\n-r -v -k -c\t--- good flags no utf8\r\n-r -v -k –c\t--- bad flags....\r\n\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: Jerry Sievers [mailto:[email protected]] \r\nSent: Tuesday, June 12, 2018 6:24 PM\r\nTo: Murthy Nunna <[email protected]>\r\nCc: Adrian Klaver <[email protected]>; [email protected]; [email protected]; [email protected]\r\nSubject: Re: pg_upgrade 10.2\r\n\r\nMurthy Nunna <[email protected]> writes:\r\n\r\n> Hi Adrian,\r\n>\r\n> Port numbers are correct.\r\n>\r\n> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT complaining about old cluster running. However, I am running into a different problem.\r\n\r\nI noted in your earlier message the final -c... the dash was not a regular 7bit ascii char but some UTF or whatever dash char.\r\n\r\nI wonder if that's what you fed your shell and it caused a silent parsing issue, eg the -c dropped.\r\n\r\nBut of course email clients wrap and mangle text like that all sorts of fun ways so lordy knows just what you originally sent :-)\r\n\r\nFWIW\r\n\r\n\r\n>\r\n> New cluster database \"ifb_prd_last\" is not empty Failure, exiting\r\n>\r\n> Note: ifb_prd_last is not new cluster. It is actually old cluster.\r\n>\r\n> Is this possibly because in one of my earlier attempts where I \r\n> shutdown old cluster and ran pg_upgrade with -c at the end of the \r\n> command line. I think -c was ignored and my cluster has been upgraded \r\n> in that attempt. Is that possible?\r\n>\r\n>\r\n> -----Original Message-----\r\n> From: Adrian Klaver [mailto:[email protected]]\r\n> Sent: Tuesday, June 12, 2018 4:35 PM\r\n> To: Murthy Nunna <[email protected]>; \r\n> [email protected]; [email protected]; \r\n> [email protected]\r\n> Subject: Re: pg_upgrade 10.2\r\n>\r\n> On 06/12/2018 02:18 PM, Murthy Nunna wrote:\r\n>> pg_upgrade -V\r\n>> pg_upgrade (PostgreSQL) 10.4\r\n>> \r\n>> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B \r\n>> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d \r\n>> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 \r\n>> -P 5434 -r -v –c\r\n>> \r\n>>\r\n>\r\n> Looks good to me. 
The only thing that stands out is that in your original post you had:\r\n>\r\n> -p 5432\r\n>\r\n> and above you have:\r\n>\r\n> -p 5433\r\n>\r\n> Not sure if that makes a difference.\r\n>\r\n> The only suggestion I have at the moment is to move -c from the end of the line to somewhere earlier on the chance that there is a bug that is not finding it when it's at the end.\r\n>\r\n>\r\n> --\r\n> Adrian Klaver\r\n> [email protected]\r\n\r\n--\r\nJerry Sievers\r\nPostgres DBA/Development Consulting\r\ne: [email protected]\r\np: 312.241.7800\r\n", "msg_date": "Tue, 12 Jun 2018 23:41:22 +0000", "msg_from": "Murthy Nunna <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_upgrade 10.2" }, { "msg_contents": "On 06/12/2018 02:49 PM, Murthy Nunna wrote:\n> Hi Adrian,\n> \n> Port numbers are correct.\n> \n> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT complaining about old cluster running. However, I am running into a different problem.\n> \n> New cluster database \"ifb_prd_last\" is not empty\n> Failure, exiting\n> \n> Note: ifb_prd_last is not new cluster. It is actually old cluster.\n> \n> Is this possibly because in one of my earlier attempts where I shutdown old cluster and ran pg_upgrade with -c at the end of the command line. I think -c was ignored and my cluster has been upgraded in that attempt. Is that possible?\n\nI don't so because it exited before it got the upgrading part.\n\n\n-- \nAdrian Klaver\[email protected]\n\n", "msg_date": "Tue, 12 Jun 2018 16:50:43 -0700", "msg_from": "Adrian Klaver <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "Murthy Nunna <[email protected]> writes:\n\n> Jerry,\n>\n> OMG, I think you nailed this... I know what I did. I cut/pasted the\n> command from an e-mail... I have seen this issue before with stuff not\n\nOh! I suggest you lose that habit ASAP before ever issuing another\ncommand to anything :-)\n\n> related to postgres. But then those commands failed in syntax error\n> and then you know what you did wrong.\n>\n> Similarly, I expect pg_upgrade to throw an error if it finds something it doesn't understand instead of ignoring and causing damage. Don't you agree?\n\nWell, pg_upgrade might never have seen your $silly-dash since possibly\nyour shell or terminal driver swallowed it.\n\n>\n> Thanks for pointing that out. I will redo my upgrade.\n>\n> -r -v -k -c\t--- good flags no utf8\n> -r -v -k –c\t--- bad flags....\n>\n>\n>\n>\n> -----Original Message-----\n> From: Jerry Sievers [mailto:[email protected]] \n> Sent: Tuesday, June 12, 2018 6:24 PM\n> To: Murthy Nunna <[email protected]>\n> Cc: Adrian Klaver <[email protected]>; [email protected]; [email protected]; [email protected]\n> Subject: Re: pg_upgrade 10.2\n>\n> Murthy Nunna <[email protected]> writes:\n>\n>> Hi Adrian,\n>>\n>> Port numbers are correct.\n>>\n>> I moved the position of -c (-p 5433 -P 5434 -c -r -v). Now it is NOT complaining about old cluster running. However, I am running into a different problem.\n>\n> I noted in your earlier message the final -c... 
the dash was not a regular 7bit ascii char but some UTF or whatever dash char.\n>\n> I wonder if that's what you fed your shell and it caused a silent parsing issue, eg the -c dropped.\n>\n> But of course email clients wrap and mangle text like that all sorts of fun ways so lordy knows just what you originally sent :-)\n>\n> FWIW\n>\n>\n>>\n>> New cluster database \"ifb_prd_last\" is not empty Failure, exiting\n>>\n>> Note: ifb_prd_last is not new cluster. It is actually old cluster.\n>>\n>> Is this possibly because in one of my earlier attempts where I \n>> shutdown old cluster and ran pg_upgrade with -c at the end of the \n>> command line. I think -c was ignored and my cluster has been upgraded \n>> in that attempt. Is that possible?\n>>\n>>\n>> -----Original Message-----\n>> From: Adrian Klaver [mailto:[email protected]]\n>> Sent: Tuesday, June 12, 2018 4:35 PM\n>> To: Murthy Nunna <[email protected]>; \n>> [email protected]; [email protected]; \n>> [email protected]\n>> Subject: Re: pg_upgrade 10.2\n>>\n>> On 06/12/2018 02:18 PM, Murthy Nunna wrote:\n>>> pg_upgrade -V\n>>> pg_upgrade (PostgreSQL) 10.4\n>>> \n>>> pg_upgrade -b /fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin -B \n>>> /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin -d \n>>> /data0/pgdata/ifb_prd_last -D /data0/pgdata/ifb_prd_last_104 -p 5433 \n>>> -P 5434 -r -v –c\n>>> \n>>>\n>>\n>> Looks good to me. The only thing that stands out is that in your original post you had:\n>>\n>> -p 5432\n>>\n>> and above you have:\n>>\n>> -p 5433\n>>\n>> Not sure if that makes a difference.\n>>\n>> The only suggestion I have at the moment is to move -c from the end of the line to somewhere earlier on the chance that there is a bug that is not finding it when it's at the end.\n>>\n>>\n>> --\n>> Adrian Klaver\n>> [email protected]\n>\n> --\n> Jerry Sievers\n> Postgres DBA/Development Consulting\n> e: [email protected]\n> p: 312.241.7800\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n", "msg_date": "Tue, 12 Jun 2018 18:56:30 -0500", "msg_from": "Jerry Sievers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" }, { "msg_contents": "Murthy Nunna <[email protected]> writes:\n\n<snip>\n\nBTW, this message was and remained cross-posted to 3 groups which is\nconsidered bad style around here and I was negligent too in the previous\nreply which also went out to all of them.\n\nPlease take note.\n\nThank\n\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n", "msg_date": "Tue, 12 Jun 2018 18:59:21 -0500", "msg_from": "Jerry Sievers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade 10.2" } ]
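For reference, a check-mode invocation typed with plain ASCII hyphens (the root cause identified in the thread above) might look like the sketch below. It reuses the binary paths, data directories, and ports quoted in the thread; the long option names and running the version-10 pg_upgrade binary by full path are assumptions made here only so that a mangled or silently dropped flag is easier to spot, not something the original poster ran verbatim.

    # Run only the compatibility checks; with --check present the old cluster
    # may stay running, and --link adds the link-mode-specific checks.
    /fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin/pg_upgrade \
        --old-bindir=/fnal/ups/prd/postgres/v9_3_14_x64/Linux-2-6/bin \
        --new-bindir=/fnal/ups/prd/postgres/v10_4_x64/Linux-2-6/bin \
        --old-datadir=/data0/pgdata/ifb_prd_last \
        --new-datadir=/data0/pgdata/ifb_prd_last_104 \
        --old-port=5433 --new-port=5434 \
        --retain --verbose --link --check

Pasting command lines out of e-mail is what introduced the non-ASCII dash in the first place, so retyping the flags (or inspecting the pasted text with something like od -c) is the safer habit.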